Cache-Efficient Top-k Aggregation over High Cardinality Large Datasets
- Tarique Siddiqui,
- Vivek Narasayya,
- Marius Dumitru,
- Surajit Chaudhuri
Proceedings of the VLDB Endowment (VLDB 2024)
Top-k aggregation queries are widely used in data analytics for summarizing and identifying important groups in large amounts of data. These queries are usually processed by first computing exact aggregates for all groups and then selecting the groups with the top-k aggregate values. However, this approach can be inefficient for high-cardinality large datasets, where intermediate results may not fit within the local caches of multi-core processors, leading to excessive data movement. To address this problem, we have developed Zippy, a new cache-conscious aggregation framework that leverages skew in the data distribution to minimize data movement. Zippy combines cache-resident data structures with an adaptive multi-pass algorithm that quickly identifies candidate groups during processing and performs exact aggregations only for those groups. Non-candidate groups are pruned cheaply using efficient hashing and partitioning techniques, without performing exact aggregations. We develop techniques to improve robustness against adversarial data distributions, and we optimize the framework to incrementally reuse computation for rolling (or paginated) top-k aggregate queries. Our extensive evaluation on both real-world and synthetic datasets demonstrates that Zippy achieves a median speedup of more than 3x for monotonic aggregation functions across typical ranges of k (e.g., 1 to 100), and 1.4x for non-monotonic functions, compared with state-of-the-art cache-conscious aggregation techniques.
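To make the pruning idea concrete, here is a minimal two-pass sketch for top-k SUM over non-negative values (SUM is monotonic in that setting, so partition sums upper-bound the groups they contain). The fixed-size exact table, the partition count, and the two-pass structure are illustrative assumptions for exposition, not Zippy's actual design:

```python
from collections import defaultdict
import heapq

def topk_sum(rows, k, table_cap=64, num_parts=16):
    """Illustrative two-pass top-k SUM with candidate pruning.

    rows: a re-iterable sequence of (group, value) pairs, values >= 0.
    NOTE: a toy sketch, not the paper's algorithm.
    """
    # Pass 1: keep exact sums for the first `table_cap` distinct groups
    # (a stand-in for a small cache-resident table that skewed data keeps
    # hot); fold all remaining rows into coarse partition sums.
    exact = {}
    part_sum = [0.0] * num_parts
    for g, v in rows:
        if g in exact:
            exact[g] += v
        elif len(exact) < table_cap:
            exact[g] = v
        else:
            part_sum[hash(g) % num_parts] += v

    # A partition's sum upper-bounds every untabled group inside it.
    # Given k exactly-aggregated groups, the k-th largest exact sum is a
    # safe threshold: a partition whose bound falls below it cannot
    # contain a top-k group, so it is pruned without exact aggregation.
    if len(exact) >= k:
        threshold = heapq.nlargest(k, exact.values())[-1]
    else:
        threshold = float("-inf")  # too few candidates: prune nothing
    survivors = {p for p in range(num_parts) if part_sum[p] >= threshold}

    # Pass 2: exact aggregation only for untabled groups that hash into a
    # surviving partition, then merge with the tabled candidates.
    rest = defaultdict(float)
    for g, v in rows:
        if g not in exact and hash(g) % num_parts in survivors:
            rest[g] += v

    merged = list(exact.items()) + list(rest.items())
    return heapq.nlargest(k, merged, key=lambda kv: kv[1])
```

On skewed data most rows hit the small exact table, the pruning threshold is tight, and few partitions survive to the second pass; the multi-pass, adaptive machinery described in the paper generalizes well beyond this two-pass toy.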