
Memory is coupled to group by cardinality, even when the aggregate output is truncated by a limit clause #7191

Closed

@avantgardnerio

Description

Is your feature request related to a problem or challenge?

Currently, there is only one aggregation implementation: GroupedHashAggregateStream. It does a lovely job, but it allocates memory for every unique group-by value.

For large datasets, this can cause OOM errors, even when the very next operation is a sort by max(x) with a limit of y, which discards all but y groups.
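For concreteness, here is a minimal sketch of the kind of query that hits this, using DataFusion's Rust API (the traces table and its columns are hypothetical): the hash aggregate must hold state for every distinct trace_id even though only 10 rows survive the limit.

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    // Hypothetical high-cardinality input: millions of distinct trace_ids.
    ctx.register_csv("traces", "traces.csv", CsvReadOptions::new()).await?;

    // A Top-K aggregate: GroupedHashAggregateStream buffers state for
    // every distinct trace_id, even though only 10 groups survive LIMIT.
    let df = ctx
        .sql(
            "SELECT trace_id, MAX(timestamp) AS max_ts \
             FROM traces \
             GROUP BY trace_id \
             ORDER BY max_ts DESC \
             LIMIT 10",
        )
        .await?;
    df.show().await?;
    Ok(())
}
```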

Describe the solution you'd like

I would like to add a GroupedAggregateStream based on a PriorityQueue of grouped values that can be used instead of GroupedHashAggregateStream under the specific conditions above, so that Top-K queries work even on datasets whose group-by cardinality exceeds available memory.
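As a rough illustration of the idea (plain Rust, not the actual DataFusion operator; TopKMax and every name below are hypothetical): a bounded accumulator only ever tracks k candidate groups, so memory stays O(k) no matter how many distinct groups flow through. The proposal's PriorityQueue would locate the current worst candidate in O(log k); the map-plus-linear-scan below trades that for brevity.

```rust
use std::collections::HashMap;

/// Sketch of a bounded top-K accumulator for a query like
/// `SELECT id, MAX(v) FROM t GROUP BY id ORDER BY MAX(v) DESC LIMIT k`.
struct TopKMax {
    k: usize,
    best: HashMap<String, i64>, // at most k tracked groups
}

impl TopKMax {
    fn new(k: usize) -> Self {
        Self { k, best: HashMap::new() }
    }

    fn update(&mut self, group: &str, value: i64) {
        if let Some(v) = self.best.get_mut(group) {
            // Group already tracked: fold into the running max.
            *v = (*v).max(value);
        } else if self.best.len() < self.k {
            // Still under capacity: admit the group unconditionally.
            self.best.insert(group.to_string(), value);
        } else {
            // At capacity: admit the new group only if it beats the
            // current worst candidate, which it then evicts.
            let worst = self
                .best
                .iter()
                .min_by_key(|(_, v)| **v)
                .map(|(g, v)| (g.clone(), *v));
            if let Some((worst_group, worst_val)) = worst {
                if value > worst_val {
                    self.best.remove(&worst_group);
                    self.best.insert(group.to_string(), value);
                }
            }
        }
    }

    /// Emit the surviving groups, largest max first.
    fn finish(self) -> Vec<(String, i64)> {
        let mut rows: Vec<_> = self.best.into_iter().collect();
        rows.sort_by(|a, b| b.1.cmp(&a.1));
        rows
    }
}
```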

Describe alternatives you've considered

A more generalized implementation where we:

  1. sort by group_val
  2. aggregate by group_val, emitting rows in a stream as the aggregate for each group is computed
  3. feed that into a (new) generalized TopKExec node that is only responsible for doing the top K operation

Unfortunately, I'm told that, despite being more general, this approach will still OOM in our case.
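To make step 2 concrete, here is a minimal sketch (illustrative names, not the DataFusion API) of emitting one aggregated row per group from an input already sorted by group key, holding only a single group's state at a time; the emitted rows would then feed the hypothetical TopKExec from step 3.

```rust
/// Given (group, value) rows sorted by group, compute MAX(value) per group,
/// flushing each finished group as soon as the key changes.
fn aggregate_sorted(rows: impl Iterator<Item = (String, i64)>) -> Vec<(String, i64)> {
    let mut out = Vec::new();
    let mut current: Option<(String, i64)> = None;

    for (group, value) in rows {
        match &mut current {
            Some((g, max)) if *g == group => {
                // Same group: fold the value into the running max.
                *max = (*max).max(value);
            }
            _ => {
                // Group boundary: flush the finished group downstream
                // (into a Vec here; in the real plan, it would stream
                // into the top-K operator).
                if let Some(done) = current.take() {
                    out.push(done);
                }
                current = Some((group, value));
            }
        }
    }
    if let Some(done) = current {
        out.push(done);
    }
    out
}
```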

Additional context

Please see the following similar (but not identical) tickets for related Top-K issues:

  1. Top-K query optimization in sort uses substantial memory  #7149
  2. Improve Memory usage + performance with large numbers of groups / High Cardinality Aggregates #6937
  3. Improve aggregate performance with specialized groups accumulator for single string group by #7064
  4. Optimize "per partition" top-k : ROW_NUMBER < 5 / TopK #6899
