sunchao opened this issue 1 year ago
@sunchao that sounds very exciting 🚀
Hi @sunchao, are you still moving it forward?
@doki23 yes, I'm still planning to. I have a POC branch for this work: https://github.com/sunchao/arrow-datafusion/commits/vectorized-hash-table/ but I haven't had time to get back to it yet. Will try to do so soon.
I'd like to explore this ticket further. Should I start from this branch?
@viirya / @sunchao / @andygrove / @kazuyukitanimura , this idea came up in the context of https://github.com/apache/datafusion/pull/11943 -- I wonder if you have any updates or code we might be able to look at
@Lordworms @alamb the code is still in my branch: https://github.com/sunchao/arrow-datafusion/commits/vectorized-hash-table/ and hasn't been updated for a while, so I'm not sure how easily it can be rebased.
One missing piece of the work is eliminating the `convert_columns` and `convert_rows` cost. Last time I measured, it took ~30% of the total time. For this, I think we need to extend the Array builder to add a `get` API too, so that while we are building the group values (which are represented as Arrow vectors) we can still look at the elements in various operations such as equality checks.
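As a rough illustration (hypothetical, not an existing API as a whole): a `get`-style accessor on a builder could let the equality check read back an already-appended group value directly. The sketch below assumes `values_slice` on `PrimitiveBuilder` as the underlying accessor and ignores nulls and non-primitive types.

```rust
use arrow::array::{Int64Array, PrimitiveBuilder};
use arrow::datatypes::Int64Type;

/// Compare input row `probe_row` against group `group_idx`, whose key is
/// still sitting in the (in-progress) group-values builder.
fn group_key_equal(
    builder: &PrimitiveBuilder<Int64Type>,
    group_idx: usize,
    input: &Int64Array,
    probe_row: usize,
) -> bool {
    // Read the already-appended group value straight out of the builder
    // instead of round-tripping through convert_columns / convert_rows.
    builder.values_slice()[group_idx] == input.value(probe_row)
}
```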
I tried the code in the branch but didn't see a performance improvement (it was slightly slower); I think the `RowConverter` is what matters.
Yes, that matches my last measurement too. At my last company we had an internal implementation that bypassed the row-column conversion, and it showed good improvements over the current implementation.
Is your feature request related to a problem or challenge?
Currently DF uses a `RawTable` from hashbrown as the hash table implementation in group aggregations. This requires first converting the input batches into a row format, and then processing the converted rows one by one, doing hash probing, equality checks, and creating new entries accordingly.

A different approach, as discussed in the Photon paper (and also used by DuckDB), is to adopt a vectorized design for the hash table, so that each of the above steps can be vectorized. In addition, this allows us to skip the row conversion and operate directly on the input batches.
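For illustration, here is a rough sketch (not code from the draft; `VectorizedHashTable` and `keys_equal` are placeholder names) of how probing can be organized batch-at-a-time, with each step running over all rows and a list of unresolved rows carried between passes:

```rust
use arrow::array::ArrayRef;

struct VectorizedHashTable {
    /// bucket -> group index; u64::MAX marks an empty bucket. The group key
    /// columns themselves would be stored separately in columnar form.
    buckets: Vec<u64>,
    num_groups: u64,
}

impl VectorizedHashTable {
    /// Map every row of the batch to a group index, step by step over the
    /// whole batch instead of row at a time.
    fn probe_batch(&mut self, hashes: &[u64], keys: &[ArrayRef], group_ids: &mut Vec<u64>) {
        let mask = self.buckets.len() - 1; // table size is a power of two

        // Step 1: compute the candidate bucket for every row.
        let mut slots: Vec<usize> = hashes.iter().map(|h| (*h as usize) & mask).collect();

        // Rows that still need to be resolved (collisions).
        let mut pending: Vec<usize> = (0..hashes.len()).collect();
        group_ids.clear();
        group_ids.resize(hashes.len(), u64::MAX);

        while !pending.is_empty() {
            // Steps 2/3: for all pending rows, either match an existing group,
            // create a new one in an empty bucket, or advance to the next
            // bucket (linear probing) and retry in the next pass.
            pending.retain(|&row| {
                let slot = slots[row];
                match self.buckets[slot] {
                    u64::MAX => {
                        self.buckets[slot] = self.num_groups;
                        group_ids[row] = self.num_groups;
                        self.num_groups += 1;
                        false // resolved: new group created
                    }
                    group if keys_equal(keys, row, group) => {
                        group_ids[row] = group;
                        false // resolved: existing group matched
                    }
                    _ => {
                        slots[row] = (slot + 1) & mask;
                        true // collision: keep for the next pass
                    }
                }
            });
        }
    }
}

/// Placeholder for the vectorized equality check against the stored group keys.
fn keys_equal(_keys: &[ArrayRef], _row: usize, _group: u64) -> bool {
    true
}
```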
Internally we have a draft implementation for this, and it has shown considerable improvements (even without SIMD, although with a lot of `unsafe`s 😂) over the current hash aggregation approach, so we'd like to contribute it to DF and see if it can help improve its aggregation performance even further.

Describe the solution you'd like
Design & implement a separate vectorized hash table. It can either replace the existing `RawTable` inside `GroupValuesRows`, or we can have a separate `GroupValues` implementation.
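As a sketch of the second option, a new implementation could take roughly the following shape. The trait below is a local stand-in mirroring DataFusion's `GroupValues` interface (see datafusion/physical-plan/src/aggregates/group_values) so the example is self-contained; a real implementation would implement the actual trait, whose exact methods should be checked against the current source.

```rust
use arrow::array::ArrayRef;

/// Local stand-in for the interface a new group-values implementation must
/// satisfy (the real trait lives in datafusion-physical-plan).
trait GroupValues {
    /// Map each input row to a group index, creating new groups as needed.
    fn intern(&mut self, cols: &[ArrayRef], groups: &mut Vec<usize>);
    /// Emit the accumulated group key columns as Arrow arrays.
    fn emit(&mut self) -> Vec<ArrayRef>;
    /// Number of groups seen so far.
    fn len(&self) -> usize;
}

/// Vectorized alternative to `GroupValuesRows`: keys stay columnar, so no
/// row conversion is needed on either the input or the output path.
struct GroupValuesVectorized {
    // vectorized hash table state: buckets plus per-column group key builders
    num_groups: usize,
}

impl GroupValues for GroupValuesVectorized {
    fn intern(&mut self, cols: &[ArrayRef], groups: &mut Vec<usize>) {
        // 1. hash every row of `cols` in one pass
        // 2. vectorized probe / insert into the hash table
        // 3. record the resolved group index for each row
        let num_rows = cols.first().map(|c| c.len()).unwrap_or(0);
        groups.clear();
        groups.resize(num_rows, 0); // placeholder: real probing goes here
    }

    fn emit(&mut self) -> Vec<ArrayRef> {
        // finish the per-column builders and hand the group keys back
        vec![]
    }

    fn len(&self) -> usize {
        self.num_groups
    }
}
```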
Describe alternatives you've considered

Not to implement this.
Additional context
No response