Open andygrove opened 2 weeks ago
Good finding. I think this kind of optimization should be in the Spark optimizer instead.
I remember Spark SQL has a corresponding optimization rule, but I'm not sure why it doesn't take effect for this query.
Related to this, it would be nice if we could improve the metrics for CometHashAggregate to show the time for evaluating the aggregate input expressions. I am not sure how much work that would be though.
> Related to this, it would be nice if we could improve the metrics for CometHashAggregate to show the time for evaluating the aggregate input expressions. I am not sure how much work that would be though.
Sounds good. It should be added to the DataFusion hash aggregate operator.
> Good finding. I think this kind of optimization should be in the Spark optimizer instead.
It would make sense for Spark to add this, but I think that it could also be beneficial for DataFusion to support this as a physical optimizer rule.
Spark has it, but not at the plan level; instead, it is done as part of code generation: https://github.com/apache/spark/blob/v3.5.3/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala#L1064-L1098C1
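To illustrate what a plan-level rule could do, here is a minimal sketch of detecting a shared subexpression across aggregate inputs. This is not DataFusion's API; the `Col`/`Lit`/`BinOp` expression types and the detection function are hypothetical names used only for illustration.

```python
# Hypothetical expression tree types (not DataFusion's actual API).
from dataclasses import dataclass

@dataclass(frozen=True)
class Col:
    name: str

@dataclass(frozen=True)
class Lit:
    value: float

@dataclass(frozen=True)
class BinOp:
    op: str
    left: object
    right: object

def collect_counts(expr, counts):
    """Count occurrences of every subexpression by structural equality."""
    counts[expr] = counts.get(expr, 0) + 1
    if isinstance(expr, BinOp):
        collect_counts(expr.left, counts)
        collect_counts(expr.right, counts)

def common_subexprs(exprs):
    """Return non-trivial subexpressions that appear more than once."""
    counts = {}
    for e in exprs:
        collect_counts(e, counts)
    return {e for e, n in counts.items() if n > 1 and isinstance(e, BinOp)}

# The two aggregate input expressions from TPC-H q1:
disc_price = BinOp("*", Col("l_extendedprice"),
                   BinOp("-", Lit(1), Col("l_discount")))
charge = BinOp("*", disc_price, BinOp("+", Lit(1), Col("l_tax")))

# disc_price is detected as shared, so a rule could project it once
# and let both aggregates reuse the projected column.
shared = common_subexprs([disc_price, charge])
```

A real optimizer rule would then insert a projection computing each shared expression once below the aggregate, but the detection step above is the core of it.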
What is the problem the feature request solves?
When running TPC-H q1 in Spark/Comet, the expression
l_extendedprice#21 * (1 - l_discount#22)
appears twice in the query and currently gets evaluated twice. This could be optimized out so that it is only evaluated once. I was able to test this by manually rewriting the query.

Original Query
Optimized Query
Timings (Original)
Timings (Optimized)
Spark UI (Original)
Spark UI (Optimized)
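The manual rewrite described above amounts to projecting the shared expression once in a subquery so each aggregate reuses it. The following sketch shows the two query shapes on toy data, using SQLite only to check that they agree; the column subset and rows are illustrative, not the real TPC-H schema, and this is not necessarily the exact rewrite used for the timings.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lineitem (
    l_returnflag TEXT, l_extendedprice REAL, l_discount REAL, l_tax REAL)""")
conn.executemany("INSERT INTO lineitem VALUES (?, ?, ?, ?)",
                 [("A", 100.0, 0.05, 0.02), ("A", 200.0, 0.10, 0.04),
                  ("N", 150.0, 0.00, 0.06)])

# Original shape: l_extendedprice * (1 - l_discount) is written (and
# naively evaluated) twice, once per aggregate.
original = conn.execute("""
    SELECT l_returnflag,
           SUM(l_extendedprice * (1 - l_discount))               AS sum_disc_price,
           SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge
    FROM lineitem GROUP BY l_returnflag ORDER BY l_returnflag""").fetchall()

# Rewritten shape: the shared expression is projected once in a
# subquery and reused by both aggregates.
rewritten = conn.execute("""
    SELECT l_returnflag,
           SUM(disc_price)               AS sum_disc_price,
           SUM(disc_price * (1 + l_tax)) AS sum_charge
    FROM (SELECT l_returnflag, l_tax,
                 l_extendedprice * (1 - l_discount) AS disc_price
          FROM lineitem)
    GROUP BY l_returnflag ORDER BY l_returnflag""").fetchall()
```

Both forms produce identical results; the rewritten form just evaluates the multiplication once per row.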
Describe the potential solution
No response
Additional context
No response