Surbhi-Vijay opened this issue 3 months ago
cc @zhztheplayer @zhli1142015
Thank you for reporting.
IIUC, the operator itself doesn't seem to be what slows down your query — the slowest tasks took only 438ms and 166ms.
Would you like to share more comparisons of the DAGs? In particular, could you check the shuffle write time as well?
So Q64 elapsed time went from 22841 to 50712. The append operator's input is 4.15 rows per batch and its output is 3070 rows per batch, which benefits performance in our tests.
The operator's overhead is a sequential memcpy, which definitely can't directly cause the 2x increase in elapsed time. There must be some side effect causing this.
@zhli1142015 Do you still have the tool I shared? Can you get the chart of each stage in traceview? Let's see which stage caused the issue and reproduce that stage natively.
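For readers unfamiliar with the operator being discussed: the idea of coalescing batches before shuffle is to append many tiny batches (here ~4 rows each) into larger ones so downstream operators see fewer, bigger batches. A minimal Python sketch of the concept — not Velox's actual implementation:

```python
def coalesce_batches(batches, min_rows=4096):
    """Merge small input batches into larger output batches.

    Conceptual sketch of what a coalesce/append-batches operator does:
    rows are appended (a sequential copy, analogous to the memcpy
    mentioned above) until a target row count is reached.
    """
    out, buf = [], []
    for batch in batches:
        buf.extend(batch)          # append rows from the small batch
        if len(buf) >= min_rows:   # emit once the target size is reached
            out.append(buf)
            buf = []
    if buf:                        # flush any remaining rows
        out.append(buf)
    return out
```

With 2000 input batches of 4 rows each and `min_rows=3000`, this emits two 3000-row batches plus a 2000-row remainder — the same many-small-to-few-large reshaping described in the comment above.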
Thanks @FelixYBW for explaining. I am trying to come up with a minimal query that showcases the impact. If I cannot, I will post the detailed analysis of Q64 and Q24.
We investigated the regressing queries. The regression is not directly caused by "VeloxAppendBatches" itself, but rather by plan changes that occur when the data size is reduced by this feature.
Analysis for query24b: when the coalesceBatches feature is enabled, the reported data size is reduced, which prompted AQE to change the join type from SHJ to BHJ.
Left Side => With CoalesceBatch enabled Right Side => With CoalesceBatch disabled
Here, ColumnarBroadcastExchange takes an additional 8s after the ColumnarExchange.
In another place, we observe that the build side of one of the shuffle hash joins changed for the same reason.
@zhztheplayer @marin-ma Why did the patch cause the plan to change? This looks like a bug. Do we use batch counts instead of row counts when creating the plan?
Hi! Could you share the profiling tool in this image? Thanks!
@Surbhi-Vijay
Was there a large difference on shuffle write size?
@Surbhi-Vijay Could you share the metrics details of ColumnarExchange
with "VeloxAppendBatches" enabled/disabled?
> @Surbhi-Vijay Could you share the metrics details of ColumnarExchange with "VeloxAppendBatches" enabled/disabled?
Below is the "ColumnarExchange" for the join (store_sales join customer) that was converted from SHJ to BHJ in q24b when veloxAppendBatches is enabled.
Left Side => With CoalesceBatch enabled Right Side => With CoalesceBatch disabled
> @Surbhi-Vijay Was there a large difference on shuffle write size?
It is reporting the same numbers; there does not seem to be any difference in shuffle write size.
@Surbhi-Vijay The "data size" metric changed from 58.7M to 5.2M. This could cause a plan change, since the join relies on this value to decide whether to use BHJ. However, a 10x reduction in "data size" seems unreasonable to me.
Could you also share the Spark configurations? I've compared TPCDS q24b with VeloxResizeBatches enabled and disabled, but I don't see such a stage producing a different data size.
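For context on why the "data size" metric matters: Spark broadcasts a join side when its estimated size is at or below `spark.sql.autoBroadcastJoinThreshold` (10 MB by default), and AQE re-evaluates this with runtime statistics. A simplified sketch of that decision — not Spark's actual code:

```python
# Default value of spark.sql.autoBroadcastJoinThreshold (10 MB).
AUTO_BROADCAST_THRESHOLD = 10 * 1024 * 1024

def choose_join_strategy(data_size_bytes, threshold=AUTO_BROADCAST_THRESHOLD):
    """Simplified sketch: a join side whose estimated data size falls
    at or below the broadcast threshold is broadcast (BHJ); otherwise
    a shuffled hash join (SHJ) is kept."""
    return "BHJ" if data_size_bytes <= threshold else "SHJ"

# The sizes reported in this thread straddle the default threshold:
# 58.7 MB keeps SHJ, while 5.2 MB flips the plan to BHJ.
```

This is why an (incorrectly) 10x smaller data size metric is enough to flip the plan, even though no actual shuffle data changed.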
@Surbhi-Vijay Any update? It doesn't make sense that the merge-batches operator would impact the shuffle data size.
@FelixYBW @marin-ma I see this behavior of reduced data size wherever VeloxAppendBatches is applied.
All other metrics (apart from data size) are almost the same, except #batches and #rows/batch, which are expected to change.
The shuffle stage also shows almost exactly the same metrics. At this point, I suspect there is a bug in how the data size is populated when this feature is enabled.
Do we have a solution for this? Does https://github.com/apache/incubator-gluten/pull/6670 solve this issue?
> Do we have a solution for this? Does #6670 solve this issue?
I think so. Would you like to help try it out? If it works, we can close this issue.
cc @Surbhi-Vijay
Backend
VL (Velox)
Bug description
We have observed a performance drop in TPCDS runs after patch #6009.
Top regressing queries:

| QueryId | New runtime | Previous runtime |
| --- | --- | --- |
| query64 | 50712 | 22841 |
| query24a | 44883 | 27452 |
| query24b | 45003 | 28742 |
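The slowdown factors implied by these runtimes can be computed directly:

```python
# Runtimes from the table above: (previous, new).
runtimes = {
    "query64":  (22841, 50712),
    "query24a": (27452, 44883),
    "query24b": (28742, 45003),
}

for query, (before, after) in runtimes.items():
    # query64 comes out at roughly 2.22x, the "2x" regression
    # discussed in the thread; the q24 variants are ~1.6x.
    print(f"{query}: {after / before:.2f}x slower")
```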
When we disable the feature using `spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle=false`, we see the same runtimes as in previous runs. We are using an Azure cluster and reading data from a remote storage account. The regression shows up in `VeloxAppendBatches`, which in some instances takes a lot of time. Below are the plan snippets from query64.
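For reference, a sketch of how the feature can be toggled off for an A/B comparison — the config key is the one quoted above; the application jar and other options are placeholders:

```shell
# Disable batch coalescing before shuffle to compare against the
# regressed runs. Other options elided; adjust for your deployment.
spark-submit \
  --conf spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle=false \
  your-tpcds-app.jar
```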
Spark version
Spark-3.4.x
Spark configurations
No response
System information
No response
Relevant logs
No response