apache / incubator-gluten

Gluten is a middle layer responsible for offloading JVM-based SQL engines' execution to native engines.
https://gluten.apache.org/
Apache License 2.0

[VL] TPCDS Performance drop after new operator "VeloxAppendBatches" #6694

Open Surbhi-Vijay opened 1 month ago

Surbhi-Vijay commented 1 month ago

Backend

VL (Velox)

Bug description

We have observed a performance drop in TPCDS runs after the patch #6009.

Top regressing Queries

| QueryId  | New runtime | Previous runtime |
|----------|-------------|------------------|
| query64  | 50712       | 22841            |
| query24a | 44883       | 27452            |
| query24b | 45003       | 28742            |

When we disable the feature by setting `spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle` to `"false"`, we see the same runtimes as in previous runs.
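For reference, a minimal PySpark sketch (an untested illustration, not taken from this thread; the app name is hypothetical) of applying the workaround above at session creation:

```python
# Hypothetical session setup showing where the workaround config goes.
# Disabling coalesceBatchesBeforeShuffle removes the VeloxAppendBatches
# operator from the plan, restoring the previous runtimes.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tpcds-no-coalesce")  # hypothetical app name
    .config(
        "spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle",
        "false",
    )
    .getOrCreate()
)
```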

We are using an Azure cluster and reading data from a remote storage account. The regression is seen in VeloxAppendBatches, which in some instances takes a lot of time.

Below are the plan snippets from query64

[screenshot]

[screenshot]

Spark version

Spark-3.4.x

Spark configurations

No response

System information

No response

Relevant logs

No response

Surbhi-Vijay commented 1 month ago

cc @zhztheplayer @zhli1142015

zhztheplayer commented 1 month ago

Thank you for reporting.

IIUC, the operator itself doesn't seem to be the one slowing down your query? For example, the slowest tasks took just 438 ms and 166 ms.

Could you share more comparisons between the DAGs? In particular, could you check the shuffle write time as well?

FelixYBW commented 1 month ago

So the Q64 elapsed time went from 22841 to 50712. The append operator's input averages 4.15 rows per batch and its output 3070 rows per batch, which benefited performance in our tests.

The operator's overhead is a sequential memcpy, which by itself definitely can't cause the 2x elapsed-time increase. There must be some side effect causing this.
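As a rough illustration (plain Python, not the actual Velox implementation; the function name and row-count threshold are made up), an append-batches operator simply buffers tiny input batches and emits larger ones, trading a sequential copy for fewer downstream per-batch overheads:

```python
# Toy model of an append-batches operator: accumulate small batches
# and flush one large batch once enough rows are buffered.
def append_batches(batches, min_output_rows=4096):
    buffer, buffered_rows = [], 0
    for batch in batches:
        buffer.append(batch)
        buffered_rows += len(batch)
        if buffered_rows >= min_output_rows:
            # the "sequential memcpy": concatenate buffered batches
            yield [row for b in buffer for row in b]
            buffer, buffered_rows = [], 0
    if buffer:  # flush the remainder
        yield [row for b in buffer for row in b]

# 1000 tiny batches of 4 rows each (cf. the 4.15 rows/batch input above)
tiny_batches = [[i] * 4 for i in range(1000)]
out = list(append_batches(tiny_batches, min_output_rows=1024))
# -> a handful of large batches instead of 1000 tiny ones
```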

@zhli1142015 Do you still have the tool I shared? Can you get the chart of each stage in traceview? Let's see which stage caused the issue and reproduce that stage natively.

[screenshot]

Surbhi-Vijay commented 1 month ago

Thanks @FelixYBW for explaining. I am trying to come up with a minimal query to showcase the impact. If I can't, I will post a detailed analysis of Q64 and Q24.

Surbhi-Vijay commented 1 month ago

We investigated the regressing queries. The regression is not directly caused by "VeloxAppendBatches" but rather by plan changes that occur when this feature reduces the reported data size.

Analysis for query24b: when the coalesceBatch feature is enabled, the data size is reduced, which prompts AQE to change the join type from SHJ (shuffled hash join) to BHJ (broadcast hash join).

Left side => with CoalesceBatch enabled
Right side => with CoalesceBatch disabled

[screenshot]

Here, ColumnarBroadcastExchange takes an additional 8 s after the ColumnarExchange. [screenshot]

At another place, we observe that the build side of one of the shuffle hash joins changed for the same reason.

FelixYBW commented 1 month ago

@zhztheplayer @marin-ma Why did the patch cause the plan to change? Looks like a bug. Do we use batch counts instead of row counts when creating the plan?

DamonZhao-sfu commented 3 weeks ago

> So the Q64 elapsed time went from 22841 to 50712. The append operator's input averages 4.15 rows per batch and its output 3070 rows per batch, which benefited performance in our tests.
>
> The operator's overhead is a sequential memcpy, which by itself definitely can't cause the 2x elapsed-time increase. There must be some side effect causing this.
>
> @zhli1142015 Do you still have the tool I shared? Can you get the chart of each stage in traceview? Let's see which stage caused the issue and reproduce that stage natively.
>
> [screenshot]

Hi! Could you share the profiling tool in this image? Thanks!

zhztheplayer commented 3 weeks ago

@Surbhi-Vijay

Was there a large difference in shuffle write size?

marin-ma commented 3 weeks ago

@Surbhi-Vijay Could you share the metrics details of ColumnarExchange with "VeloxAppendBatches" enabled/disabled?

Surbhi-Vijay commented 3 weeks ago

> @Surbhi-Vijay Could you share the metrics details of ColumnarExchange with "VeloxAppendBatches" enabled/disabled?

The "ColumnarExchange" below is for the join (store_sales join customer) that converted from SHJ to BHJ in q24b when VeloxAppendBatches is enabled.

Left side => with CoalesceBatch enabled
Right side => with CoalesceBatch disabled
[screenshot]

Surbhi-Vijay commented 3 weeks ago

> @Surbhi-Vijay
>
> Was there a large difference in shuffle write size?

It is reporting the same numbers; there does not seem to be any difference in shuffle write size. [screenshot]

marin-ma commented 3 weeks ago

@Surbhi-Vijay The "data size" metric changed from 58.7M to 5.2M. This could cause a plan change since the Join operation relies on this value to decide whether to use BHJ. However, a 10x reduction in "data size" seems unreasonable to me.
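To make the suspected mechanism concrete, here is a minimal sketch (plain Python, not Spark source; the function name is hypothetical) of the AQE decision being described. The build side's materialized "data size" is compared against a broadcast threshold, which mirrors `spark.sql.autoBroadcastJoinThreshold` (10 MB by default), so a metric drop from 58.7M to 5.2M is enough to flip SHJ to BHJ:

```python
# Simplified model of AQE's join-strategy choice: if the build side's
# reported data size fits under the broadcast threshold, the plan is
# rewritten from a shuffled hash join to a broadcast hash join.
AUTO_BROADCAST_JOIN_THRESHOLD = 10 * 1024 * 1024  # Spark's default: 10 MB

def choose_join_strategy(build_side_bytes: int) -> str:
    if build_side_bytes <= AUTO_BROADCAST_JOIN_THRESHOLD:
        return "BroadcastHashJoin"
    return "ShuffledHashJoin"

# The figures reported in this thread:
print(choose_join_strategy(int(58.7 * 1024 * 1024)))  # coalesce disabled
print(choose_join_strategy(int(5.2 * 1024 * 1024)))   # coalesce enabled
```

This is why an incorrectly populated "data size" metric, rather than the memcpy itself, can change the plan and the runtime.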

Could you also share the Spark configurations? I've compared TPCDS q24b with VeloxResizeBatches enabled and disabled, but I don't see such a stage producing a different data size.

FelixYBW commented 2 weeks ago

@Surbhi-Vijay Any update? It doesn't make sense that the merge-batch operator impacts the shuffle data size.

Surbhi-Vijay commented 2 weeks ago

@FelixYBW @marin-ma I see this reduced-data-size behavior wherever VeloxAppendBatches is applied.

All other metrics (apart from data size) are almost the same, except #batches and #rows/batch, which are expected to change.

The shuffle stage also shows almost exactly the same metrics. At this point, I suspect there is a bug in populating the data size when this feature is enabled.