apache / hudi

Upserts, Deletes And Incremental Processing on Big Data.
https://hudi.apache.org/
Apache License 2.0

[SUPPORT] Performance degradation when migrating from Hudi 0.7 to Hudi 0.14 #11274

Closed bibhu107 closed 3 weeks ago

bibhu107 commented 1 month ago

Hi Team,

I am upgrading my Spark EMR jobs from [Spark 2.4.8, EMR 5.36.1, Hudi 0.7] to [Spark 3.3.1, EMR 6.10.1, Hudi 0.14]. The upgrade is causing roughly a 230% increase in runtime: jobs that previously finished in 18 minutes now take over an hour to complete. I'm sharing screenshots below for reference. There have been no code changes apart from upgrading the dependencies; I am writing to the Hudi table in version 0.14 the same way as I did in version 0.7. For this upgrade, I have created a new copy-on-write table and am using the Simple Indexing approach.

Could you please help me debug this issue or suggest any additional configurations that might be required to improve performance?

[Screenshot: spark3 (Spark 3.3.1 / Hudi 0.14 run)]

[Screenshot: spark-2 (Spark 2.4.8 / Hudi 0.7 run)]
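
For reference, here is a minimal sketch of the kind of write path described above (Spark 3.3 / Hudi 0.14, copy-on-write table with the SIMPLE index). This is not the reporter's actual job; the table name, column names, and S3 paths are placeholders.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hudi-upsert-sketch").getOrCreate()

// Placeholder input; the real job's source and schema are not known here.
val inputDf = spark.read.parquet("s3://bucket/input/")

// Base Hudi options for a copy-on-write table with the SIMPLE index.
// Table, column, and path names are placeholders.
val hudiOptions = Map(
  "hoodie.table.name"                           -> "my_table",
  "hoodie.datasource.write.table.type"          -> "COPY_ON_WRITE",
  "hoodie.datasource.write.operation"           -> "upsert",
  "hoodie.datasource.write.recordkey.field"     -> "record_key",
  "hoodie.datasource.write.precombine.field"    -> "ts",
  "hoodie.datasource.write.partitionpath.field" -> "partition_col",
  "hoodie.index.type"                           -> "SIMPLE"
)

inputDf.write
  .format("hudi")
  .options(hudiOptions)
  .mode("append")
  .save("s3://bucket/path/to/table")
```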

KnightChess commented 1 month ago

hoodie.simple.index.parallelism can be modified to adjust the parallelism of stage 121, but it may cause the parallelism of stage 120 to decrease. You can give it a try.
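
If you want to try it, the option can be passed along with the other write options. This is only a sketch reusing the placeholder `inputDf` and `hudiOptions` from the earlier example; 200 is an arbitrary value to tune, not a recommendation.

```scala
// Sketch: same write as above, with the SIMPLE index lookup parallelism
// pinned explicitly (200 is only an example value).
inputDf.write
  .format("hudi")
  .options(hudiOptions)
  .option("hoodie.simple.index.parallelism", "200")
  .mode("append")
  .save("s3://bucket/path/to/table")
```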

bibhu107 commented 1 month ago

Hi @KnightChess, thanks for commenting. But my main doubt is why the shuffle write is nearly doubled in Hudi 0.14, and that is what leads to the issues in stage 121.

KnightChess commented 1 month ago

@bibhu107 if the downstream parallelism is too high, the shuffle data will grow. And the major reason is that there are too many tasks because the parallelism is high, so Spark needs to schedule too many tasks.

KnightChess commented 1 month ago

@bibhu107 As for why the shuffle data grows, I haven't looked at the code in detail; the following is just my guess. You have too many reducers, so the shuffle data may need more metadata. On the other hand, going from 0.7 to 0.14, the attributes of the shuffled Java objects may have changed, which can also cause a difference. But I think parallelism is the major problem.

KnightChess commented 1 month ago

@bibhu107 does it work for you? I misread the stack trace, so that parameter does not take effect in the stage in question; the parameter that needs to be set is a different one. If you are doing inserts, try setting hoodie.insert.shuffle.parallelism; if upserts, set hoodie.upsert.shuffle.parallelism.
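
A sketch of that suggestion, again reusing the placeholder `inputDf` and `hudiOptions` from the earlier example; 500 is only an example value to tune for the data volume.

```scala
// Sketch: pin the write shuffle parallelism instead of the index parallelism.
inputDf.write
  .format("hudi")
  .options(hudiOptions)
  .option("hoodie.upsert.shuffle.parallelism", "500")   // for upsert jobs
  .option("hoodie.insert.shuffle.parallelism", "500")   // for insert jobs
  .mode("append")
  .save("s3://bucket/path/to/table")
```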

soumilshah1995 commented 1 month ago

Slack Thread https://apache-hudi.slack.com/archives/C4D716NPQ/p1718122404452279

bibhu107 commented 1 month ago

Hello @KnightChess, thank you for your suggestions.

Initially, the Adaptive Query Execution (AQE) feature was effectively disabled for the jobs because we were explicitly setting spark.sql.shuffle.partitions. Later, we enabled it using the following configuration:

spark.sql.adaptive.coalescePartitions.enabled=true
spark.sql.adaptive.skewJoin.enabled=true

Additionally, we removed the spark.sql.shuffle.partitions configuration. This change resulted in better job performance. However, we have not yet conducted any load/pressure testing.
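
A sketch of how those session-level settings could be applied when building the SparkSession (the app name is a placeholder; the same flags can equally be passed via spark-submit --conf):

```scala
import org.apache.spark.sql.SparkSession

// Enable AQE partition coalescing and skew-join handling, and stop pinning
// spark.sql.shuffle.partitions so AQE can choose partition counts at runtime.
val spark = SparkSession.builder()
  .appName("hudi-upsert-aqe-sketch")
  .config("spark.sql.adaptive.enabled", "true")
  .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
  .config("spark.sql.adaptive.skewJoin.enabled", "true")
  // intentionally no spark.sql.shuffle.partitions here
  .getOrCreate()
```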

We will share the results once we perform load testing. For now, we have moved back to Hudi 0.8 and are using Spark 3.3.1.

Thank you for raising the PR.

bibhu107 commented 1 month ago

I have one query: why do we need this PR? I expected Hudi to automatically use the deduced parallelism starting from Hudi 0.13.

As mentioned in the documentation for hoodie.upsert.shuffle.parallelism, it states:

From version 0.13.0 onwards, Hudi by default automatically uses the parallelism deduced by Spark based on the source data.

KnightChess commented 1 month ago

@bibhu107 hi, this PR aims to improve the deduced parallelism and make it more user friendly. On the Hudi side, AQE cannot take effect because Hudi uses RDDs directly.
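
A rough illustration of that distinction (generic Spark behaviour, not Hudi code): AQE only rewrites shuffles in the SQL/DataFrame plan, while an RDD-level repartition keeps exactly the partition count it is given, which is why Hudi's own parallelism settings matter on that path.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("aqe-vs-rdd-sketch")
  .config("spark.sql.adaptive.enabled", "true")
  .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
  .getOrCreate()

val df = spark.range(0, 1000)

// DataFrame path: this aggregation's shuffle goes through the SQL planner,
// so AQE can coalesce its partitions at runtime.
df.groupBy((df("id") % 10).as("bucket")).count().show()

// RDD path: the partition count is exactly what we ask for; AQE never touches it.
val parts = df.rdd.repartition(500).getNumPartitions
println(s"RDD partitions after repartition(500): $parts")  // prints 500
```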

bibhu107 commented 3 weeks ago

I am closing this issue. Thanks for the support @KnightChess.