RuyRoaV opened this issue 5 months ago
@RuyRoaV Can you provide the event logs or the Spark UI?
On configuration, I recommend not archiving beyond the savepoint. You can also try the SIMPLE index once; for use cases where most of the file groups are updated, the SIMPLE index performs much better.
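For illustration, those two settings are passed as write options, along the lines of this sketch (table name, path, and the DataFrame `df` are placeholders, not your actual job config; verify the keys against the 0.14.1 config docs):

```python
# Sketch only: placeholder table name and path, not the actual job configuration.
hudi_options = {
    "hoodie.table.name": "my_table",                        # placeholder
    "hoodie.datasource.write.operation": "upsert",
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
    "hoodie.index.type": "SIMPLE",                          # try SIMPLE instead of BLOOM
    "hoodie.archive.beyond.savepoint": "false",             # keep archival from going past the savepoint
}

(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("s3://my-bucket/path/to/table"))                   # placeholder path
```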
Hello @ad1happy2go
I have attached some screenshots of the Spark UI. Is there any specific screen that you'd like to see?
Thanks for the input; I'll take that into account. I've also seen changing to an RLI (record level index) recommended in some other GitHub issues. Would that work for a COW table, or would the SIMPLE index still be a better approach?
Best regards,
@RuyRoaV RLI will work if you need a global index. It works for COW tables as well.
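For reference, switching a write to the record level index looks roughly like this in 0.14.x (a sketch only; double-check the keys against the 0.14.1 docs):

```python
# Sketch only: the record level index (RLI) is backed by the metadata table in 0.14.x.
rli_options = {
    "hoodie.index.type": "RECORD_INDEX",            # use the record level index
    "hoodie.metadata.enable": "true",               # RLI is stored in the metadata table
    "hoodie.metadata.record.index.enable": "true",  # build and maintain the record index
}
```

These options would be merged into the same write options dictionary as in the earlier sketch.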
Hi Aditya
I have tried out your recommendation and found the following:
Using SIMPLE INDEX
The average execution time was reduced from 20 min to around 11 min, which is great. In the Spark UI screenshot, you can see that a large percentage of the execution time is taken by a countByKey at JavaPairRDD action in the SparkCommitUpsert stage, especially during the shuffle write part.
We need to reduce the job runtime even further; is there any other recommendation regarding the configurations that we can set?
We may try deactivating archival beyond the savepoint a bit later, but I am curious: why would that help us improve performance?
Using RECORD LEVEL INDEX
I replaced the index for a table whose upsert Glue job was already running in under 5 minutes. Overall, the job runtime has remained the same, with most of the time spent in the count at HoodieSparkSqlWriter.scala:1072 action during the SparkCommitUpsert stage. This is similar to the case presented when submitting this ticket.
I'll try it with one of our long-running jobs and will let you know the outcome.
By the way, is there a way to check the index type of a table?
Thanks
Best regards
I think the table index type is set with `hoodie.index.type` when writing; I don't think it is set at the table-property level.
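For illustration, that means it is a per-write option, something like this sketch (the DataFrame `df` and the path are placeholders):

```python
# The index type is passed on each write; it is not persisted as a table property.
(df.write.format("hudi")
   .option("hoodie.index.type", "SIMPLE")                 # or BLOOM / RECORD_INDEX
   .option("hoodie.datasource.write.operation", "upsert")
   .mode("append")
   .save("s3://my-bucket/path/to/table"))                 # placeholder path
```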
BTW, can you share the number of records (and their size) in a batch?
@RuyRoaV Do you still see any performance issues? Let us know. To check the index, you can see the details of the running jobs in the Spark UI.
Hi @ad1happy2go
We are still seeing performance issues. Right now we are trying to see which combination of parameters might help, but we are a bit lost as to which parameters we need to tweak.
To give a bit more context on how our table is upserted:
The bottleneck in our Glue job is this task:
`Doing partition and writing data: d_citadel_shipment_attributes_eu_v1 count at HoodieSparkSqlWriter.scala:1050`
where some of our executors are stuck for ~15 min, whereas other executors finish their task in ~2 min.
You can find the logs of one of our executors here: Executor1M.csv
The bottleneck is task 63, which starts at 15:11:51 and finishes at 15:28:57. What I have seen in the logs is that the MergeHandle for a partitionPath sometimes takes around 1.5 minutes and sometimes about 16 minutes. This comes from this message:
```
INFO HoodieMergeHandle: MergeHandle for partitionPath cyear=2024/cmonth=9/cday=9 fileID b0c88928-5427-4f91-bd1b-5b596be35666-0, took 1018758 ms.
```
Would you be able to shed some light on why this could be happening, and on how we can optimize the data writing?
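For reference, here is a quick check we can run to see whether the slow tasks correspond to hot partitions (a rough sketch; it assumes the incoming batch is a DataFrame `df` with the cyear/cmonth/cday partition columns shown in the log message above):

```python
# Count incoming records per Hudi partition path to spot hot partitions/file groups.
from pyspark.sql import functions as F

(df.groupBy("cyear", "cmonth", "cday")
   .agg(F.count(F.lit(1)).alias("records"))
   .orderBy(F.desc("records"))
   .show(20, truncate=False))
```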
Thanks.
Best regards.
Describe the problem you faced
We have a Glue 4.0 job that performs an upsert on a Hudi-managed COW table. On some occasions, the Glue job runs in under 5 minutes, whereas on others it runs for up to 20 minutes. Moreover, we have noticed that in those slow runs the job is performing a count at HoodieSparkSqlWriter.scala:1072 action for over 17 minutes; in other job runs this only takes around 1 minute.
Regarding some specifications for the table:
- We have 3 partition fields:
- a precombine field:
- and 3 recordkey fields:
You can see more about the table description here:
We are also using a BLOOM-type index, and these are some other configurations that we are setting:
Could you please advise us on which actions we should take to bring down the execution time?
Expected behavior
We would like to understand why we are seeing this variation in the execution times, and we would like advice on the actions needed to prevent this behaviour.
Environment Description
Glue version: 4
Worker Type: G.2x
Hudi version: 0.14.1
Spark version: 3.3
Max DPU Capacity: 120
Storage (HDFS/S3/GCS..): S3
Running on Docker? (yes/no): No