noahtaite opened this issue 5 months ago
Briefly chatted in office hours: this is likely caused by clean planning loading archived commits, which are about 500 MB each in storage, since the cleaner has never run before. The active timeline only has about 70-80 commits.
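One way to gauge how much archived timeline the planner would be loading is to measure the archive folder directly. This assumes the default .hoodie/archived/ location under the table base path and an available AWS CLI:

# Rough size check of the archived timeline (default archive location assumed)
aws s3 ls s3://bucket/table.all_hudi/.hoodie/archived/ --recursive --summarize --human-readable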
Is this related? https://stackoverflow.com/questions/53462161/java-lang-outofmemoryerror-when-plenty-of-memory-left-94gb-200gb-xmx
I've tried changing my off-heap memory, memory overhead, and instance size, but the job driver continues to OOM at 12 GB of memory usage.
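The kinds of Spark memory knobs involved look roughly like this; the values below are illustrative examples, not the exact settings from these runs:

# Illustrative driver memory / overhead / off-heap settings for the cleaner job (example values only)
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.driver.memory=12g \
  --conf spark.driver.memoryOverhead=4g \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=4g \
  --class org.apache.hudi.utilities.HoodieCleaner \
  /usr/lib/hudi/hudi-utilities-bundle.jar \
  --target-base-path s3://bucket/table.all_hudi/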
Can we try setting hoodie.cleaner.commits.retained to, say, 1000 and get some data cleaned up; then try 500, and after a few runs 300, and after a few more runs 100?
Essentially, this reduces the amount of data cleaned in each batch. It looks like we are currently trying to clean most of the past history in one shot. A sketch of such a first pass is shown below.
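The sketch below reuses the cleaner invocation from the issue body with only the retention raised; it is not a verified configuration. The archival settings (hoodie.keep.min.commits / hoodie.keep.max.commits) are omitted because they generally need to stay above hoodie.cleaner.commits.retained, so they would have to be raised accordingly if passed:

# Sketch: first pass with a high retained-commit count; lower it on subsequent runs
# (e.g. 1000 -> 500 -> 300 -> 100) so each run cleans a smaller slice of history
spark-submit --master yarn --deploy-mode cluster \
  --class org.apache.hudi.utilities.HoodieCleaner \
  --jars /usr/lib/hudi/hudi-utilities-bundle.jar,/usr/lib/hudi/hudi-spark-bundle.jar \
  /usr/lib/hudi/hudi-utilities-bundle.jar \
  --target-base-path s3://bucket/table.all_hudi/ \
  --hoodie-conf hoodie.cleaner.policy=KEEP_LATEST_COMMITS \
  --hoodie-conf hoodie.cleaner.commits.retained=1000 \
  --hoodie-conf hoodie.cleaner.parallelism=640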
Describe the problem you faced
We've had a Hudi pipeline with ~150 tables running for about a year without the cleaner enabled. After enabling cleaning, all but one of the tables ran the cleaning operation successfully; that one table fails consistently with an OutOfMemoryError when serializing the cleaning plan.
Table dimensions in storage:
Async cleaner job:
spark-submit --master yarn --deploy-mode cluster \
  --class org.apache.hudi.utilities.HoodieCleaner \
  --jars /usr/lib/hudi/hudi-utilities-bundle.jar,/usr/lib/hudi/hudi-spark-bundle.jar \
  /usr/lib/hudi/hudi-utilities-bundle.jar \
  --target-base-path s3://bucket/table.all_hudi/ \
  --hoodie-conf hoodie.cleaner.policy=KEEP_LATEST_COMMITS \
  --hoodie-conf hoodie.cleaner.commits.retained=30 \
  --hoodie-conf hoodie.cleaner.parallelism=640 \
  --hoodie-conf hoodie.keep.min.commits=40 \
  --hoodie-conf hoodie.keep.max.commits=50 \
  --spark-master yarn
Cluster configs:
Spark History Server:
After this completes, the job almost immediately fails, with the stacktrace below being logged to the driver.
Ganglia shows my nodes are under-utilized, with memory maxing out at around 1/4 of the total allocated memory:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The clean completes, or at least the driver uses all of its allocated memory before OOMing.
Environment Description
Hudi version : 0.13.1-amzn-0
Spark version : 3.4.0
Hive version : 3.1.3
Hadoop version : 3.3.3
Storage (HDFS/S3/GCS..) : S3
Running on Docker? (yes/no) : no
Additional context
Larger tables with more partitions were able to generate the cleaning plan fine, which we thought was strange.
We also tried reducing the size of the plan by retaining more commits (60 retained) but still received the same error.
Note that I also tried running the cleaner synchronously with my ingestion job, but received the same driver OOM errors.
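For reference, synchronous cleaning on the writer is normally driven by configs along these lines; the exact flags used on the ingestion job are not shown in this issue, so this is only a sketch:

# Typical Hudi writer configs for inline (synchronous) cleaning -- sketch only,
# not the exact settings used on the ingestion job
hoodie.clean.automatic=true
hoodie.clean.async=false
hoodie.cleaner.policy=KEEP_LATEST_COMMITS
hoodie.cleaner.commits.retained=30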
Stacktrace
The error seems to be happening at org.apache.hudi.common.table.timeline.TimelineMetadataUtils.serializeCleanerPlan(TimelineMetadataUtils.java:114) ~[app.jar:0.13.1-amzn-0]
Looking for assistance in properly configuring the memory settings for this. Thanks so much!