TarunMootala opened this issue 6 months ago
The same issue was reported here in the past and is still open for RCA: https://github.com/apache/hudi/issues/7800
@TarunMootala Is it possible for you to upgrade the Hudi version to 0.14.1 and check if you still see this issue? The other issue was related to loading of the archived timeline during sync, which was fixed in later releases: https://github.com/apache/hudi/pull/7561
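(For Glue 4.0 specifically, one possible way to try a newer Hudi bundle, not verified in this thread, is to supply the jar yourself through the job's --extra-jars parameter instead of relying on the 0.12.1 bundle that --datalake-formats ships. A minimal sketch of the relevant job arguments, with a placeholder bucket and jar path:)

```python
# Hedged sketch of AWS Glue job arguments for trying Hudi 0.14.1.
# The S3 path is a placeholder; how the bundled 0.12.1 jar is kept off the
# classpath (e.g. not passing --datalake-formats) should be verified separately.
default_arguments = {
    "--extra-jars": "s3://my-bucket/jars/hudi-spark3.3-bundle_2.12-0.14.1.jar",
    "--conf": "spark.serializer=org.apache.spark.serializer.KryoSerializer",
}
```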
@ad1happy2go
Thanks for your input.
I don't think it was related to loading of the archived timeline. When this error occurred, the first thing I tried was cleaning the archived timeline (.hoodie/archived/), and it didn't help. Only deleting (archiving) a few of the oldest Hudi metadata files from the active timeline (the .hoodie folder) and reducing hoodie.keep.max.commits resolved the issue.
@TarunMootala Can you check the size of the timeline files? Can you also post the driver logs?
@ad1happy2go
The .hoodie/ folder is 350 MB and has 3,435 files (active and archived timelines combined).
The .hoodie/archived/ folder is 327 MB and has 695 files (archived timeline only).
Attached driver logs log-events-viewer-result.csv
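(For reference, a minimal boto3 sketch of how such timeline sizes can be gathered, assuming the table lives on S3; the bucket and prefixes below are placeholders, not the actual table location:)

```python
import boto3

def timeline_stats(bucket: str, prefix: str):
    """Return (file_count, total_bytes) for all objects under the prefix."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    count, total = 0, 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            count += 1
            total += obj["Size"]
    return count, total

# Placeholder bucket/prefixes; point these at the table's .hoodie folders.
for prefix in ("warehouse/my_table/.hoodie/", "warehouse/my_table/.hoodie/archived/"):
    files, size = timeline_stats("my-bucket", prefix)
    print(f"{prefix}: {files} files, {size / 1024 / 1024:.0f} MB")
```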
@TarunMootala The size itself doesn't look too big. I couldn't locate the error in the log; can you check once more?
@ad1happy2go,
When AWS Glue encounters an OOM error it kills the JVM immediately, which could be why the error isn't available in the driver logs. However, the error is present in the output logs and is the same as the one given in the overview.
@TarunMootala Can you share the timeline? Do you know how many file groups there are in the clean instant?
@ad1happy2go
"Can you share the timeline?"
Can you elaborate on this?
"Do you know how many file groups there are in the clean instant?"
Are you referring to the number of files in that particular cleaner run?
I was having exactly the same issue. For me it was related to running clean on a partitioned dataset, and the clean run wasn't incremental (it loaded all partitions). This can happen if you never enabled clean, or disabled it long enough for the last clean commit to be archived. For example, if this log line shows a very high number:
LOG.info("Total partitions to clean : " + partitionsToClean.size() + ", with policy " + config.getCleanerPolicy());
then you likely see the same issue as me. I'm not sure whether it can happen with any sync.
Related issue https://github.com/apache/hudi/issues/8199
Maybe a workaround could be added so that the archived timeline is loaded when a new flag is enabled (false by default); that way it would still be possible, although expensive. Or clean could be revisited to split the partitions into reasonable batches, but I'm sure that's impossible, since cleaning 2M partitions will exhaust driver memory... Currently it's simply not possible to run clean if it was disabled for some time and the last clean is archived.
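(For anyone checking their own setup, a minimal PySpark sketch of the cleaner-related write options discussed above; the table name, record key, partition field, path, and retained count are placeholders, not the reporter's actual configuration:)

```python
# Hedged sketch: keep automatic, incremental cleaning enabled so the planner
# can rely on the last clean instant instead of listing every partition.
hudi_options = {
    "hoodie.table.name": "my_cow_table",                      # placeholder
    "hoodie.datasource.write.recordkey.field": "record_id",   # placeholder
    "hoodie.datasource.write.partitionpath.field": "subject_area",
    "hoodie.datasource.write.operation": "insert",
    "hoodie.clean.automatic": "true",
    "hoodie.cleaner.incremental.mode": "true",
    "hoodie.cleaner.policy": "KEEP_LATEST_COMMITS",
    "hoodie.cleaner.commits.retained": "10",                  # illustrative value
}

def write_batch(df, batch_id):
    # Intended for use as stream_df.writeStream.foreachBatch(write_batch)
    #   .trigger(processingTime="120 seconds").start()
    (df.write.format("hudi")
        .options(**hudi_options)
        .mode("append")
        .save("s3://my-bucket/warehouse/my_cow_table"))       # placeholder path
```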
Describe the problem you faced
We have a Spark streaming job that reads data from an input stream and appends it to a COW table partitioned on subject area. This streaming job has a batch interval of 120 seconds.
Intermittently, the job fails with an error (see the Stacktrace section below).
To Reproduce
No specific steps.
Expected behavior
The job should commit the data successfully and continue with the next micro-batch.
Environment Description
Hudi version : 0.12.1 (Glue 4.0)
Spark version : Spark 3.3.0
Hive version : N/A
Hadoop version : N/A
Storage (HDFS/S3/GCS..) : S3
Running on Docker? (yes/no) : no
Additional context
We are not sure of the exact fix or root cause. However, the workaround (not ideal) is to manually delete (archive) a few of the oldest Hudi metadata files from the active timeline (the .hoodie folder) and reduce hoodie.keep.max.commits. This only works when we reduce max commits, and whenever max commits is reduced the job runs perfectly for a few months before failing again. Our requirement is to store 1500 commits to enable incremental query capability on the last 2 days of changes. Initially we started with a max commits of 1500 and gradually came down to 400.
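(For context on how these settings interact, a hedged sketch with illustrative values only, not a recommendation or a fix for the OOM described here: Hudi requires hoodie.cleaner.commits.retained to be smaller than hoodie.keep.min.commits, which must be smaller than hoodie.keep.max.commits, so a 1500-commit incremental lookback forces an active timeline of at least that size.)

```python
# Hedged, illustrative values only. Hudi validates
#   hoodie.cleaner.commits.retained < hoodie.keep.min.commits < hoodie.keep.max.commits
# so keeping ~1500 commits queryable incrementally implies an active timeline of
# at least ~1500 instants, which is what makes the .hoodie folder grow.
retention_options = {
    "hoodie.cleaner.commits.retained": "1500",  # commits whose files stay readable
    "hoodie.keep.min.commits": "1510",          # archival keeps at least this many
    "hoodie.keep.max.commits": "1520",          # archival triggers past this many
}
```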
Hudi Config
Stacktrace
We debugged multiple failure logs; the job always fails at the stage
collect at HoodieSparkEngineContext.java:118 (CleanPlanActionExecutor)