numberlabs-developers / hudi

Upserts, Deletes And Incremental Processing on Big Data.
https://hudi.apache.org/
Apache License 2.0

[SUPPORT] Possible memory leak in org.apache.hadoop.hive.conf.HiveConf when writing from Flink to Hudi #28

Open numberlabs-developers opened 10 months ago

numberlabs-developers commented 10 months ago

Tips before filing an issue

Have you gone through our FAQs?

Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org.

If you have triaged this as a bug, then file an issue directly.

Describe the problem you faced

I am using Flink SQL to ingest data from AWS Kinesis into Hudi tables on S3, with the AWS Glue Data Catalog as the Hive metastore and hive_sync.enable set to true in the SQL. Ingestion works well, but after running for a few hours or days the JobManager fails with an OutOfMemoryError. Inspecting the heap dump, I found that org.apache.hadoop.hive.conf.HiveConf instances occupy 80.77% of the heap. This appears to be related to HiveSyncContext.
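For context, here is a minimal sketch of the kind of job described above, written with the Flink Table API. All names (stream, region, schema, S3 path, database and table) are placeholders rather than the reporter's actual values, and the Hudi sync options shown are only the ones relevant to this report:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class KinesisToHudiJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // Hudi commits are driven by Flink checkpoints
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Source: a Kinesis stream (placeholder name/region/schema)
        tEnv.executeSql(
            "CREATE TABLE kinesis_source (" +
            "  id STRING," +
            "  ts TIMESTAMP(3)," +
            "  payload STRING" +
            ") WITH (" +
            "  'connector' = 'kinesis'," +
            "  'stream' = 'my-stream'," +
            "  'aws.region' = 'us-east-1'," +
            "  'format' = 'json'" +
            ")");

        // Sink: a Hudi table on S3 with Hive sync enabled -- the configuration
        // under suspicion, since each sync cycle goes through HiveSyncContext/HiveConf
        tEnv.executeSql(
            "CREATE TABLE hudi_sink (" +
            "  id STRING PRIMARY KEY NOT ENFORCED," +
            "  ts TIMESTAMP(3)," +
            "  payload STRING" +
            ") WITH (" +
            "  'connector' = 'hudi'," +
            "  'path' = 's3://my-bucket/hudi/my_table'," +
            "  'table.type' = 'MERGE_ON_READ'," +
            "  'hive_sync.enable' = 'true'," +
            "  'hive_sync.mode' = 'hms'," +
            "  'hive_sync.db' = 'my_db'," +
            "  'hive_sync.table' = 'my_table'" +
            ")");

        tEnv.executeSql("INSERT INTO hudi_sink SELECT id, ts, payload FROM kinesis_source");
    }
}
```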

The suspect leak from Eclipse Memory Analyzer:

12 instances of "org.apache.hadoop.hive.conf.HiveConf", loaded by "sun.misc.Launcher$AppClassLoader @ 0xe400bdf8" occupy 338,544,384 (80.77%) bytes.

Biggest instances:

• org.apache.hadoop.hive.conf.HiveConf @ 0xe71197b0 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe72d9e30 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe77c62c0 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe787f640 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe798fd00 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe7a9b0f0 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe812a8c8 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe82d0af0 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe84c10c8 - 33,702,712 (8.04%) bytes.
• org.apache.hadoop.hive.conf.HiveConf @ 0xe8736300 - 33,702,712 (8.04%) bytes.

Keywords sun.misc.Launcher$AppClassLoader @ 0xe400bdf8 org.apache.hadoop.hive.conf.HiveConf
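What this MAT signature usually means: each HiveConf copies the full Hadoop/Hive property set on construction, so roughly 33 MB per instance is plausible, and any long-lived reference that accumulates one instance per commit or sync cycle will eventually exhaust the JobManager heap. The sketch below is a hypothetical illustration of that pattern only, not actual Hudi code; where the instances are really created and retained has to be confirmed in the 0.13.0 source:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;

// Hypothetical illustration of the leak pattern suggested by the MAT report --
// NOT actual Hudi code.
public class LeakyHiveSyncExample {
    // Any long-lived structure (a list, a map of contexts, an un-closed client)
    // that keeps each per-cycle HiveConf reachable prevents garbage collection.
    private final List<HiveConf> retained = new ArrayList<>();

    public void onCommit(Configuration hadoopConf) {
        // A fresh HiveConf copies the entire property set (tens of MB in this report)
        HiveConf hiveConf = new HiveConf(hadoopConf, HiveConf.class);
        retained.add(hiveConf); // the leak: the instance is never released
        // ... run hive sync with hiveConf ...
    }
}
```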

To Reproduce

Steps to reproduce the behavior:

  1. Set up AWS EMR 6.10.0 with Flink 1.16.0, Hive 3.1, and Hudi 0.13.0.

  2. Set up an AWS Kinesis stream and ingest data into it.

  3. Run a Flink SQL job that ingests from Kinesis into Hudi on S3.

  4. Let it run for a few hours or days; an OOM may occur.

Expected behavior

No OOM issue.

Environment Description

Hudi version : 0.13.0

Spark version : 3.3.1

Hive version : 3.1

Hadoop version : 3.3.3

Storage (HDFS/S3/GCS..) : S3

Running on Docker? (yes/no) : no


Stacktrace


2023-11-09 06:59:55,475 ERROR org.apache.hudi.sink.StreamWriteOperatorCoordinator [] - Executor executes action [commits the instant 20231109065505712] error
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.stream.StreamSupport.stream(StreamSupport.java:69) ~[?:1.8.0_392]
	at java.util.Collection.stream(Collection.java:581) ~[?:1.8.0_392]
	at org.apache.hudi.common.table.timeline.TimelineLayout$TimelineLayoutV1.lambda$filterHoodieInstants$2(TimelineLayout.java:68) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.timeline.TimelineLayout$TimelineLayoutV1$$Lambda$1187/1033743503.apply(Unknown Source) ~[?:?]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_392]
	at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1652) ~[?:1.8.0_392]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[?:1.8.0_392]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[?:1.8.0_392]
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_392]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_392]
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[?:1.8.0_392]
	at org.apache.hudi.common.table.HoodieTableMetaClient.scanHoodieInstantsFromFileSystem(HoodieTableMetaClient.java:651) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.HoodieTableMetaClient.scanHoodieInstantsFromFileSystem(HoodieTableMetaClient.java:625) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.<init>(HoodieActiveTimeline.java:163) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.<init>(HoodieActiveTimeline.java:155) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.<init>(HoodieActiveTimeline.java:175) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.HoodieTableMetaClient.getActiveTimeline(HoodieTableMetaClient.java:352) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.HoodieTableMetaClient.<init>(HoodieTableMetaClient.java:153) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.HoodieTableMetaClient.newMetaClient(HoodieTableMetaClient.java:689) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.HoodieTableMetaClient.access$000(HoodieTableMetaClient.java:81) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.common.table.HoodieTableMetaClient$Builder.build(HoodieTableMetaClient.java:770) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.table.HoodieFlinkTable.create(HoodieFlinkTable.java:62) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.HoodieFlinkTableServiceClient.getHoodieTable(HoodieFlinkTableServiceClient.java:173) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.HoodieFlinkTableServiceClient.writeTableMetadata(HoodieFlinkTableServiceClient.java:179) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.HoodieFlinkWriteClient.writeTableMetadata(HoodieFlinkWriteClient.java:279) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:282) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:233) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:111) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:74) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:199) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doCommit(StreamWriteOperatorCoordinator.java:537) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:513) ~[hudi-flink1.16-bundle-0.13.0.jar:0.13.0]

torvalds-dev-testbot[bot] commented 10 months ago

torvalds.dev is analyzing the ticket

torvalds-dev-testbot[bot] commented 10 months ago

The issue described is a suspected memory leak involving org.apache.hadoop.hive.conf.HiveConf, apparently related to HiveSyncContext, when ingesting data from AWS Kinesis into Hudi on S3 using Flink SQL with the AWS Glue catalog as the Hive metastore. After running for some hours or days, the JobManager encounters an OutOfMemoryError.

Here is a plan of action:

  1. Confirm that the issue is reproducible: replicate the environment described above and follow the reproduction steps.

  2. Investigate the code paths involved: the stack trace points at the Hudi 0.13.0 commit path, so review those methods, and in particular how HiveConf instances are created within HiveSyncContext, for references that are never released.

  3. Identify and resolve the memory leak: determine which HiveConf instances are created but never become eligible for garbage collection, causing the JobManager to run out of memory, and make them reusable or releasable (see the sketch after this list).

  4. Test that the memory leak has been fixed: run the job again and verify that heap usage stays stable over time instead of climbing until OOM.

  5. Document the solution approach and findings to help others facing similar issues in the future.
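For step 3, a sketch of one possible fix direction, assuming the analysis confirms that a new HiveConf is built per sync cycle: create the instance once and reuse it across cycles. The class and method names here are illustrative, not actual Hudi code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;

// Hypothetical sketch: lazily build a single shared HiveConf instead of one
// per commit. Whether sharing is safe depends on how the sync path mutates it.
public class ReusableHiveConfHolder {
    private volatile HiveConf cached;

    public HiveConf get(Configuration hadoopConf) {
        HiveConf local = cached;
        if (local == null) {
            synchronized (this) {
                local = cached;
                if (local == null) {
                    local = new HiveConf(hadoopConf, HiveConf.class);
                    cached = local; // one instance shared by all later sync cycles
                }
            }
        }
        return local;
    }
}
```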

Note: Out-of-memory problems are complex and may require extensive work to resolve. A profiling tool such as the Eclipse Memory Analyzer Tool (MAT) may be needed to analyze memory consumption; running the JVM with -XX:+HeapDumpOnOutOfMemoryError produces a heap dump for MAT automatically when the OOM occurs.
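To support steps 1 and 4, a small self-contained helper (standard java.lang.management APIs; the class name is ours) can log the heap trend while the job runs; a used-heap baseline that keeps climbing across hours indicates the leak is still present:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapTrendLogger {
    // Logs heap usage every five minutes (max may print as -1 if undefined).
    public static void start() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }, 0, 5, TimeUnit.MINUTES);
    }
}
```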

Let's start with step 1: replicating the environment to reproduce the error.

Please share the updates or outputs you encounter as you proceed with each step, so we can decide on the next actions.