apache / hudi

Upserts, Deletes And Incremental Processing on Big Data.
https://hudi.apache.org/
Apache License 2.0

hoodie.properties.backup file does not exist #11510

Open donghaihu opened 1 week ago

donghaihu commented 1 week ago

Env: Hudi 0.14, Flink 1.6, CDH 6.32, HDFS 3.0.0, Hive 2.1.1. Action: querying a Hudi table. We are currently using Hudi 0.14, upgraded from 0.13; we did not encounter this issue on 0.13. Specific issue: a streaming task's table runs in production without any changes to the table or the task, yet the following exception is reported:

2024-06-25 01:28:10,582 WARN org.apache.hudi.common.table.HoodieTableConfig [] - Invalid properties file hdfs://10.0.5.131:8020/user/ods/ods_pom_in_transit_time_config/.hoodie/hoodie.properties: {}

2024-06-25 01:28:10,586 WARN org.apache.hudi.common.table.HoodieTableConfig [] - Could not read properties from hdfs://10.0.5.131:8020/user/ods/ods_pom_in_transit_time_config/.hoodie/hoodie.properties.backup: java.io.FileNotFoundException: File does not exist: /user/ods/ods_pom_in_transit_time_config/.hoodie/hoodie.properties.backup

The current streaming task deployment model is Session, and this issue occurs occasionally.

I have two initial questions:

1. Without any changes to the table schema, index, partitioning, etc., why was the hoodie.properties file cleared?
2. Why was the corresponding hoodie.properties.backup file not generated before hoodie.properties was modified?

This is not limited to a single table: nearly 30 of our production tables currently show similar problems, and the issue also occurs frequently in our development environment. We have not yet identified the root cause.
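For context on why the two warnings appear together, a backup-then-rewrite scheme like the one hoodie.properties uses can be sketched as follows. This is a hypothetical Python sketch, not Hudi's actual Java implementation; function names and the fallback logic are illustrative. The key observation: a reader that sees the main file mid-rewrite, after the backup has already been deleted (or before it was created), would log exactly "Invalid properties file" followed by "backup ... does not exist".

```python
import os
import shutil
import tempfile

def update_properties(path, new_props):
    """Rewrite a properties file with a backup for crash recovery (sketch)."""
    backup = path + ".backup"
    shutil.copy2(path, backup)            # 1. snapshot the current file
    with open(path, "w") as f:            # 2. rewrite the main file in place
        for key, value in new_props.items():
            f.write(f"{key}={value}\n")
    os.remove(backup)                     # 3. drop the backup once the rewrite succeeds

def read_properties(path):
    """Read the main file; fall back to the backup if the main file is empty/invalid."""
    backup = path + ".backup"
    for candidate in (path, backup):
        try:
            with open(candidate) as f:
                lines = [line.strip() for line in f if "=" in line]
            if lines:
                return dict(line.split("=", 1) for line in lines)
        except FileNotFoundError:
            pass                          # matches the FileNotFoundException in the log
    raise IOError("no valid properties file found")

# Demo: simulate one successful update cycle in a temp directory.
workdir = tempfile.mkdtemp()
props_path = os.path.join(workdir, "hoodie.properties")
with open(props_path, "w") as f:
    f.write("hoodie.table.name=ods_demo\n")
update_properties(props_path, {"hoodie.table.name": "ods_demo",
                               "hoodie.table.version": "6"})
result = read_properties(props_path)
```

Under this scheme, frequent rewrites widen the window in which a concurrent reader can observe an empty main file with no backup present, which matches the occasional nature of the failures.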

Thanks!

danny0405 commented 1 week ago

Maybe this is what you need: https://github.com/apache/hudi/pull/8609. But you are right, it looks like the table properties have been updated frequently after the upgrade. Can you add some logging in HoodieTableConfig so we can see why the update is triggered?
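One way to follow this suggestion is to log the call stack at the point where the properties rewrite happens, so the log reveals which code path triggered it. In Hudi itself this would be a Java change (e.g. logging a new Throwable's stack trace inside the update method); the Python sketch below demonstrates the same technique with hypothetical function names.

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("HoodieTableConfig.debug")

def log_update_trigger():
    """Capture and log the current call stack so the trigger of a
    properties rewrite is visible in the job logs (debugging aid)."""
    stack = "".join(traceback.format_stack())
    log.info("table properties update triggered from:\n%s", stack)
    return stack  # returned so callers/tests can inspect it

def simulated_writer_commit():
    # Hypothetical stand-in for whatever writer path rewrites the config.
    return log_update_trigger()

trace = simulated_writer_commit()
```

Reading the logged stack then shows whether the rewrite comes from table upgrade logic, writer initialization, or somewhere unexpected.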

danny0405 commented 1 week ago

Do these failing jobs use separate compaction with Spark, or do they have concurrent writes from Spark writers?

danny0405 commented 1 week ago

Spark enables the MDT (metadata table) by default while Flink does not; maybe that is why the table properties are updated frequently.

donghaihu commented 1 week ago

Flink for writing.

donghaihu commented 1 week ago

> enables

How can we configure it to avoid this issue?

danny0405 commented 1 week ago

I mean, do you have both a Spark job and a Flink job writing into the same table? If so, you might need to disable the MDT on the Spark writer.
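For readers who do hit the mixed-writer case described here, a minimal sketch of the Spark-side options follows. `hoodie.metadata.enable` is Hudi's config key for the metadata table; the table name, operation, and the commented write call are illustrative assumptions, not taken from this thread's jobs.

```python
# Hypothetical Spark datasource write options with the metadata table (MDT)
# disabled, so a Spark writer does not flip table properties that a Flink
# writer (MDT off by default) keeps off.
hudi_options = {
    "hoodie.table.name": "ods_pom_in_transit_time_config",  # example table name
    "hoodie.metadata.enable": "false",                      # disable MDT on the Spark writer
    "hoodie.datasource.write.operation": "upsert",
}
# Usage (assuming an existing DataFrame `df` and table path `base_path`):
# df.write.format("hudi").options(**hudi_options).mode("append").save(base_path)
```

Keeping the MDT setting consistent across all writers of a table avoids repeated rewrites of hoodie.properties as each engine toggles the flag.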

donghaihu commented 1 week ago

> I mean did you have both Spark and Flink job writing into the same table, if it is, you might need to disable the MDT on Spark writer.

Oh, we don't have that scenario. Each table has only one Flink task responsible for writing; there is no situation where multiple writers correspond to one table.

donghaihu commented 1 week ago

@danny0405: I found that this situation tends to occur after Session-mode tasks report errors.