apache / hudi

Upserts, Deletes And Incremental Processing on Big Data.
https://hudi.apache.org/
Apache License 2.0

[SUPPORT] data loss in new base file after compaction #8132

Open coffee34 opened 1 year ago

coffee34 commented 1 year ago
Describe the problem you faced

I encountered data loss in a Hudi MOR table: after compaction, a base file became smaller and lost data. The issue occurred on 2023-01-10, and I can only access the archived commits (the parquet and log files for those instants have already been deleted). I have uploaded all the related commits in archived_commit.csv. Below is the timeline of the base file 8fb29db6-81da-4455-a08d-ba3e7ee36856-0: it shows the file size was 116M up to line L23, but it dropped to 66M after compaction and lost ~890k records.

lineNum commitTime actionType actionState Plan
L16 20230110133240 compaction REQUESTED {"baseInstantTime": "20230110130744", "deltaFilePaths": [".8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20230110130744.log.1_53-7076606-255800368"], "dataFilePath": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0_24-7076286-255788001_20230110130744.parquet", "fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "partitionPath": "daas_date=2022", "metrics": {"TOTAL_LOG_FILES": 1.0, "TOTAL_IO_READ_MB": 116.0, "TOTAL_LOG_FILES_SIZE": 24243.0, "TOTAL_IO_WRITE_MB": 116.0, "TOTAL_IO_MB": 232.0}, "bootstrapFilePath": null}
L17 20230110133240 compaction INFLIGHT {"baseInstantTime": "20230110130744", "deltaFilePaths": [".8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20230110130744.log.1_53-7076606-255800368"], "dataFilePath": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0_24-7076286-255788001_20230110130744.parquet", "fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "partitionPath": "daas_date=2022", "metrics": {"TOTAL_LOG_FILES": 1.0, "TOTAL_IO_READ_MB": 116.0, "TOTAL_LOG_FILES_SIZE": 24243.0, "TOTAL_IO_WRITE_MB": 116.0, "TOTAL_IO_MB": 232.0}, "bootstrapFilePath": null}
L18 20230110133240 commit COMPLETED {"fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "path": "daas_date=2022/8fb29db6-81da-4455-a08d-ba3e7ee36856-0_27-7076664-255801644_20230110133240.parquet", "prevCommit": "20230110130744", "numWrites": 2060123, "numDeletes": 0, "numUpdateWrites": 99, "totalWriteBytes": 122155577, "totalWriteErrors": 0, "partitionPath": "daas_date=2022", "totalLogRecords": 99, "totalLogFiles": null, "totalUpdatedRecordsCompacted": 99, "numInserts": 0, "totalLogBlocks": 1, "totalCorruptLogBlock": 0, "totalRollbackBlocks": 0, "fileSizeInBytes": 122155577}
L19 20230110135603 commit REQUESTED
L20 20230110135603 commit INFLIGHT {"fileId": "3c82ba35-b701-4f34-882a-167146036ab3-0", "path": null, "prevCommit": "20230110133240", "numWrites": 0, "numDeletes": 0, "numUpdateWrites": 108, "totalWriteBytes": 0, "totalWriteErrors": 0, "partitionPath": null, "totalLogRecords": 0, "totalLogFiles": null, "totalUpdatedRecordsCompacted": 0, "numInserts": 0, "totalLogBlocks": 0, "totalCorruptLogBlock": 0, "totalRollbackBlocks": 0, "fileSizeInBytes": 0}
L21 20230110135603 commit COMPLETED {"fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "path": "daas_date=2022/.8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20230110133240.log.1_53-7077163-255815146", "prevCommit": "20230110133240", "numWrites": 106, "numDeletes": 0, "numUpdateWrites": 106, "totalWriteBytes": 25777, "totalWriteErrors": 0, "partitionPath": "daas_date=2022", "totalLogRecords": 0, "totalLogFiles": null, "totalUpdatedRecordsCompacted": 0, "numInserts": 0, "totalLogBlocks": 0, "totalCorruptLogBlock": 0, "totalRollbackBlocks": 0, "fileSizeInBytes": 25777}
L22 20230110135817 compaction REQUESTED {"baseInstantTime": "20230110133240", "deltaFilePaths": [".8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20230110133240.log.1_53-7077163-255815146"], "dataFilePath": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0_27-7076664-255801644_20230110133240.parquet", "fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "partitionPath": "daas_date=2022", "metrics": {"TOTAL_LOG_FILES": 1.0, "TOTAL_IO_READ_MB": 116.0, "TOTAL_LOG_FILES_SIZE": 25777.0, "TOTAL_IO_WRITE_MB": 116.0, "TOTAL_IO_MB": 232.0}, "bootstrapFilePath": null}
L23 20230110135817 compaction INFLIGHT {"baseInstantTime": "20230110133240", "deltaFilePaths": [".8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20230110133240.log.1_53-7077163-255815146"], "dataFilePath": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0_27-7076664-255801644_20230110133240.parquet", "fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "partitionPath": "daas_date=2022", "metrics": {"TOTAL_LOG_FILES": 1.0, "TOTAL_IO_READ_MB": 116.0, "TOTAL_LOG_FILES_SIZE": 25777.0, "TOTAL_IO_WRITE_MB": 116.0, "TOTAL_IO_MB": 232.0}, "bootstrapFilePath": null}
L24 20230110135817 commit COMPLETED {"fileId": "8fb29db6-81da-4455-a08d-ba3e7ee36856-0", "path": "daas_date=2022/8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20-7077198-255819567_20230110135817.parquet", "prevCommit": "20230110133240", "numWrites": 1169659, "numDeletes": 0, "numUpdateWrites": 54, "totalWriteBytes": 69228459, "totalWriteErrors": 0, "partitionPath": "daas_date=2022", "totalLogRecords": 106, "totalLogFiles": null, "totalUpdatedRecordsCompacted": 106, "numInserts": 52, "totalLogBlocks": 1, "totalCorruptLogBlock": 0, "totalRollbackBlocks": 0, "fileSizeInBytes": 69228459}
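
For reference, here is a rough sketch of how such a dump of archived instants can be produced with the Hudi client API (class and method names are from recent Hudi releases and may differ in 0.7.0; `tablePath` is a placeholder):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hudi.common.table.HoodieTableMetaClient

// Placeholder: base path of the Hudi table.
val tablePath = "hdfs:///path/to/table"

val metaClient = HoodieTableMetaClient.builder()
  .setConf(new Configuration())
  .setBasePath(tablePath)
  .build()

// The instants in question are no longer on the active timeline,
// so list them from the archived timeline instead.
metaClient.getArchivedTimeline().getInstants.forEach { instant =>
  println(s"${instant.getTimestamp} ${instant.getAction} ${instant.getState}")
}
```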

Expected behavior


Environment Description

Additional context

  1. We have set the parameter 'hoodie.compact.inline.max.delta.commits' to '1', which ensures that compaction runs after each delta commit.
  2. We are using default storage configurations such as a limitFileSize of 120M and a parquetBlockSize of 120M (a writer-config sketch follows this list).
  3. There were no rollbacks or deletes between these commits.
  4. This issue has also occurred with other basefiles such as b244c458-61cb-4535-9f4d-47c64d0cb169-0, 9241d114-332c-4336-bfd3-7a5345a87159-0, 724428c7-a600-41a0-887f-feb491fe8c69-0 ...
  5. One clue is that this issue seems to occur on base files once their size exceeds 100M.
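
A rough sketch of the writer setup described in items 1 and 2 above (not our exact job; the table name, record key, and precombine field are placeholders, and the option keys are the standard Hudi Spark datasource configs):

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// Placeholders: `df` is the incoming batch; record key and precombine field names
// are made up, the partition column matches this report (daas_date).
def upsert(df: DataFrame, tablePath: String): Unit = {
  df.write.format("hudi")
    .option("hoodie.table.name", "my_table")                       // placeholder table name
    .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
    .option("hoodie.datasource.write.operation", "upsert")
    .option("hoodie.datasource.write.recordkey.field", "id")       // placeholder
    .option("hoodie.datasource.write.partitionpath.field", "daas_date")
    .option("hoodie.datasource.write.precombine.field", "ts")      // placeholder
    // Compact after every delta commit, as described in item 1.
    .option("hoodie.compact.inline", "true")
    .option("hoodie.compact.inline.max.delta.commits", "1")
    // ~120MB target base file size and parquet block size, as described in item 2.
    .option("hoodie.parquet.max.file.size", (120 * 1024 * 1024).toString)
    .option("hoodie.parquet.block.size", (120 * 1024 * 1024).toString)
    .mode(SaveMode.Append)
    .save(tablePath)
}
```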
coffee34 commented 1 year ago

Based on the information provided, it appears that there were 106 updates at L21, but they became 54 updates and 52 inserts after compaction at L24. The related code for this behavior can be found at https://github.com/apache/hudi/blob/162dc18fc6a1e1d0db420a4735bc8c5a0ba7cf12/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieMergeHandle.java#L265 It seems that the developers are already aware that updates can sometimes be turned into inserts. Can you please provide more information on the specific conditions that can cause this behavior?
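
To make the question concrete, this is how I understand the merge step of compaction conceptually (plain Scala with generic types, not the actual HoodieMergeHandle code): log records are keyed by record key, base-file records that match a key are rewritten and counted as updates, and whatever is left over is written as inserts. I would like to understand how 106 log updates could end up with only 54 matches.

```scala
// Conceptual sketch only: generic types, not Hudi classes.
case class Record(key: String, payload: String)

def mergeForCompaction(baseFile: Iterator[Record],
                       logRecords: Map[String, Record]): (Seq[Record], Int, Int) = {
  val pending = scala.collection.mutable.Map(logRecords.toSeq: _*)
  val out = Seq.newBuilder[Record]
  var updates = 0

  // Pass 1: stream the old base file; keys that also appear in the log
  // are rewritten with the newer payload and counted as updates.
  baseFile.foreach { rec =>
    pending.remove(rec.key) match {
      case Some(newer) => out += newer; updates += 1
      case None        => out += rec
    }
  }

  // Pass 2: anything still pending never matched a base-file record,
  // so it is written out as an insert (the "updates become inserts" case).
  val inserts = pending.size
  pending.values.foreach(out += _)

  (out.result(), updates, inserts)
}
```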

I am aware that version 0.7.0 is outdated, and we are planning to use version 0.11.1 for our new pipeline. However, I am concerned that the same issue might occur there, since this part of the code still seems to be present in the new version.

danny0405 commented 1 year ago

Yeah, we found another data loss issue: https://github.com/apache/hudi/pull/8079. Let's wait for the 0.13.1 release.

nsivabalan commented 1 year ago

I am not sure if this was related to https://github.com/apache/hudi/pull/8079. I am trying to analyze all the details and will update if I have any findings.

nsivabalan commented 1 year ago

I can't seem to find any reason why this could happen, but I don't think 8079 is the issue; that would surface differently. In any case, 0.7.0 is very old and we have come a long way since then. We have fixed issues with compaction and Spark cache invalidation, for example https://github.com/apache/hudi/pull/4753 and https://github.com/apache/hudi/pull/4856, but I could not say for sure whether these are the cause. From the timeline provided, I could not reason about why the parquet size would shrink. In fact, the number of records dropped by roughly half from the previous version (numWrites went from 2,060,123 to 1,169,659), even though the log file modified only about 100 records :(

nsivabalan commented 1 year ago

Do you know if there could be any unintentional multi-writer interplay?

coffee34 commented 1 year ago

Thanks for the reply. Currently we have only one writer running, and it has been running without any errors for over half a year. However, I have set up a monitoring system to detect if this issue occurs again, and I am trying to find a way to reproduce it.
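
For what it's worth, the monitoring is essentially a per-file-group record count on top of Hudi's meta columns, compared across commits (a sketch; `tablePath` and the output location are placeholders):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().appName("hudi-filegroup-count-check").getOrCreate()
val tablePath = "hdfs:///path/to/table" // placeholder

// Snapshot read of the MOR table; the file group id is the part of
// _hoodie_file_name before the first underscore.
val counts = spark.read.format("hudi").load(tablePath)
  .withColumn("file_group", substring_index(col("_hoodie_file_name"), "_", 1))
  .groupBy(col("_hoodie_partition_path"), col("file_group"))
  .agg(count(lit(1)).as("num_records"))

// Persist the counts; a later run diffs them against this snapshot and alerts
// when a file group shrinks sharply with no corresponding deletes.
counts.write.mode("overwrite").parquet("hdfs:///path/to/monitoring/latest_counts") // placeholder
```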

nsivabalan commented 1 year ago

Got it, thanks. Since this is with 0.7.0, I am not sure whether we have already fixed anything related to this. Is it feasible to upgrade to 0.12.2 or 0.13.0? You will definitely get all the new features, performance improvements, etc.

ad1happy2go commented 1 year ago

@coffee34 Have you tried upgrading the Hudi version? Are you still facing this issue?

codope commented 1 year ago

@coffee34 Can you please try upgrading? If you still see data loss, can you share the write configurations and timeline?

nsivabalan commented 1 year ago

Hey @coffee34: can you help us with any more info on this? We are taking a serious look at all data consistency issues, so we are interested in getting to the bottom of this one.

nsivabalan commented 1 year ago

Hey @coffee34: the code you pointed out, where updates are converted to inserts, is not applicable to the bloom and simple indexes; that path is meant for a few other index types. For these two indexes, inserts always go into a new base file and only updates go into log files.

From the info provided, we could not get any leads and we don't see any other data loss reports from the community. We do have some reports around global index, but those will surface differently.

What you are reporting is completely different. Can you provide us with a full backup of ".hoodie" and your write configs? As of now, we can't think of any reason why such a huge data loss could occur.

Unless, for instance, you changed the record key / partition path config. Do you happen to know of any changes to your pipelines around the time you saw the data loss, such as config updates or anything of that sort? If you have a backup of the file slice in question, you can copy it locally (on your end), trigger compaction, and debug what's going on.
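
If you do have both parquet versions of that file slice backed up, a quick first check is to diff the record keys of the old and new base files directly (a sketch; the backup location is a placeholder, and the file names are the two versions from your timeline):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("basefile-diff").getOrCreate()

// Placeholder backup location; the two file names are the versions of file group
// 8fb29db6-81da-4455-a08d-ba3e7ee36856-0 before and after the 20230110135817 compaction.
val backup = "file:///backup/daas_date=2022"
val before = spark.read.parquet(s"$backup/8fb29db6-81da-4455-a08d-ba3e7ee36856-0_27-7076664-255801644_20230110133240.parquet")
val after  = spark.read.parquet(s"$backup/8fb29db6-81da-4455-a08d-ba3e7ee36856-0_20-7077198-255819567_20230110135817.parquet")

// Record keys present before compaction but missing afterwards.
val missing = before.select("_hoodie_record_key").except(after.select("_hoodie_record_key"))
println(s"missing record keys: ${missing.count()}")
```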

If none of these are feasible, I am afraid we cannot find the root cause.