coffee34 opened this issue 1 year ago
Based on the information provided, there were 106 updates at L21, but after compaction at L24 they became 54 updates and 52 inserts. The related code for this behavior is at https://github.com/apache/hudi/blob/162dc18fc6a1e1d0db420a4735bc8c5a0ba7cf12/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieMergeHandle.java#L265. It seems the developers are already aware that updates can sometimes be turned into inserts. Can you please provide more information on the specific conditions under which this can happen?
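To make the question concrete, here is a toy sketch in plain Java (not the actual HoodieMergeHandle code) of the bookkeeping I am asking about: if an incoming record's key is not found among the records read back from the old base file, it is written out as a new record and counted as an insert instead of an update. All class and variable names below are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Simplified, hypothetical sketch (not the actual Hudi code) of why a merge-style
 * rewrite can report "inserts" for records that arrived as updates: if an incoming
 * record's key is not found in the view of the old base file, the writer has nothing
 * to merge against and writes it as a brand-new record, counted as an insert.
 */
public class MergeHandleSketch {

    static int updatedRecords = 0;
    static int insertedRecords = 0;

    public static void main(String[] args) {
        // Keys supposedly present in the old base file. If this view is incomplete
        // (wrong file slice, stale listing, etc.), genuine updates fall through as inserts.
        Map<String, String> baseFileRecords = new LinkedHashMap<>();
        baseFileRecords.put("key-1", "old-value-1");
        baseFileRecords.put("key-2", "old-value-2");

        // Incoming records from the log: all three are "updates" from the writer's
        // point of view, but key-3 is missing from the base file view above.
        List<String[]> incoming = List.of(
                new String[]{"key-1", "new-value-1"},
                new String[]{"key-2", "new-value-2"},
                new String[]{"key-3", "new-value-3"});

        Map<String, String> newBaseFile = new LinkedHashMap<>();
        for (String[] record : incoming) {
            String key = record[0];
            if (baseFileRecords.containsKey(key)) {
                // Key found in the old base file: merged and counted as an update.
                newBaseFile.put(key, record[1]);
                updatedRecords++;
            } else {
                // Key not found: written as a new record and counted as an insert.
                newBaseFile.put(key, record[1]);
                insertedRecords++;
            }
        }
        // Carry over untouched records from the old base file.
        baseFileRecords.forEach(newBaseFile::putIfAbsent);

        System.out.println("updated=" + updatedRecords + ", inserted=" + insertedRecords);
    }
}
```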
I am aware that version 0.7.0 is outdated and that we are planning to use version 0.11.1 for our new pipeline. However, I am concerned that the same issue might occur, as this part of the code still seems to be present in the new version.
Yeah, we found another data loss issue: https://github.com/apache/hudi/pull/8079, let's wait for the 0.13.1 release.
I am not sure if this is related to https://github.com/apache/hudi/pull/8079. I am trying to analyze all the details and will update if I have any findings.
I can't find any reason why this could happen, but I don't think 8079 is the issue; that would surface differently. Anyway, 0.7.0 is very old and we have come a long way since then. We have fixed issues with compaction and Spark cache invalidation, for example https://github.com/apache/hudi/pull/4753 and https://github.com/apache/hudi/pull/4856, but I could not say for sure whether these are the cause. From the timeline provided, I could not reason about why the parquet file size would shrink. In fact, the number of records dropped by 50% from the previous version, even though the log file modified only around 100 records :(
Do you know if there could be any unintentional multi-writer interplay?
Thanks for the reply. Currently we have only one writer running, and it has been running without any errors for over half a year. However, I have set up a monitoring system to detect if this issue occurs again, and I am trying to find a way to reproduce it.
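Roughly, the monitoring boils down to a row-count comparison between consecutive runs, something like the sketch below. The table path, the way the previous count is passed in, and the 10% threshold are placeholders, and older Hudi builds may need format("org.apache.hudi") instead of the "hudi" short name.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

/**
 * Hypothetical monitoring sketch: after each writer run, count the records in the
 * Hudi table and compare against the count recorded for the previous run. A sudden,
 * unexplained drop (like the ~50% reduction described above) triggers an alert.
 */
public class HudiRowCountCheck {

    public static void main(String[] args) {
        String basePath = args[0];                    // e.g. s3://bucket/path/to/table (placeholder)
        long previousCount = Long.parseLong(args[1]); // count persisted from the previous run (placeholder)

        SparkSession spark = SparkSession.builder()
                .appName("hudi-row-count-check")
                .getOrCreate();

        // Snapshot read of the table. Older Hudi builds may need format("org.apache.hudi").
        Dataset<Row> table = spark.read().format("hudi").load(basePath);
        long currentCount = table.count();

        // Flag a drop of more than 10% between consecutive runs (threshold is an assumption).
        if (previousCount > 0 && currentCount < previousCount * 0.9) {
            System.err.println("ALERT: record count dropped from " + previousCount
                    + " to " + currentCount);
            System.exit(1);
        }
        System.out.println("OK: record count = " + currentCount);
        spark.stop();
    }
}
```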
Got it, thanks. Since this is with 0.7.0, I am not sure if we have already fixed anything related to this. Is it feasible to upgrade to 0.12.2 or 0.13.0? You will definitely get all the new features, performance improvements, etc.
@coffee34 Have you tried upgrading the Hudi version? Are you still facing this issue?
@coffee34 Can you please try upgrading, and if you still see data loss, share the write configurations and timeline?
Hey @coffee34: can you help us with any more info on this? We are taking a serious look into all data consistency issues, so we are interested in getting to the bottom of it.
Hey @coffee34: the code you pointed out, where updates get converted to inserts, does not apply to the bloom and simple indexes; that path is meant for a few other index types. For these two indexes, inserts will always go into a new base file and only updates can go into log files.
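To make that concrete, here is a minimal write sketch that pins the index type explicitly so the routing above applies. The table name, paths, and field names are placeholders, not taken from your pipeline.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

/**
 * Illustrative upsert with the index type set explicitly: with BLOOM (or SIMPLE),
 * inserts go to new base files and only updates are routed to log files of
 * existing file groups.
 */
public class UpsertWithBloomIndex {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("bloom-index-upsert").getOrCreate();
        Dataset<Row> updates = spark.read().parquet("s3://bucket/incoming-batch/"); // placeholder input

        updates.write().format("hudi")
                .option("hoodie.table.name", "my_table")                       // placeholder
                .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
                .option("hoodie.datasource.write.operation", "upsert")
                .option("hoodie.datasource.write.recordkey.field", "id")       // placeholder key field
                .option("hoodie.datasource.write.precombine.field", "ts")      // placeholder ordering field
                .option("hoodie.datasource.write.partitionpath.field", "dt")   // placeholder partition field
                // Non-global index types discussed above; inserts always create new base files here.
                .option("hoodie.index.type", "BLOOM")                          // or "SIMPLE"
                .mode(SaveMode.Append)
                .save("s3://bucket/path/to/hudi/table");                       // placeholder base path

        spark.stop();
    }
}
```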
From the info provided, we could not get any leads, and we don't see any other data loss reports from the community. We do have some reports around the global index, but those would surface differently.
What you are reporting is completely different. Can you provide us with a full backup of ".hoodie" and your write configs? As of now, we can't think of any reason why such a huge data loss could occur, unless you changed the record key / partition path config, for instance (see the small sketch after this comment for why that matters). Do you happen to know of any changes to your pipelines around the time you see the data loss, like config updates or anything of that sort? If you have a backup of the file slice in question, we can try to copy it locally (on your end), trigger compaction, and debug what's going on.
If none of these are feasible, I am afraid we can't find the root cause.
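A toy sketch (not Hudi code) of the record key / partition path point above: changing either config mid-stream makes the same logical row produce a different key, so it no longer matches the version already stored in the table and is treated as a brand-new record. All field names here are hypothetical.

```java
import java.util.Map;

/**
 * Toy illustration of why changing the record key or partition path config between
 * runs is dangerous: the same logical row yields a different (recordKey, partitionPath)
 * pair under the new config, so the existing copy is never matched during merge and
 * the "update" lands as an insert in a different file group / partition.
 */
public class KeyConfigChangeSketch {

    static String hudiKey(String recordKeyField, String partitionField, Map<String, String> row) {
        // Simplified key construction: recordKey plus partition path.
        return row.get(recordKeyField) + " @ " + row.get(partitionField);
    }

    public static void main(String[] args) {
        Map<String, String> row = Map.of(
                "id", "42", "order_id", "A-42", "dt", "2023-03-01", "region", "eu");

        // Key under the original writer config: recordkey.field=id, partitionpath.field=dt
        String oldKey = hudiKey("id", "dt", row);
        // Key under a changed config: recordkey.field=order_id, partitionpath.field=region
        String newKey = hudiKey("order_id", "region", row);

        // Different keys => the old row is not found, and its copy is left behind or later cleaned.
        System.out.println(oldKey.equals(newKey)
                ? "keys match" : "keys differ: " + oldKey + " vs " + newKey);
    }
}
```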
Environment Description
Hudi version : 0.7.0
Spark version : 2.4.4
Hive version :
Hadoop version :
Storage (HDFS/S3/GCS..) : S3
Running on Docker? (yes/no) : no