Closed xuzifu666 closed 4 months ago
@danny0405 Does DefaultHoodieRecordPayload work here?
DefaultHoodieRecordPayload works fine, but Hudi's default payload is OverwriteWithLatestAvroPayload; from our discussion, the master branch has already changed this. Thanks @danny0405
@xuzifu666 OverwriteWithLatestAvroPayload is supposed to work like this only.
DefaultHoodieRecordPayload also honours the preCombine value when merging an incoming record with the one in storage, while OverwriteWithLatestAvroPayload will blindly choose the incoming record over whatever is in storage.
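The difference between the two payloads can be sketched as follows. This is a minimal, hypothetical Python illustration of the merge semantics described above, not Hudi's actual Java implementation; the function names and record shape are invented for the example.

```python
# Hypothetical sketch of the two payload merge semantics (not Hudi's code).

def overwrite_with_latest(incoming, stored):
    """OverwriteWithLatestAvroPayload semantics: incoming always wins."""
    return incoming

def default_payload_merge(incoming, stored, ordering_field="ts"):
    """DefaultHoodieRecordPayload semantics: the record with the larger
    preCombine (ordering) value wins; ties go to the incoming record."""
    if stored[ordering_field] > incoming[ordering_field]:
        return stored
    return incoming

stored   = {"id": "1", "name": "old", "ts": 200}
incoming = {"id": "1", "name": "new", "ts": 100}

# Overwrite payload blindly takes the incoming record, even though it is older.
print(overwrite_with_latest(incoming, stored)["name"])   # new
# Default payload keeps the stored record because its ts (200) is higher.
print(default_payload_merge(incoming, stored)["name"])   # old
```

With a preCombine field configured, only DefaultHoodieRecordPayload protects you from an out-of-order (older) incoming record clobbering newer data already in storage.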
@ad1happy2go @danny0405 All resolved. Thanks, closing the issue.
Tips before filing an issue
Have you gone through our FAQs?
Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org.
If you have triaged this as a bug, then file an issue directly.
Describe the problem you faced
A clear and concise description of the problem.
To Reproduce
Steps to reproduce the behavior:
1. Run the following test:

```scala
test("Test table type name first merge test") {
  withRecordType()(withTempDir { tmp =>
    val targetTable = generateTableName
    val tablePath = s"${tmp.getCanonicalPath}/$targetTable".replaceAll("\\\\", "/")
    spark.sql(
      s"""
         |create table ${targetTable} (
         |  id string,
         |  name string,
         |  ts bigint,
         |  day STRING,
         |  hour INT
         |) using hudi
         |tblproperties (
         |  'primaryKey' = 'id',
         |  'type' = 'mor',
         |  'preCombineField' = 'ts',
         |  'hoodie.index.type' = 'BUCKET',
         |  'hoodie.bucket.index.hash.field' = 'id',
         |  'hoodie.bucket.index.num.buckets' = 512
         |) partitioned by (day, hour)
         |location '${tablePath}'
         |""".stripMargin)
  })
}
```
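For reference, the payload class discussed above can be pinned explicitly rather than relying on the default. A hedged sketch, assuming the standard Hudi table property for the write payload class; verify the exact key against your Hudi version's configuration reference:

```sql
-- Hypothetical variant of the table above that opts in to
-- DefaultHoodieRecordPayload so the preCombine field (ts) is
-- honoured when merging against records already in storage.
create table target_table (
  id string, name string, ts bigint, day STRING, hour INT
) using hudi
tblproperties (
  'primaryKey' = 'id',
  'type' = 'mor',
  'preCombineField' = 'ts',
  'hoodie.datasource.write.payload.class' =
    'org.apache.hudi.common.model.DefaultHoodieRecordPayload'
) partitioned by (day, hour);
```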
Expected behavior
A clear and concise description of what you expected to happen.
Environment Description
Hudi version : 0.14.0
Spark version : 3.2
Hive version : 1.1.0
Hadoop version : 3.2.0
Storage (HDFS/S3/GCS..) : HDFS
Running on Docker? (yes/no) :
Additional context
Add any other context about the problem here.
Stacktrace
Add the stacktrace of the error.