TheR1sing3un opened this issue 1 week ago (status: Open)
@danny0405 Hi! I have some ideas about an extensible simple bucket index. Looking forward to your reply!
> The number of buckets can only be increased or decreased one by one

I don't think this is a real problem.
Bucket join relies on engine-specific naming of the files, which is very probably different from the Hudi style; I don't think it is easy to generalize.
IMO, the query engine only needs to know the number of buckets to distribute records to the right partition (bucket) according to the hash value. The consistent-hashing bucket index cannot provide this, because neither the number of buckets nor the hash-to-bucket mapping follows a fixed rule.
This is very engine-specific, because the definition of a "bucket" varies among different engines.
Got it~
Also, the main gain of consistent hashing is that it tries to rewrite as few data files as possible when re-hashing. Otherwise, you have to rewrite the entire existing data set (the whole table), which is an unacceptable cost in many cases.
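To make that trade-off concrete, here is a minimal illustrative sketch (not Hudi's actual implementation; the class and method names are made up) of a consistent-hash ring. Each bucket owns a range of ring positions, so splitting one overloaded bucket only re-assigns the keys that already belonged to it, and only that bucket's files need rewriting; changing N in a hash-mod-N scheme would instead move almost every key.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative consistent-hash ring: ring position -> bucket id.
public class ConsistentHashRing {
    private final TreeMap<Long, Integer> ring = new TreeMap<>();

    public void addBucket(long ringPosition, int bucketId) {
        ring.put(ringPosition, bucketId);
    }

    private long hash(String recordKey) {
        // Unsigned 32-bit view of the key's hash, i.e. a position on the ring.
        return recordKey.hashCode() & 0xffffffffL;
    }

    /** A key is owned by the first bucket clockwise from its ring position. */
    public int bucketFor(String recordKey) {
        Map.Entry<Long, Integer> owner = ring.ceilingEntry(hash(recordKey));
        return (owner != null ? owner : ring.firstEntry()).getValue();
    }

    // Splitting bucket B means inserting a new ring position inside B's range:
    // only keys previously owned by B move; every other bucket is untouched.
}
```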
So how can people who are using the simple bucket index today deal with the growing amount of data in their buckets? At present it seems the only solution is to drop and rebuild the table, which is quite expensive. If we could support dynamically adjusting the number of buckets for each partition through clustering, would that be more appropriate?
Another idea is to implement a lightweight index for each bucket, a kind of bit set of the record-key hash codes.
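For illustration, a minimal sketch of what such a per-bucket bit set could look like (a hypothetical class, not an existing Hudi API). It behaves like a one-hash Bloom filter: a cleared bit means the key is definitely not in the bucket, a set bit means it might be.

```java
import java.util.BitSet;

// Hypothetical per-bucket index: a fixed-size bit set over record-key hash codes.
public class BucketKeyBitSetIndex {
    private final BitSet bits;
    private final int sizeBits; // total number of bits in the set

    public BucketKeyBitSetIndex(int sizeBits) {
        this.sizeBits = sizeBits;
        this.bits = new BitSet(sizeBits);
    }

    private int position(String recordKey) {
        // Spread the hash and map it into [0, sizeBits).
        int h = recordKey.hashCode();
        h ^= (h >>> 16);
        return (h & Integer.MAX_VALUE) % sizeBits;
    }

    /** Record that a key was written into this bucket's file group. */
    public void add(String recordKey) {
        bits.set(position(recordKey));
    }

    /** false: the key is definitely not in this bucket; true: it may be present. */
    public boolean mightContain(String recordKey) {
        return bits.get(position(recordKey));
    }
}
```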
Currently we have two types of bucket index engines:
- SIMPLE (a fixed number of buckets)
- CONSISTENT_HASHING
When a user builds a table with the simple bucket index, the amount of data in each bucket can grow over time, hurting compaction performance. However, users have no way to adjust the number of buckets other than dropping and rebuilding the table. So do we need to provide a way to adjust the bucket count dynamically? Bucket resizing should not block reading or writing. We can use clustering to adjust the number of buckets, and dual-write new records during the resize to keep the data consistent.
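A minimal sketch, assuming the standard hash-mod rule of the simple bucket index, of the two pieces such a resize would need: the bucket-assignment rule that clustering would apply with the new bucket count, and a dual-write router that targets both the old and the new layout while the rewrite is in flight. All class and method names here are hypothetical, not Hudi code.

```java
// Hypothetical helper for the proposed clustering-based bucket resize.
public class BucketResizeRouter {

    /** Simple bucket index rule: a fixed hash of the record key modulo the bucket count. */
    static int bucketId(String recordKey, int numBuckets) {
        return (recordKey.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }

    private final int oldNumBuckets;
    private final int newNumBuckets;

    BucketResizeRouter(int oldNumBuckets, int newNumBuckets) {
        this.oldNumBuckets = oldNumBuckets;
        this.newNumBuckets = newNumBuckets;
    }

    /** Bucket the record lands in under the old layout (still served to readers). */
    int oldLayoutBucket(String recordKey) {
        return bucketId(recordKey, oldNumBuckets);
    }

    /** Bucket the record lands in under the new layout being built by clustering. */
    int newLayoutBucket(String recordKey) {
        return bucketId(recordKey, newNumBuckets);
    }
}
```

Once clustering has finished rewriting the existing file groups with the new bucket count, the dual write can stop and readers switch over to the new layout.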
Why don't we use the consistent-hash bucket engine?
So should we introduce an extensible simple bucket index? Welcome to the discussion!