apache / hudi

Upserts, Deletes And Incremental Processing on Big Data.
https://hudi.apache.org/
Apache License 2.0

[SUPPORT] Should we introduce extensible simple bucket index ? #12210

Open TheR1sing3un opened 1 week ago

TheR1sing3un commented 1 week ago

Currently we have two types of bucket index engines: the simple bucket index and the consistent-hashing bucket index.

When a user builds a table with the simple bucket index, the amount of data in each bucket can grow over time, which hurts compaction performance. However, users have no way to adjust the number of buckets other than deleting and rebuilding the table. So do we need to provide a way to dynamically adjust the bucket count? Bucket resizing should not block reading or writing. We can use clustering to rewrite existing data into the new bucket layout, and dual-write new records during the resizing to ensure data consistency.
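
A minimal sketch of the underlying problem, assuming the simple bucket index assigns records by a hash-mod rule along the lines of bucket = hash(recordKey) % numBuckets (class and method names below are hypothetical): once the bucket count changes, most existing keys map to a different bucket, which is why resizing implies rewriting existing file groups, e.g. through clustering.

```java
import java.util.stream.IntStream;

// Sketch of a hash-mod bucket assignment and the effect of changing the bucket count.
public class SimpleBucketSketch {

    // Non-negative hash of the record key, then modulo the bucket count.
    static int bucketOf(String recordKey, int numBuckets) {
        return (recordKey.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }

    public static void main(String[] args) {
        int before = 8, after = 12;
        // Count how many of 10_000 synthetic keys land in a different bucket
        // once the bucket count changes: most of them do, so resizing means
        // rewriting most existing file groups.
        long moved = IntStream.range(0, 10_000)
                .filter(i -> bucketOf("key-" + i, before) != bucketOf("key-" + i, after))
                .count();
        System.out.println(moved + " of 10000 keys change bucket");
    }
}
```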

Why don't we use the consistent-hashing bucket engine? One concern is that the number of buckets can only be increased or decreased one by one.

So should we introduce an extensible simple bucket index? Welcome to the discussion!


TheR1sing3un commented 1 week ago

@danny0405 Hi! I have some ideas about an extensible simple bucket index. Looking forward to your reply!

danny0405 commented 1 week ago

The number of buckets can only be increased or decreased one by one

I don't think this is a real problem.

Bucket join relies on engine-specific naming of the files, which very probably differs from the Hudi style, so I don't think it is easy to generalize.

TheR1sing3un commented 1 week ago

Bucket join relies on engine-specific naming of the files, which very probably differs from the Hudi style, so I don't think it is easy to generalize.

IMO, the query engine only needs to know the number of buckets to route each record to the right partition (bucket) according to its hash value. The consistent-hashing bucket index cannot do this, because there is no fixed rule relating the number of buckets to the hash-to-bucket mapping.
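
A minimal sketch of this point, under the same hash-mod assumption as above (names below are hypothetical, not an actual engine API): knowing only the bucket count, an engine can recompute the target bucket for any key, e.g. to read just one bucket's file groups for an equality predicate, or to co-partition both sides of a bucket join.

```java
// Sketch of bucket pruning driven purely by the bucket count and a fixed hash rule.
public class BucketPruningSketch {

    static int bucketOf(String recordKey, int numBuckets) {
        return (recordKey.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }

    // For an equality predicate on the bucket key, only one bucket
    // (one set of file groups) needs to be scanned.
    static int bucketToScan(String predicateKey, int numBuckets) {
        return bucketOf(predicateKey, numBuckets);
    }

    public static void main(String[] args) {
        System.out.println("scan bucket " + bucketToScan("order-42", 8) + " of 8");
    }
}
```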

danny0405 commented 1 week ago

IMO, the query engine only needs to know the number of buckets to route each record to the right partition (bucket) according to its hash value.

This is very engine-specific because the "bucket" definition varies among different engines.

TheR1sing3un commented 1 week ago

IMO, the query engine only needs to know the number of buckets to route each record to the right partition (bucket) according to its hash value.

This is very engine-specific because the "bucket" definition varies among different engines.

Got it~

danny0405 commented 1 week ago

Also, the main gain of consistent hashing is that re-hashing rewrites as few data files as possible. Otherwise you have to rewrite the whole existing data set (the entire table), which is an unacceptable cost in many cases.
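
A toy sketch of why consistent hashing bounds the rewrite, assuming buckets own ranges on a hash ring (a hypothetical structure, not Hudi's actual consistent-hashing metadata): splitting one bucket only remaps keys that fall in that bucket's old range, so every other bucket's file groups stay untouched.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of a consistent-hash ring: upper bound of a hash range -> owning bucket id.
public class ConsistentHashSketch {

    private final TreeMap<Integer, Integer> ring = new TreeMap<>();

    void addBucket(int upperBound, int bucketId) {
        ring.put(upperBound, bucketId);
    }

    int bucketOf(String recordKey) {
        int h = recordKey.hashCode() & Integer.MAX_VALUE;
        Map.Entry<Integer, Integer> owner = ring.ceilingEntry(h);
        // wrap around to the first bucket if the hash is past the last bound
        return owner != null ? owner.getValue() : ring.firstEntry().getValue();
    }

    public static void main(String[] args) {
        ConsistentHashSketch index = new ConsistentHashSketch();
        index.addBucket(1 << 29, 0);
        index.addBucket(1 << 30, 1);
        index.addBucket(Integer.MAX_VALUE, 2);

        // Splitting bucket 2 hands the lower half of its old range to a new
        // bucket 3: only bucket 2's data is rewritten, buckets 0 and 1 are untouched.
        index.addBucket((1 << 30) + (1 << 29), 3);
        System.out.println("key 'order-42' lands in bucket " + index.bucketOf("order-42"));
    }
}
```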

TheR1sing3un commented 1 week ago

Also, the main gain of consistent hashing is that re-hashing rewrites as few data files as possible. Otherwise you have to rewrite the whole existing data set (the entire table), which is an unacceptable cost in many cases.

So how can people who are using the simple bucket index today deal with the increasing amount of data in their buckets? At present it seems the only solution is to delete and rebuild the table, which is quite costly. If we could support dynamically adjusting the number of buckets per partition through clustering, would that be more appropriate?

danny0405 commented 1 week ago

Another idea is to implement a lightweight index for each bucket, kind of a bit set of the record key hash codes.
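
A minimal sketch of what such a per-bucket bit set could look like (a hypothetical helper, not an existing Hudi index): a negative answer proves the key is absent from the bucket, so the bucket can be skipped; a positive answer may be a false positive and still requires reading the data.

```java
import java.util.BitSet;

// Sketch of a per-bucket bit set keyed by a truncated hash of the record key.
public class BucketBitSetSketch {

    private static final int BITS = 1 << 20;   // ~1M bits, about 128 KB per bucket
    private final BitSet bits = new BitSet(BITS);

    private static int slot(String recordKey) {
        return (recordKey.hashCode() & Integer.MAX_VALUE) % BITS;
    }

    void add(String recordKey) {
        bits.set(slot(recordKey));
    }

    // "false" means the key is definitely absent and the bucket can be skipped;
    // "true" may be a false positive.
    boolean mightContain(String recordKey) {
        return bits.get(slot(recordKey));
    }

    public static void main(String[] args) {
        BucketBitSetSketch index = new BucketBitSetSketch();
        index.add("uuid-123");
        System.out.println(index.mightContain("uuid-123")); // true
        System.out.println(index.mightContain("uuid-999")); // very likely false
    }
}
```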