chdb-io / chdb

chDB is an in-process OLAP SQL Engine 🚀 powered by ClickHouse
https://clickhouse.com/docs/en/chdb
Apache License 2.0

Cleaning up stale parts #107

Open ruslandoga opened 1 year ago

ruslandoga commented 1 year ago

👋

Just wanted to quickly check whether chdb automatically cleans up stale MergeTree parts after OPTIMIZE TABLE commands. And if so, how does it work?

auxten commented 1 year ago

No, we haven't done this yet. To be fixed.

ruslandoga commented 1 year ago

In that case this chat with GPT that explains the naming might be useful: https://chat.openai.com/share/94a7fa2d-5f73-4f9f-b2ba-681ed324ae35

ruslandoga commented 1 year ago

Seems like we can find the part with the highest LEVEL, check its MAX_BLOCK_NUMBER, and delete all parts with MAX_BLOCK_NUMBER less than that.
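To make the idea above concrete, here is a small sketch (not chdb's actual implementation). MergeTree part directory names encode `<partition_id>_<min_block>_<max_block>_<level>`, so staleness can be decided from names alone. Rather than comparing only against the single highest-level part, the sketch uses the slightly stricter containment check: a part is stale when another part in the same partition covers its whole block range at a higher level. The `Part`, `parse_part_name`, and `stale_parts` names are made up for illustration.

```python
# Sketch: identify stale MergeTree parts from their on-disk names.
# A name encodes <partition_id>_<min_block>_<max_block>_<level>; a part is
# stale when another part in the same partition covers its whole block range
# at a higher level. Assumes partition ids without underscores of their own
# (e.g. "all" or "202401").
from typing import NamedTuple

class Part(NamedTuple):
    partition_id: str
    min_block: int
    max_block: int
    level: int

def parse_part_name(name: str) -> Part:
    partition_id, min_b, max_b, level = name.rsplit("_", 3)
    return Part(partition_id, int(min_b), int(max_b), int(level))

def stale_parts(names: list) -> list:
    parts = {n: parse_part_name(n) for n in names}
    stale = []
    for name, p in parts.items():
        for other in parts.values():
            covers = (other.partition_id == p.partition_id
                      and other.min_block <= p.min_block
                      and p.max_block <= other.max_block
                      and other.level > p.level)
            if covers:
                stale.append(name)
                break
    return stale

# After merging all_1_1_0 and all_2_2_0 into all_1_2_1, the inputs are stale:
print(stale_parts(["all_1_1_0", "all_2_2_0", "all_1_2_1"]))
# → ['all_1_1_0', 'all_2_2_0']
```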

auxten commented 1 year ago

There should be some code in ClickHouse doing that in a background thread pool. We just need to find it and run it immediately on OPTIMIZE. Any interest in making a patch? @ruslandoga

ruslandoga commented 1 year ago

Yes, I'm interested and will try doing it today! :)

ruslandoga commented 1 year ago

The logic seems to be in MergeTreeData::clearOldPartsFromFilesystem, but I'm a bit stuck compiling ClickHouse: it's already been over four hours and I'm at [7671/8349] Building CXX object src/CMakeFiles/dbms.dir/Interpreters/DatabaseAndTableWithAlias.cpp.o 😅

I'm new to C++ so I wonder if there is a faster way to build ClickHouse / chdb just for tests?

lmangani commented 1 year ago

Hey @ruslandoga, I'm afraid the first build is as painful as you're experiencing (our CI action takes > 5h to build), but if you add ccache to the mix, the next compile and link rounds with minor modifications will be much faster. If you're on our discord, feel free to ping us on the dev channel and we'll try to assist.

ruslandoga commented 1 year ago

I tried to find your discord server but failed :)

auxten commented 1 year ago

> I tried to find your discord server but failed :)

lmangani == qxip

ruslandoga commented 1 year ago

Thank you @auxten, but google doesn't return anything for "qxip discord" either.

auxten commented 1 year ago

Here are we: https://discord.com/channels/1098133460310294528/1125668654965604422

ruslandoga commented 1 year ago

It doesn't seem like this link leads me anywhere :) I think it might be specific to your user account. Or maybe I need to be invited first.

lmangani commented 1 year ago

Hello @ruslandoga the link and invite to our discord is on the chdb readme in the contact section. Once you join you'll find us all at once on the chdb channels! Looking forward to you joining!

ruslandoga commented 11 months ago

A small note on the suggested implementation:

> We just need to find it out, run it immediately on Optimize.

It seems like ClickHouse cleans old parts roughly eight minutes after they become inactive (i.e. stop being referenced); this is the old_parts_lifetime MergeTree setting, which defaults to 480 seconds. Apparently it's done this way to make sure the dirty pages have been fsynced (assuming dirty_writeback_centisecs = 5 minutes), so that in case of a crash before the fsync the data could be restored. So I guess a custom implementation would need to call fsync on the new parts and only then clean the old ones.

I also found in the docs that the old parts can be deleted with SQL, so technically, what I initially wanted could be achieved like this:

INSERT INTO events FORMAT RowBinary <...rowbinary...>;

-- in the background process, every few minutes
SELECT * FROM system.parts WHERE table = 'events'; -- check how many parts there are, if too many, run OPTIMIZE
OPTIMIZE TABLE events;
-- also if there are inactive parts older than 8 minutes, drop them
ALTER TABLE events DROP PART '<part_id goes here>';
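The background process above could be sketched roughly like this in Python. Everything here is illustrative: `execute` stands in for whatever query interface is in use (e.g. a chdb session's query method), and `maintain`, `MAX_ACTIVE_PARTS`, and `INACTIVE_PART_TTL` are hypothetical names and thresholds, not chdb defaults.

```python
# Sketch of the background maintenance described above; not chdb's actual API.
# `execute` is any callable that runs SQL and returns rows as dicts.
import time
from typing import Callable, Optional

MAX_ACTIVE_PARTS = 10        # illustrative: OPTIMIZE when a table has more
INACTIVE_PART_TTL = 8 * 60   # seconds; mirrors ClickHouse's old_parts_lifetime

def maintain(execute: Callable, table: str, now: Optional[float] = None) -> None:
    now = time.time() if now is None else now
    rows = execute(
        "SELECT name, active, modification_time "
        f"FROM system.parts WHERE table = '{table}'"
    )

    # Too many active parts? Merge them down with OPTIMIZE.
    if sum(1 for r in rows if r["active"]) > MAX_ACTIVE_PARTS:
        execute(f"OPTIMIZE TABLE {table}")

    # Drop inactive parts old enough that their replacement has
    # presumably been flushed to disk.
    for r in rows:
        if not r["active"] and now - r["modification_time"] > INACTIVE_PART_TTL:
            part = r["name"]
            execute(f"ALTER TABLE {table} DROP PART '{part}'")
```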

auxten commented 11 months ago

@ruslandoga Nice. I think the 8-minute delay is reasonable in server ClickHouse, but not in an embedded database. I would patch chdb to run the cleanup automatically.

devcrafter commented 11 months ago

There are MergeTree settings which control usage of fsync(). Please check min_rows_to_fsync_after_merge and/or min_compressed_bytes_to_fsync_after_merge.

The relevant code is here

devcrafter commented 11 months ago

> Hey @ruslandoga I'm afraid the first build is as painful as you're experiencing (our action takes > 5h to build) but if you add ccache to the mix the next compile and link rounds with minor modifications will be much faster. If you're on our discord feel free to ping on the dev channel and we'll try to assist.

There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

lmangani commented 11 months ago

> There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build

I don't see any issues related to the build process, and we have the chdb-builder helper.

devcrafter commented 11 months ago

> There is also documentation on how to build ClickHouse - https://clickhouse.com/docs/en/development/build
>
> I don't see any related issues with the build process and we have chdb-builder helper

I had no idea you had it. I just read the comment and thought this should be covered by the build documentation. Just trying to help.

poundifdef commented 5 months ago

> Seems like we can find the part with the highest LEVEL, check its MAX_BLOCK_NUMBER, and delete all parts with MAX_BLOCK_NUMBER less than that.

Would this be safe to do?