
About Adding Replicas to a Cluster? #56400

Closed: yangshike closed this issue 11 months ago

yangshike commented 11 months ago

Hello! I recently encountered data synchronization issues while adding replicas to the cluster.

ClickHouse version: 23.7.1, database engine: Atomic. Starting from a two-replica cluster with this replica config:

<shard>
    <weight>1</weight>
    <internal_replication>true</internal_replication>
    <replica>
        <host>192.168.1.181</host>
        <port>9000</port>
    </replica>
    <replica>
        <host>192.168.1.179</host>
        <port>9000</port>
    </replica>
</shard>

Now one new node has been added: 192.168.2.140 9000. I copied the metadata files of the databases and tables to the new node. Creating the databases and tables on the new node was no problem, but the data has not been synchronized, and I do not see any replica information for the new node in ZooKeeper either. This is one of the tables:

CREATE TABLE test.test_table
(
    corp_id String,
    user_id String,
    user_name String
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY (corp_id, user_id)
SETTINGS index_granularity = 8192

I tried creating the node /clickhouse/tables/910a53d9-8181-8a39-6f10-0e503bcfae22/01/replicas/01-01-3/flags/force_restore_data in ZooKeeper, and I ran

sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data

but neither had any effect. What else do I need to do?
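
One way to confirm whether the new replica ever registered in ZooKeeper is to query it from ClickHouse itself through the system.zookeeper table. A minimal sketch, using the replicas path quoted above (substitute the UUID and shard of the table you are checking):

SELECT name
FROM system.zookeeper
WHERE path = '/clickhouse/tables/910a53d9-8181-8a39-6f10-0e503bcfae22/01/replicas';

If the new node's replica name is missing from the result, the table was never registered as a replica there, and no data will be fetched.
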
evillique commented 11 months ago

I copied the metadata files of the databases and tables to the new node

Did you create all the tables and databases by executing CREATE queries?

Is the macros section correctly filled on the new replica? Are there any errors in the logs?
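
The macros can be checked directly on the new replica; a minimal sketch against the built-in system.macros table:

SELECT * FROM system.macros;

It should list at least the shard and replica macros, with a replica value unique to the new host; otherwise the {shard} and {replica} substitutions in the engine definition cannot resolve.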

yangshike commented 11 months ago

Can't I manually create the table? The UUID of a manually created table is inconsistent with the original: ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}'). Error info:

No metadata in ZooKeeper for /clickhouse/tables/2c404eb4-57c4-4c50-aaf0-1c600eb5ded5/01/replicas/01-01-3

Because it is a newly added replica, ZooKeeper does not have any replica information for it. However, I had already copied the database and table metadata to the new machine, and after restarting the ClickHouse server the databases and tables were created automatically.

If the table was created with a query like the following, can the data not be synchronized properly by manually creating the table on the new node?

ReplicatedReplacingMergeTree('/clickhouse/tables/{layer}-{shard}/test/test_table', '{replica}')

evillique commented 11 months ago

Can't I manually create the table?

No, you have to execute all the CREATE queries. If you only copy the .sql files, ClickHouse will try to ATTACH these tables and will fail because there is no metadata in ZooKeeper for them.
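
Tables that were attached this way typically come up in a read-only state. A quick way to spot them, sketched against the standard system.replicas table:

SELECT database, table, is_readonly, replica_path
FROM system.replicas
WHERE is_readonly;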

yangshike commented 11 months ago

The original table creation statement is like this:

ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', gmt_modified)

Do I need to find the UUID of each table, modify the creation statement, and execute them one by one?

There are hundreds of tables, which is a huge workload.
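
For one table, the manual route would look roughly like this: read the table's UUID from system.tables on an existing replica, then create the table on the new node with that UUID spelled out, so the {uuid} macro resolves to the same ZooKeeper path. A sketch using the table and UUID quoted earlier in this thread (the type of gmt_modified is an assumption here):

CREATE TABLE test.test_table UUID '910a53d9-8181-8a39-6f10-0e503bcfae22'
(
    corp_id String,
    user_id String,
    user_name String,
    gmt_modified DateTime  -- version column for ReplacingMergeTree; actual type assumed
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', gmt_modified)
ORDER BY (corp_id, user_id);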

evillique commented 11 months ago

You can look into backups: https://clickhouse.com/docs/en/operations/backup with the structure_only parameter enabled.
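
A minimal sketch of the backup route, assuming a backup disk named 'backups' is configured on both servers (the disk and file names here are placeholders):

-- On an existing replica: back up the schema only, no data.
BACKUP DATABASE test TO Disk('backups', 'test_schema.zip') SETTINGS structure_only = 1;

-- On the new replica: recreating the tables registers them as replicas,
-- and replication then fetches the data.
RESTORE DATABASE test FROM Disk('backups', 'test_schema.zip');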

Or you could try to construct the CREATE statements from the data in system.tables.

evillique commented 11 months ago

For the future, you can also look into the Replicated database engine, which solves exactly the problem you are having.
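
A sketch of what that looks like; the ZooKeeper path here is an arbitrary example. Every table created in a Replicated database is replicated automatically, and a new server that creates the database with the same path receives all of its tables:

CREATE DATABASE test_replicated
ENGINE = Replicated('/clickhouse/databases/test_replicated', '{shard}', '{replica}');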

yangshike commented 11 months ago

In system.tables, the create_table_query shows the engine like this: ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', ...), with no specific UUID in it.

evillique commented 11 months ago

Something like this:

SELECT concat(
    'CREATE TABLE `', database, '`.`', table, '` UUID \'', toString(uuid), '\'',
    substr(create_table_query, length(database) + length(table) + 15),
    ';'
)
FROM system.tables
WHERE database NOT IN ['system', 'INFORMATION_SCHEMA', 'information_schema']
  AND engine != 'View'
INTO OUTFILE 'tables.sql'
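
The resulting tables.sql can then be replayed on the new replica, for example with clickhouse-client --multiquery < tables.sql.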

But compared to backups it does not calculate dependencies, so backups are still the preferred option.

yangshike commented 11 months ago

OK! Thanks very much.