Query id: e56b9025-be36-4507-a198-7d37eaa06ca8

┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ bytebase           │
│ db1                │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘
I generated the logs with this command:
clickhouse-backup restore -t bms_qa_democlient.point_time_series ariberisha13 > newtest.log
This time I keep incrementing the ariberisha name each time so I'm sure I'm not mixing up databases.
Backing up the whole database and restoring works just fine!
clickhouse-backup restore -t bms_qa_democlient.point_time_series ariberisha13 > newtest.log
Do you mean restore_remote? A plain restore command can't produce operation=download entries in the logs.
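A quick way to double-check which mode actually ran, assuming the newtest.log file from the command above, is to count download operations in it; a plain restore should log none, while restore_remote should:

```bash
# Count download operations in the shared log (illustrative only).
# A plain `restore` works from an already-downloaded local backup, so it logs
# no operation=download entries; `restore_remote` does.
grep -c 'operation=download' newtest.log
```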
Sorry, I forgot to include it. Yes, I did use restore_remote:
clickhouse-backup restore_remote --rm ariberisha13
Do you have only one clickhouse-server, or multiple servers?
Multiple servers, but some of them I use purely for testing.
Did you run clickhouse-backup create_remote -t bms_qa_democlient.point_time_series ariberisha6
on multiple servers at the same time?
No. That's why you see ariberisha6 and ariberisha13: I'm not mixing them up, to avoid human error.
What I did was run create_remote on the server with the point_time_series table and restore_remote on the destination server. A full backup works just fine!
According to the shared logs, ariberisha6 was created OK, uploaded OK, and downloaded OK, but failed to restore (are you sure you have ...?), while ariberisha13 restored successfully.
Please share
clickhouse-backup print-config
Could you look at ariberisha6/metadata/bms_qa_democlient/point_time_series.json on your remote storage and try to find
202205_1_534_376
in the parts section? Which disk name is used for this part?
After that, try to find
ariberisha6/shadow/bms_qa_democlient/point_time_series/<disk_name>_202205_1_534_376.<your_archive_extension>
Is it there?
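For reference, one possible way to inspect this on GCS, assuming gsutil and jq are available; the bucket and prefix below are placeholders to be replaced with your gcs.bucket and gcs.path settings:

```bash
# Show the parts section of the backup's table metadata and note which disk
# each part belongs to (placeholders: <bucket>, <path>).
gsutil cat gs://<bucket>/<path>/ariberisha6/metadata/bms_qa_democlient/point_time_series.json | jq '.parts'

# Then check whether an archive for 202205_1_534_376 exists under shadow/.
gsutil ls gs://<bucket>/<path>/ariberisha6/shadow/bms_qa_democlient/point_time_series/
```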
Moreover, you shared a strange database list here: https://github.com/AlexAkulov/clickhouse-backup/issues/446#issuecomment-1134631593
But I saw multiple CREATE DATABASE IF NOT EXISTS executions in the shared logs. Did you remove databases from the backup?
I will create ariberisha14 and share the details with you in a moment
Did you delete ariberisha6?
Do you have the same path field in your /etc/clickhouse-server/config.yml on multiple servers?
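One simple way to compare this across servers, assuming the key is spelled path at the top level of config.yml (/var/lib/clickhouse/ is the ClickHouse default), is to grep it on each machine:

```bash
# Run on every server and compare the output (illustrative).
grep '^path:' /etc/clickhouse-server/config.yml
```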
Hello.
On the server where I create the table backup (Clickhouse-1), I used this command:
clickhouse-backup create_remote -t bms_qa_democlient.point_time_series ariberisha14
Source Table Backup Clickhouse-Server1.txt
Original Version of point_time_series.json
point_time_series.json_original.txt
After the command finished, the backup ariberisha14 has this JSON:
point_time_series.json_restore_version.txt
and the shadow version of the restored backup (ariberisha14).
The config.yml that is set on all servers:
and the destination server logs after running
clickhouse-backup restore_remote --rm ariberisha14
Do you have the same path field in your /etc/clickhouse-server/config.yml on multiple servers?
Yes!
Original Version of point_time_series.json point_time_series.json_original.txt
After the command finished, the backup ariberisha14 has this JSON:
point_time_series.json_restore_version.txt
Both files contain malformed, invalid JSON. Are you sure you shared the original content?
Sorry, copy-paste didn't work so well, my bad! Here you go: this is the original, and how it is generated when doing a full backup. point_time_series.json.txt
Sorry, I still don't understand. You shared the point_time_series.json that is stored on the source server during create; after upload + download it should have a different format.
Could you share clickhouse-backup print-config for the source and destination servers?
Source:
general:
  remote_storage: gcs
  max_file_size: 1099511627776
  disable_progress_bar: false
  backups_to_keep_local: 0
  backups_to_keep_remote: 0
  log_level: info
  allow_empty_backups: false
  download_concurrency: 2
  upload_concurrency: 2
  restore_schema_on_cluster: ""
  upload_by_part: true
  download_by_part: true
clickhouse:
  username: kodedev
  password: XXX
  host: 172.16.33.11
  port: 9000
  disk_mapping: {}
  skip_tables:
  - system.*
  timeout: 30m
  freeze_by_part: false
  secure: false
  skip_verify: false
  sync_replicated_tables: true
  log_sql_queries: true
  config_dir: /etc/clickhouse-server/
  restart_command: systemctl restart clickhouse-server
  ignore_not_exists_error_during_freeze: true
  tls_key: ""
  tls_cert: ""
  tls_ca: ""
  debug: false
gcs:
  credentials_file: /etc/google/gcs.json
  credentials_json: ""
  bucket: clickhouse-1-backup
  path: clickhouse-dev-1-backup
  compression_level: 1
  compression_format: tar
  debug: false
  endpoint: ""
api:
  listen: localhost:7171
  enable_metrics: true
  enable_pprof: false
  username: ""
  password: ""
  secure: false
  certificate_file: ""
  private_key_file: ""
  create_integration_tables: false
  allow_parallel: false
Destination:
general:
  remote_storage: gcs
  max_file_size: 1099511627776
  disable_progress_bar: false
  backups_to_keep_local: 0
  backups_to_keep_remote: 0
  log_level: info
  allow_empty_backups: false
  download_concurrency: 1
  upload_concurrency: 1
  restore_schema_on_cluster: ""
  upload_by_part: true
  download_by_part: true
clickhouse:
  username: xxx
  password: xxx
  host: 172.16.33.11
  port: 9000
  disk_mapping: {}
  skip_tables:
  - system.*
  timeout: 30m
  freeze_by_part: false
  secure: false
  skip_verify: false
  sync_replicated_tables: true
  log_sql_queries: true
  config_dir: /etc/clickhouse-server/
  restart_command: systemctl restart clickhouse-server
  ignore_not_exists_error_during_freeze: true
  tls_key: ""
  tls_cert: ""
  tls_ca: ""
  debug: false
gcs:
  credentials_file: /etc/google/gcs.json
  credentials_json: ""
  bucket: clickhouse-1-backup
  path: clickhouse-dev-1-backup
  compression_level: 1
  compression_format: tar
  debug: false
  endpoint: ""
api:
  listen: localhost:7171
  enable_metrics: true
  enable_pprof: false
  username: ""
  password: ""
  secure: false
  certificate_file: ""
  private_key_file: ""
  create_integration_tables: false
  allow_parallel: false
You have the same clickhouse-server host in both the source and destination configs:
host: 172.16.33.11
Are the destination and source servers the same host?
clickhouse-backup requires direct access to /var/lib/clickhouse, so it must run on the same server where clickhouse-server runs.
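A minimal sketch of the fix, under the assumption that the destination machine runs its own clickhouse-server and that clickhouse-backup reads /etc/clickhouse-backup/config.yml (its default location):

```bash
# On the destination server, point clickhouse-backup at the local
# clickhouse-server rather than 172.16.33.11, e.g. in
# /etc/clickhouse-backup/config.yml:
#   clickhouse:
#     host: 127.0.0.1
# then re-run the restore on that machine:
clickhouse-backup restore_remote --rm ariberisha14
```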
So this was the problem! Thank you!
Hello again, hope you are well.
I am using Google Cloud Storage, and the way I create a single-table backup is with one of the commands you gave me:
clickhouse-backup create_remote --tables="bms_qa_democlient.point_time_series" ariberisha6
Backup Creation source server.txt
and I try to restore the database using restore_remote
clickhouse-backup restore -t bms_qa_democlient.point_time_series ariberisha6
I get an error:
error.log
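For completeness, a sketch of two equivalent ways to restore that single table from remote storage, assuming the backup name above; the plain restore form only works once the backup already exists locally:

```bash
# One step: download from remote storage and restore in a single command.
clickhouse-backup restore_remote --tables="bms_qa_democlient.point_time_series" ariberisha6

# Two steps: download the remote backup first, then restore the local copy.
clickhouse-backup download ariberisha6
clickhouse-backup restore --tables="bms_qa_democlient.point_time_series" ariberisha6
```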