Closed DonJuanMatus closed 2 years ago
could you share results for the following command?
clickhouse-client -q "SELECT * FROM system.disks"
> could you share results for the following command?
> clickhouse-client -q "SELECT * FROM system.disks"
[~]# clickhouse-client -q "SELECT * FROM system.disks"
default /var/lib/clickhouse/ 10352869376 23547904000 0 local
Does clickhouse-backup run on the same server as clickhouse-server, or do they run in different Docker containers?
clickhouse-backup needs direct access to the same /var/lib/clickhouse as clickhouse-server.
> Does clickhouse-backup run on the same server as clickhouse-server, or do they run in different Docker containers?
Yes, clickhouse-backup runs on the same server as the clickhouse-server instance.
> clickhouse-backup needs direct access to the same /var/lib/clickhouse as clickhouse-server.
Access to the directory is granted, and clickhouse-backup is able to create backups in /var/lib/clickhouse/backup:
[~]$ sudo -u clickhouse ls /var/lib/clickhouse/backup
2022-07-15T08-36-55
I've set debug: true in /etc/clickhouse-backup/config.yml:
s3:
  access_key: "*****" # S3_ACCESS_KEY
  secret_key: "********" # S3_SECRET_KEY
  bucket: "clickhouse-backup" # S3_BUCKET
  endpoint: "http://srv0:9919" # S3_ENDPOINT
  region: us-east-1 # S3_REGION
  acl: private # S3_ACL
  ...
  debug: true
and got the following output:
[~]# clickhouse-backup list
2022/07/19 04:19:45.345891 info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER'
2022/07/19 04:19:45.358445 info SELECT * FROM system.disks;
2022/07/19 04:19:45.444635 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND table='macros'
2022/07/19 04:19:45.451892 info SELECT * FROM system.macros
2022/07/19 04:19:45.603026 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: clickhouse-backup.srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Probably there's some incorrect behavior in the connection to s3? Did I configure the s3 block in config.yml properly?
sudo -u clickhouse ls -la /var/lib/clickhouse/backup/2022-07-15T08-36-55
> sudo -u clickhouse ls -la /var/lib/clickhouse/backup/2022-07-15T08-36-55
[~]# sudo -u clickhouse ls -la /var/lib/clickhouse/backup/2022-07-15T08-36-55
total 4
drwxr-x--- 4 clickhouse clickhouse 57 Jul 15 04:36 .
drwxr-x--- 4 clickhouse clickhouse 60 Jul 15 08:18 ..
drwxr-x--- 3 clickhouse clickhouse 16 Jul 15 04:36 metadata
-rw-r----- 1 clickhouse clickhouse 579 Jul 15 04:36 metadata.json
drwxr-x--- 3 clickhouse clickhouse 16 Jul 15 04:36 shadow
[~]#
Does clickhouse-backup list just hang, or does it exit with code 0?
Does clickhouse-backup list local work, or does it hang the same way?
Does your http://srv0:9919 allow connections? nc -vz srv0 9919?
Did you try something with a standard s3 client like aws-cli, with the same credentials?
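For anyone following along, the connectivity check suggested above can also be done without netcat. A minimal sketch using bash's built-in /dev/tcp pseudo-device, assuming the srv0:9919 MinIO endpoint from this thread (substitute your own host and port):

```shell
#!/usr/bin/env bash
# Host/port taken from this thread; replace with your own endpoint.
HOST=srv0
PORT=9919

# Try to open a TCP connection with a 3-second timeout; /dev/tcp is a
# bash pseudo-device, so no extra tools are required.
if timeout 3 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  echo "TCP connection to ${HOST}:${PORT} OK"
else
  echo "cannot reach ${HOST}:${PORT}"
fi

# With credentials configured, the same endpoint can then be probed with
# a standard client, e.g.:
#   aws --endpoint-url http://srv0:9919 s3 ls s3://clickhouse-backup
```

Either outcome is informative: a fast "cannot reach" points at DNS or TCP-level trouble before any S3 credentials come into play.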
> Does clickhouse-backup list just hang, or does it exit with code 0?

It hangs without any exit code.
> Does clickhouse-backup list local work, or does it hang the same way?
It seems that clickhouse-backup list local works correctly:
[~]# clickhouse-backup list local
2022/07/19 07:12:35.984221 info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER'
2022/07/19 07:12:35.994213 info SELECT * FROM system.disks;
2022-07-15T08-36-55 1.04KiB 15/07/2022 08:36:55 local
2022-07-15T12-18-35 1.04KiB 15/07/2022 12:18:35 local
[~]#
> Does your http://srv0:9919 allow connections? nc -vz srv0 9919? Did you try something with a standard s3 client like aws-cli, with the same credentials?
I use minio s3 storage, and a connection can be established successfully via the minio client from host 'srv1', which runs clickhouse-server:
Could you share the results of clickhouse-backup print-config, without sensitive credentials?
Do you use srv1 to run clickhouse-backup?
> Could you share the results of clickhouse-backup print-config, without sensitive credentials?
[srv1 ~]# clickhouse-backup print-config
general:
  remote_storage: s3
  max_file_size: 1073741824
  disable_progress_bar: true
  backups_to_keep_local: 0
  backups_to_keep_remote: 0
  log_level: info
  allow_empty_backups: false
  download_concurrency: 1
  upload_concurrency: 1
  restore_schema_on_cluster: ""
  upload_by_part: true
  download_by_part: true
clickhouse:
  username: default
  password: ""
  host: localhost
  port: 9000
  disk_mapping: {}
  skip_tables:
  - system.*
  - INFORMATION_SCHEMA.*
  - information_schema.*
  timeout: 5m
  freeze_by_part: false
  freeze_by_part_where: ""
  secure: false
  skip_verify: false
  sync_replicated_tables: true
  log_sql_queries: true
  config_dir: /etc/clickhouse-server
  restart_command: systemctl restart clickhouse-server
  ignore_not_exists_error_during_freeze: true
  tls_key: ""
  tls_cert: ""
  tls_ca: ""
  debug: false
s3:
  access_key: *****
  secret_key: ********
  bucket: clickhouse-backup
  endpoint: http://srv0:9919
  region: us-east-1
  acl: private
  assume_role_arn: ""
  force_path_style: false
  path: ""
  disable_ssl: true
  compression_level: 1
  compression_format: tar
  sse: ""
  disable_cert_verification: false
  storage_class: STANDARD
  concurrency: 1
  part_size: 0
  max_parts_count: 10000
  allow_multipart_download: false
  debug: false
gcs:
  credentials_file: ""
  credentials_json: ""
  bucket: ""
  path: ""
  compression_level: 1
  compression_format: tar
  debug: false
  endpoint: ""
cos:
  url: ""
  timeout: 2m
  secret_id: ""
  secret_key: ""
  path: ""
  compression_format: tar
  compression_level: 1
  debug: false
api:
  listen: localhost:7171
  enable_metrics: true
  enable_pprof: false
  username: ""
  password: ""
  secure: false
  certificate_file: ""
  private_key_file: ""
  create_integration_tables: false
  integration_tables_host: ""
  allow_parallel: false
ftp:
  address: ""
  timeout: 2m
  username: ""
  password: ""
  tls: false
  path: ""
  compression_format: tar
  compression_level: 1
  concurrency: 1
  debug: false
sftp:
  address: ""
  port: 22
  username: ""
  password: ""
  key: ""
  path: ""
  compression_format: tar
  compression_level: 1
  concurrency: 1
  debug: false
azblob:
  endpoint_suffix: core.windows.net
  account_name: ""
  account_key: ""
  sas: ""
  use_managed_identity: false
  container: ""
  path: ""
  compression_level: 1
  compression_format: tar
  sse_key: ""
  buffer_size: 0
  buffer_count: 3
  max_parts_count: 10000
> Do you use srv1 to run clickhouse-backup?
Yes, clickhouse-backup is deployed on srv1, alongside clickhouse-server.
Try adding the following to /etc/clickhouse-backup/config.yml:

s3:
  force_path_style: true
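For context, the option sits inside the existing s3 section of config.yml. A minimal sketch merging it with the values already shown in this thread (credentials redacted):

```yaml
s3:
  access_key: "*****"
  secret_key: "********"
  bucket: "clickhouse-backup"
  endpoint: "http://srv0:9919"
  region: us-east-1
  acl: private
  # With a custom endpoint such as MinIO, path-style addressing
  # (http://srv0:9919/clickhouse-backup/...) is usually required instead
  # of virtual-hosted style (http://clickhouse-backup.srv0:9919/...).
  force_path_style: true
  debug: true
```

Note that the very first debug output in this thread shows the virtual-hosted form (Host: clickhouse-backup.srv0:9919), which a default MinIO setup does not serve, so path-style requests are the natural fix to try.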
> Try adding the following to /etc/clickhouse-backup/config.yml:
>
> s3:
>   force_path_style: true
After setting force_path_style:
[srv1]# grep force_path_style /etc/clickhouse-backup/config.yml
force_path_style: true # S3_FORCE_PATH_STYLE
the behavior didn't change. With

s3:
  debug: true

set, the clickhouse-backup list command sends several similar requests to the minio server and doesn't display any result or error:
[srv1]# clickhouse-backup list
2022/07/19 15:35:20.534104 info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER'
2022/07/19 15:35:20.541774 info SELECT * FROM system.disks;
2022/07/19 15:35:20.556335 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND table='macros'
2022/07/19 15:35:20.572223 info SELECT * FROM system.macros
2022/07/19 15:35:20.577780 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220719/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220719T193520Z
Accept-Encoding: gzip
...
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220719/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220719T193903Z
Accept-Encoding: gzip
-----------------------------------------------------
Looks weird, you should get a response before the second request.
Could you share the full s3 debug output?
> Looks weird, you should get a response before the second request. Could you share the full s3 debug output?
Here's the full output:
[srv1 ~]# clickhouse-backup list
2022/07/20 03:54:32.508888 info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER'
2022/07/20 03:54:32.527803 info SELECT * FROM system.disks;
2022/07/20 03:54:32.626673 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND table='macros'
2022/07/20 03:54:32.647095 info SELECT * FROM system.macros
2022/07/20 03:54:32.788604 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075432Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:32.837441 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075432Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:32.958085 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075432Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:33.208885 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075433Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:33.474108 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075433Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:33.969694 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075433Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:35.775749 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075435Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:38.380929 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075438Z
Accept-Encoding: gzip
-----------------------------------------------------
2022/07/20 03:54:43.640098 info DEBUG: Request s3/ListObjectsV2 Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /clickhouse-backup?delimiter=%2F&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: srv0:9919
User-Agent: aws-sdk-go/1.43.0 (go1.18.3; linux; amd64)
Authorization: AWS4-HMAC-SHA256 Credential=admin/20220720/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=***
X-Amz-Content-Sha256: ***
X-Amz-Date: 20220720T075443Z
Accept-Encoding: gzip
-----------------------------------------------------
Finally I found out that there was some TCP connectivity trouble with the minio server. After resolving it, everything works fine: I successfully uploaded backups to minio, and the list command now works as well. But I was a bit concerned about the insufficient information in the s3 debug output of clickhouse-backup. It would probably be nice to make the s3 debug output more informative.
@DonJuanMatus maybe the root cause is the default of 30 max retries for s3
> @DonJuanMatus maybe the root cause is the default of 30 max retries for s3
Yes, apparently this value should be decreased to 5 or 10.
Thanks for the investigation!
Hi, I've installed clickhouse-backup and successfully created a backup, but the list command doesn't work properly: it does not output anything further. The upload command hangs the same way as well. I've configured it to upload backups to s3.
Versions: