Closed: michael-liumh closed this issue 2 years ago.
@michael-liumh Unfortunately, COS storage is not tested enough currently.
Could you share debug logs?
LOG_LEVEL=debug ./clickhouse-backup download 1111111 -c config.yml
2021/12/14 20:43:28 debug SELECT value FROM system.build_options where name='VERSION_INTEGER'
2021/12/14 20:43:28 debug SELECT * FROM system.disks;
HEAD / HTTP/1.1
Host: test-1301702152.cos.ap-guangzhou.myqcloud.com
Authorization: q-sign-algorithm=sha1&q-ak=AKIDEDxReaEcjZ7OP9bmgJgmcBUWHmzvaz7x&q-sign-time=1639485808;1639489408&q-key-time=1639485808;1639489408&q-header-list=host&q-url-param-list=&q-signature=17ca21d1c3af5c01362f15e5e15913c7931f59a4
User-Agent: cos-go-sdk-v5/0.7.30
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/xml
Date: Tue, 14 Dec 2021 12:43:28 GMT
Server: tencent-cos
X-Cos-Bucket-Region: ap-guangzhou
X-Cos-Request-Id: NjFiODkxNzBfNDUzMTI3MGJfMjliNTZfZWZlNmJiZA==
Content-Length: 0

GET /?delimiter=%2F&prefix=test-1301702152 HTTP/1.1
Host: test-1301702152.cos.ap-guangzhou.myqcloud.com
Authorization: q-sign-algorithm=sha1&q-ak=AKIDEDxReaEcjZ7OP9bmgJgmcBUWHmzvaz7x&q-sign-time=1639485808;1639489408&q-key-time=1639485808;1639489408&q-header-list=host&q-url-param-list=delimiter;prefix&q-signature=e78ed9543105812f9e14e3ab267a22ea4bc4e1e7
User-Agent: cos-go-sdk-v5/0.7.30

HTTP/1.1 200 OK
Content-Length: 240
Connection: keep-alive
Content-Type: application/xml
Date: Tue, 14 Dec 2021 12:43:28 GMT
Server: tencent-cos
X-Cos-Bucket-Region: ap-guangzhou
X-Cos-Request-Id: NjFiODkxNzBfNDUzMTI3MGJfMjliNGFfZjAzMzZjNA==
2021/12/14 20:43:28 error '1111111' is not found on remote storage
Unfortunately, this issue can't be resolved immediately.
Try using https://rclone.org/s3/#tencent-cos
instead of clickhouse-backup upload / download,
or remote_storage: s3
instead of COS directly.
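For the rclone route, a minimal remote definition could look like the fragment below. This is a sketch based on rclone's documented Tencent COS support; the remote name and credential placeholders are illustrative:

```
[tencent-cos]
type = s3
provider = TencentCOS
access_key_id = AKIDxxxxxxxx
secret_access_key = xxxxxxxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
```

With that in place, something like `rclone copy /var/lib/clickhouse/backup/mybackup tencent-cos:my-bucket/clickhouse_backups/` would push a local backup to the bucket.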
OK, thanks, I can use the Python SDK to upload and download.
@michael-liumh sorry for the late reply, could you try version 1.5.0 and share the results of the following command (without sensitive credentials)?
clickhouse-backup print-config
Same error:
2022/08/11 17:09:32.229950 debug start uploadTableData biz.quan_ext_contact_follow with concurrency=1 len(table.Parts[...])=58
2022/08/11 17:09:32.238120 debug start upload 2586 files to test20220811/shadow/biz/quan_ext_contact_follow/default_1.tar
2022/08/11 17:20:37.949874 error UploadCompressedStream return error: PUT https://test-ops-xxx.cos.ap-guangzhou.myqcloud.com/clickhouse_backups/ck1/test20220811/shadow/biz/quan_ext_contact_follow/default_1.tar: 400 EntityTooSmall(Message: Your proposed upload is smaller than the minimum allowed object size., RequestId: NjJmNGM3NGNfMmViNWZiMDlfNGRjZl9hYmYzZGE=, TraceId: OGVmYzZiMmQzYjA2OWNhODk0NTRkMTBiOWVmMDAxODc0OWRkZjk0ZDM1NmI1M2E2MTRlY2MzZDhmNmI5MWI1OTQyYWVlY2QwZTk2MDVmZDQ3MmI2Y2I4ZmI5ZmM4ODFjMWUwY2I0ZjhhODAxOTMwNWQ3OGVkYjBkYmEwZjNhMWQ=)
2022/08/11 17:20:37.949958 error can't acquire semaphore during Upload table: context canceled backup=test20220811 operation=upload
2022/08/11 17:20:37.950784 debug start uploadTableData biz.quan_ext_contact_follow_tmp_20220424 with concurrency=1 len(table.Parts[...])=1
2022/08/11 17:20:37.953169 debug start upload 57 files to test20220811/shadow/biz/quan_ext_contact_follow_tmp_20220424/default_1.tar
2022/08/11 17:20:39.134355 debug finish upload to test20220811/shadow/biz/quan_ext_contact_follow_tmp_20220424/default_1.tar [100.00% 1s]
2022/08/11 17:20:39.134391 debug finish uploadTableData biz.quan_ext_contact_follow_tmp_20220424 with concurrency=1 len(table.Parts[...])=1 metadataFiles=map[default:[default_1.tar]], uploadedBytes=11145216
2022/08/11 17:20:39.310809 info done backup=test20220811 duration=1.361s operation=upload size=10.63MiB table=biz.quan_ext_contact_follow_tmp_20220424
2022/08/11 17:20:39.310873 error one of upload go-routine return error: one of uploadTableData go-routine return error: can't upload: PUT https://test-ops-xxx.cos.ap-guangzhou.myqcloud.com/clickhouse_backups/ck1/test20220811/shadow/biz/quan_ext_contact_follow/default_1.tar: 400 EntityTooSmall(Message: Your proposed upload is smaller than the minimum allowed object size., RequestId: NjJmNGM3NGNfMmViNWZiMDlfNGRjZl9hYmYzZGE=, TraceId: OGVmYzZiMmQzYjA2OWNhODk0NTRkMTBiOWVmMDAxODc0OWRkZjk0ZDM1NmI1M2E2MTRlY2MzZDhmNmI5MWI1OTQyYWVlY2QwZTk2MDVmZDQ3MmI2Y2I4ZmI5ZmM4ODFjMWUwY2I0ZjhhODAxOTMwNWQ3OGVkYjBkYmEwZjNhMWQ=)
backup command:
LOG_LEVEL=debug ./clickhouse-backup -c config.yml create_remote test20220811
./clickhouse-backup -version
Version: 1.5.0 Git Commit: 4f2dbfcea34eab7edce42a48a26845b8d02cdfb3 Build Date: 2022-07-30
cat config.yml
general:
  remote_storage: cos
  max_file_size: 0
  disable_progress_bar: false
  backups_to_keep_local: 1
  backups_to_keep_remote: 7
  log_level: info
  allow_empty_backups: false
  download_concurrency: 1
  upload_concurrency: 1
  restore_schema_on_cluster: ""
  upload_by_part: false
  download_by_part: false
clickhouse:
  username: default
  password: "xxxxx"
  host: localhost
  port: 9000
  disk_mapping: {}
  skip_tables:
    - system.*
    - INFORMATION_SCHEMA.*
    - information_schema.*
  timeout: 30m
  freeze_by_part: false
  freeze_by_part_where: ""
  secure: false
  skip_verify: false
  sync_replicated_tables: false
  log_sql_queries: true
  config_dir: /etc/clickhouse-server/
  restart_command: systemctl restart clickhouse-server
  ignore_not_exists_error_during_freeze: true
  tls_key: ""
  tls_cert: ""
  tls_ca: ""
  debug: false
cos:
  url: "https://test-ops-xxx.cos.ap-guangzhou.myqcloud.com"
  timeout: 24h
  secret_id: "xxx"
  secret_key: "xxx"
  path: "clickhouse_backups/ck1/"
  compression_format: tar
  compression_level: 1
  debug: false
s3:
  access_key: "xxx"
  secret_key: "xxx"
  bucket: "test-ops-xxx"
  endpoint: "https://cos.ap-guangzhou.myqcloud.com"
  region: ap-guangzhou
  acl: default
  assume_role_arn: ""
  force_path_style: false
  path: "clickhouse_backups/ck1/"
  disable_ssl: false
  compression_level: 1
  compression_format: tar
  sse: ""
  disable_cert_verification: false
  storage_class: STANDARD
  concurrency: 4
  part_size: 16777216
  max_parts_count: 10000
  allow_multipart_download: false
  debug: false
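For context on the 400 EntityTooSmall above: COS multipart uploads (like S3) reject parts below a minimum size (1 MiB per COS documentation, except the last part) and cap the number of parts at 10000. A stdlib-only sketch of picking a part size that satisfies both constraints, starting from the `part_size` and `max_parts_count` values in the config above (`choosePartSize` and the constants are illustrative, not clickhouse-backup's actual code):

```go
package main

import "fmt"

const (
	minPartSize  = 1 << 20 // COS minimum multipart part size (1 MiB), per COS docs
	maxPartCount = 10000   // COS maximum number of parts per multipart upload
)

// choosePartSize returns a part size that satisfies COS multipart
// constraints for an object of totalSize bytes, starting from the
// configured preferred size (e.g. part_size: 16777216 in config.yml).
func choosePartSize(totalSize, preferred int64) int64 {
	ps := preferred
	if ps < minPartSize {
		ps = minPartSize // parts below 1 MiB trigger 400 EntityTooSmall
	}
	// Grow the part size until the object fits into maxPartCount parts.
	for totalSize/ps >= maxPartCount {
		ps *= 2
	}
	return ps
}

func main() {
	fmt.Println(choosePartSize(100<<20, 16<<20)) // 100 MiB object: 16 MiB parts are fine
	fmt.Println(choosePartSize(200<<30, 16<<20)) // 200 GiB object: grows to 32 MiB parts
}
```

The point is that a fixed part size can violate either limit depending on object size, so a streaming uploader has to adjust it rather than use the configured value blindly.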
I think you can use client.Object.Upload instead of client.Object.Put in cos.go:140, because client.Object.Put has a minimum size limit.
Thanks a lot for the suggestion, I will try!
Your original topic
https://github.com/AlexAkulov/clickhouse-backup/issues/316#issuecomment-993503130
was about download, not upload.
Failed upload to COS is a duplicate of https://github.com/AlexAkulov/clickhouse-backup/issues/464
Ooh, sorry!
I tested it, it works in 1.5.0.
Could you clarify, does upload also work for COS or not?
Upload got the same error.