Closed sixinyiyu closed 2 years ago
Hi, has anything been done about this? I'm seeing the same behavior: I get nothing in the log (and no errors), and the shadow directory never shows up in the backup directory. I'm running a clickhouse-server cluster in docker containers, and running clickhouse-backup as a docker container on the server that hosts the clickhouse-server container. I do see that the shadow directories are being created on my host server in /var/lib/docker/volumes/clickhouse/_data/shadow/, but they never appear in my backup directory, so the restore recreates the table, but always without data. It appears that clickhouse-backup is not able to locate the shadow directory when it's mapped as a docker volume? Should the disk_mapping variable be mapped to the docker volume rather than /var/lib/clickhouse?
thanks, david
Changing the local backup directory makes no sense. The backup directory must be on the same disk and filesystem, because a local backup in /var/lib/clickhouse/backup/shadow/db/table/part_name/* is just hard links to /var/lib/clickhouse/data/db/table/part_name/*.
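Because they are hard links, a local backup costs almost no extra disk space. The mechanism can be illustrated with a small sketch (the /tmp paths are made up for the demo; they stand in for the clickhouse data and backup directories):

```shell
# A hard link is a second directory entry pointing at the same inode.
rm -rf /tmp/hardlink_demo
mkdir -p /tmp/hardlink_demo/data /tmp/hardlink_demo/backup
echo "part data" > /tmp/hardlink_demo/data/part.bin

# "Back up" the part by hard-linking it, as clickhouse-backup does:
ln /tmp/hardlink_demo/data/part.bin /tmp/hardlink_demo/backup/part.bin

# Both paths report the same inode number, so the data exists only once on disk:
stat -c %i /tmp/hardlink_demo/data/part.bin
stat -c %i /tmp/hardlink_demo/backup/part.bin
```

This is also why the backup directory must live on the same filesystem: hard links cannot cross filesystem boundaries (`ln` fails with EXDEV).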
> I'm running clickhouse-server cluster in docker containers

Use the same volumes for the clickhouse-server and clickhouse-backup containers. Example docker-compose:

```yaml
services:
  clickhouse:
    image: clickhouse/clickhouse-server:latest
    volumes:
      - /var/lib/clickhouse:/var/lib/clickhouse
  clickhouse_backup:
    image: altinity/clickhouse-backup:latest
    volumes:
      - /var/lib/clickhouse:/var/lib/clickhouse
```
Ah, thanks, I figured it was something simple like that... It works now.
A few questions if you don't mind:

1) I'd like to have these backups placed somewhere else, not on the same server. Should they just be copied? I saw the note about rsync... is that the recommended method?
2) Does the "backups_to_keep_local" variable set the total number of backups kept locally, with the program deleting/cleaning up on a rolling basis (so it keeps just the last X backups)?
3) If enabling, for example, an S3 bucket, does it only get a copy of the current "create" (i.e. a backup of the backup), unaffected by the "backups_to_keep_local" variable, while I still get a local copy? In other words, can I use remote storage for the backup I was referring to in number 1, while keeping a small number local?
thanks again for your quick response and the work you guys are doing!! best, david
> I'd like to have these backups placed somewhere else, not on the same server. Should they just be copied? I saw the note about rsync... is that the recommended method?
Yes, rsync is good enough.

In the planned 2.x release we will implement remote_storage: custom, which could help automate this approach.
> Is the variable "backups_to_keep_local" to set the total number of backups kept locally, and the program deletes/cleans up on a rolling basis (so it's keeping just the list X number of backups)?
Yes. When this variable is non-zero, clickhouse-backup will delete the oldest local backups while executing the create command.
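The rolling cleanup amounts to a "keep the newest X" policy. A rough standalone sketch of that idea (made-up /tmp directory names standing in for date-named backups; this is not clickhouse-backup's actual code):

```shell
KEEP=3

# Five fake backups; lexicographic order stands in for creation order:
rm -rf /tmp/backups_demo
mkdir -p /tmp/backups_demo/2024-01-01 /tmp/backups_demo/2024-01-02 \
         /tmp/backups_demo/2024-01-03 /tmp/backups_demo/2024-01-04 \
         /tmp/backups_demo/2024-01-05

# Delete everything except the newest $KEEP entries:
cd /tmp/backups_demo
ls -1 | sort | head -n -"$KEEP" | xargs -r rm -rf

ls -1   # the three newest backups remain
```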
> If enabling, for example, an S3 bucket,
What exactly do you mean? Do you mean remote_storage: s3, or something else?
> If enabling, for example, an S3 bucket, does it only get a copy of the current "create" (i.e. a backup of the backup), unaffected by the "backups_to_keep_local" variable, while I still get a local copy? In other words, can I use remote storage for the backup I was referring to in number 1, while keeping a small number local?
Yes, you can. The usual workflow is:

```yaml
backups_to_keep_local: 1
backups_to_keep_remote: 7
```
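In context, those settings live in clickhouse-backup's config.yml under the general section. A sketch with a hypothetical bucket name and region (check the output of your version's default-config for the exact keys available to you):

```yaml
general:
  remote_storage: s3
  backups_to_keep_local: 1    # prune local backups down to the newest one
  backups_to_keep_remote: 7   # prune remote backups down to the newest seven
s3:
  bucket: my-clickhouse-backups   # hypothetical bucket name
  region: us-east-1               # hypothetical region
```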
Yes, remote storage. I was wondering if, for example, I could keep a rolling 30 days local, and on the last day of each month do a local and remote copy with the same create command, using remote storage for long-term backup storage. My remote storage would then end up being a copy of each month end, and I could use "backups_to_keep_remote", set to 12, to keep a rolling year of backups.
Avoid storing a lot of local backups; you will just allocate useless disk space.

Unfortunately, only a simple "keep the last X backups" retention policy is currently implemented. You are welcome to make a PR.
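In the meantime, a monthly schedule like the one described above could be driven externally, e.g. from cron. A hypothetical crontab fragment (the exact subcommands, such as create_remote, depend on your clickhouse-backup version, so treat this as a sketch, not a tested setup):

```
# Daily local backup at 01:00 (pruned by backups_to_keep_local):
0 1 * * * clickhouse-backup create

# On the 1st of each month at 02:00, also create and upload a remote backup
# (pruned by backups_to_keep_remote, e.g. set to 12 for a rolling year):
0 2 1 * * clickhouse-backup create_remote
```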
version info
```
┌─name────┬─path─────────┬───free_space─┬──total_space─┬─keep_free_space─┬─type──┐
│ default │ /data01/ckk/ │ 147627498496 │ 422621649920 │            1024 │ local │
│ disk1   │ /data02/ckk/ │ 104120437760 │ 422621649920 │            1024 │ local │
└─────────┴──────────────┴──────────────┴──────────────┴─────────────────┴───────┘
```
config.yml
The result: the log contains nothing about the shadow being moved.