usma0118 opened this issue 2 months ago
@usma0118 I got something similar:
[DB] Moving backup to external storage with blobxfer
mv: cannot stat '/tmp/backups/01_dbbackup.FrSQds/*.': No such file or directory
mv: preserving times for '/backup/myname_20241022-085939.sql.gz': Operation not permitted
mv: preserving permissions for ‘/backup/myname.sql.gz’: Operation not permitted
and my env vars:
CONTAINER_ENABLE_MONITORING : false
DB_CLEANUP_TIME : 10080
DB_HOST : mysql-xxx.mysql.database.azure.com
DB_NAME : myname01,myname02
DB_PASS : secret(backup-settings)[DB_PASS]
DB_TYPE : mysql
DB_USER : backup
DEFAULT_BACKUP_BEGIN : 0130
DEFAULT_BACKUP_LOCATION : blobxfer
DEFAULT_BLOBXFER_MODE : file
DEFAULT_BLOBXFER_REMOTE_PATH : my-backup-path
DEFAULT_BLOBXFER_STORAGE_ACCOUNT : myaccount-dev001
DEFAULT_BLOBXFER_STORAGE_ACCOUNT_KEY : secret(backup-settings)[BLOBXFER_STORAGE_ACCOUNT_KEY]
DEFAULT_CHECKSUM : NONE
DEFAULT_COMPRESSION : GZ
DEFAULT_DEBUG_MODE : false
DEFAULT_EXTRA_OPTS : --complete-insert --no-create-db
DEFAULT_MYSQL_CLIENT : mysql
DEFAULT_SPLIT_DB : true
TIMEZONE : Europe/Amsterdam
As far as I can see there is nothing fancy here, so I have no idea what goes wrong. I end up with broken backups all the time: the backup itself is fine, but the copy step goes completely wrong and the remote ends up with a 0-byte file, while the job still exits with code 0.
@tiredofit do you have an idea what this could be? It seems user related? When I go inside the container and start the process, it runs by default as "root"?
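For reference, this is roughly what I ran inside the pod to check which user the backup actually runs as and what the /backup mount looks like (the pod name is just a placeholder; the paths are the ones from the log above):

kubectl exec -it <backup-pod> -- sh
ps -o user,pid,args                          # which user the backup scripts run as
mount | grep -E ' /backup | /tmp/backups '   # what filesystem type /backup is mounted as
ls -ln /tmp/backups /backup                  # numeric uid/gid of the staged and copied files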
UPDATE: What I do notice is this line:
if [ "${backup_job_checksum}" != "none" ] ; then run_as_user mv "${temporary_directory}"/*."${checksum_extension}" "${backup_job_filesystem_path}"/; fi
When there is no checksum_extension set, it will try to move "*.", which of course causes an error, I think. Digging further, I noticed the main issue is permissions and the user: the script runs things as a different user, and the Azure volume in this case is SMB, so that user cannot access the generated files. I now set DBBACKUP_USER to root and everything works fine (the checksum step just throws an error and is therefore skipped; I guess that still needs to be addressed).
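For completeness, this is not the project's actual fix, just a sketch of the guard I would expect around that mv, so an empty checksum_extension no longer expands the glob to a bare "*.":

# sketch only: skip the checksum move entirely when no checksum extension was produced
if [ "${backup_job_checksum}" != "none" ] && [ -n "${checksum_extension}" ] ; then
    run_as_user mv "${temporary_directory}"/*."${checksum_extension}" "${backup_job_filesystem_path}"/
fi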
Summary
Old backups are not cleaned up when DEFAULT_CLEANUP_TIME is set to 3 and the split_db option is enabled.
Steps to reproduce
What is the expected correct behavior?
Relevant logs and/or screenshots
Environment
Kubernetes
Any logs | docker-compose.yml
Possible fixes