Closed · luizkowalski closed this issue 7 months ago
I noticed that your endpoint configuration is different from the one in https://github.com/eeshugerman/postgres-backup-s3/issues/29#issuecomment-1533355629. I assume MaxMls left that part out.
My endpoint is working fine; the problem, and this is a guess, is the behavior of the API: S3 returns "nothing" when it doesn't find the files, while R2 returns an error.
This is the output of a manual backup:
ERROR (SSHKit::Command::Failed): Exception while executing on host accessories: docker exit status: 254
docker stdout: Creating backup of sumiu_production database...
Uploading backup to pg-sumiu-backups...
upload: ./db.dump to s3://pg-sumiu-backups/backup/sumiu_production_2024-03-03T10:35:12.dump
Backup complete.
Removing old backups from pg-sumiu-backups...
docker stderr: An error occurred (NoSuchKey) when calling the ListObjects operation: The specified key does not exist.
Notice the "Backup complete." message, and that my file IS there.
The error is triggered after the upload, by the removal step, which doesn't expect an error because S3 never returns one here.
Try removing "/sumiu-files" from the end of S3_ENDPOINT. My guess is that it can work for backup (with or without "/sumiu-files") but errors on list when the URL ends with "/sumiu-files".
Here is my configuration. It works for backup, restore, and removing old backups (at least when there are no backups to remove):
SCHEDULE: '@daily'
BACKUP_KEEP_DAYS: 10
PASSPHRASE:
S3_ENDPOINT: https://cloudflare-account-id.r2.cloudflarestorage.com
S3_REGION: auto
S3_ACCESS_KEY_ID:
S3_SECRET_ACCESS_KEY:
S3_BUCKET: bucketname
S3_PREFIX: foldername
POSTGRES_HOST:
POSTGRES_DATABASE:
POSTGRES_USER: postgres
POSTGRES_PASSWORD:
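As a side note, with an R2 endpoint configured like the above, one way to sanity-check credentials and listing behavior outside the container is a plain aws CLI call. This is only a sketch: `bucketname`, `foldername`, and `cloudflare-account-id` are the placeholders from the config, not real values.

```shell
# Listing through the R2 endpoint directly; --endpoint-url and --region are
# standard aws CLI flags. Credentials are read from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
aws s3 ls "s3://bucketname/foldername/" \
  --endpoint-url "https://cloudflare-account-id.r2.cloudflarestorage.com" \
  --region auto
```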
You were absolutely right, this was indeed the issue. Thanks a lot @akalitenya!
I have the following configuration:
This is deployed with Kamal, btw
When I run the backup command like this:
kamal accessory exec backups "sh backup.sh"
I get this error:
I was using S3, but I'm trying to switch to Cloudflare's R2. My first suspicion was that it had some kind of persistence and tried to delete some "known" backup that exists on S3 but not on R2, but checking the script, that doesn't seem to be the case; it looks more like an inconsistency at the API level, where S3 returns nothing and R2 returns an error.
Do you think something can be done in the removal part of the script? If the query result is empty, instead of piping it to
aws $aws_args
just skip it. It is worth noting that even though the command failed, the backup is there on R2, so I guess this will stop failing in 7 days.
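A minimal sketch of that guard, assuming the script obtains expired keys from something like `aws $aws_args s3api list-objects ... --output text`. The `remove_old_backups` helper and its argument convention are hypothetical, not the project's actual code; the real deletion call is left commented out:

```shell
#!/bin/sh
# Sketch: skip the removal pipeline when the listing is empty or errors,
# so R2's NoSuchKey on ListObjects doesn't fail the whole backup run.

# Usage: remove_old_backups <command that prints expired keys, one per line>
# In the real script the command would be something like:
#   aws $aws_args s3api list-objects --bucket "$S3_BUCKET" \
#       --prefix "$S3_PREFIX" --query "$backups_query" --output text
remove_old_backups() {
  keys=$("$@" 2>/dev/null) || keys=""   # tolerate R2 returning an error
  if [ -z "$keys" ] || [ "$keys" = "None" ]; then
    echo "No old backups to remove."
    return 0                            # skip instead of piping nothing to aws
  fi
  printf '%s\n' "$keys" | while read -r key; do
    echo "Removing $key"
    # aws $aws_args s3 rm "s3://${S3_BUCKET}/${key}"   # real deletion here
  done
}
```

The `"None"` check covers the aws CLI's text output when a `--query` expression matches nothing on S3; the `|| keys=""` covers R2 returning an error instead.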
By the way, thanks for this gem of a project, really nice!