Percona-Lab / mongodb_consistent_backup

A tool for performing consistent backups of MongoDB Clusters or Replica Sets
https://www.percona.com
Apache License 2.0

S3 upload failed with code 403 Forbidden #265

Open jshomm opened 6 years ago

jshomm commented 6 years ago

Hi,

I like your tool for backing up MongoDB very much. Everything works fine, including the S3 upload, when the backup finishes quickly. But when I try to back up and upload a bigger MongoDB, where the backup process itself takes approx. 8h, I get the following error in the upload task:

[CRITICAL] [PoolWorker-2] [S3UploadThread:run:127] AWS S3 upload failed after 5 retries! Error: S3ResponseError: 403 Forbidden <?xml version="1.0" encoding="UTF-8"?>

I already created a role in IAM to raise the maximum CLI/API session duration, to prevent the session token from expiring. Does anyone have an idea how to fix this issue?

Thanks in advance!
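
For reference while debugging: a minimal sketch (assuming the tool runs on an EC2 instance with an instance profile, as later comments suggest) that asks the instance metadata service when the current temporary credentials expire. The paths are the standard metadata endpoints; only the surrounding script is illustrative.

```python
# Sketch: print the expiry of the temporary credentials the EC2
# metadata service hands out for the instance profile role.
import json
import urllib2  # Python 2 stdlib, matching the tool's Python 2 codebase

METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# The bare endpoint lists the role name(s) attached to the instance.
role = urllib2.urlopen(METADATA, timeout=2).read().strip()
creds = json.loads(urllib2.urlopen(METADATA + role, timeout=2).read())

# Temporary credentials rotate automatically; a client that caches them
# past this timestamp starts getting 403 Forbidden from S3.
print("Role:       %s" % role)
print("Expiration: %s" % creds["Expiration"])
```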

dbmurphy commented 6 years ago

Does boto have any similar errors reported? Given this tool uses that Python module, I would bet boto (or its S3 module) is tracking an error for this, unless we are hitting a maximum segment size in the upload due to the backup's size.
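
One way to surface the underlying boto error is to enable boto's debug logging and catch S3ResponseError, whose body carries the AWS error code (e.g. ExpiredToken vs. AccessDenied) that the bare "403 Forbidden" summary hides. A sketch; the bucket and key names are placeholders:

```python
# Sketch: capture the full XML body behind a 403 from S3 using boto.
import logging
import boto
from boto.exception import S3ResponseError

boto.set_stream_logger('boto', level=logging.DEBUG)  # wire-level logging

conn = boto.connect_s3()  # credentials from env/IAM role, as usual
try:
    bucket = conn.get_bucket('my-backup-bucket')  # placeholder name
    key = bucket.new_key('probe-object')          # placeholder key
    key.set_contents_from_string('probe')
except S3ResponseError as e:
    # The XML body names the actual AWS error code.
    print("%s %s" % (e.status, e.reason))
    print(e.body)
```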

dschneller commented 5 years ago

For what it's worth: I have been using the S3 upload successfully for the last few days (from inside an EC2 instance) and purposefully made the process wait past the expiration time of the temporary credentials. It refreshed the credentials without problems. Log extract:

...
[2018-11-21 07:26:35,926] [Tar:done:46] Archiving completed for: /data/percona-backup/staging_staging/20181120_1926/replicaset02
...
[2018-11-21 07:26:36,056] [S3:run:66] Starting AWS S3 upload to <bucket_name_redacted> (4 threads, 50mb multipart chunks, 5 retries)
...
[2018-11-21 07:26:36,058] [connection:new_http_connection:745] establishing HTTPS connection: host=s3.eu-central-1.amazonaws.com, kwargs={'port': 443, 'timeout': 15}
[2018-11-21 07:26:36,058] [provider:_credentials_need_refresh:260] Credentials need to be refreshed.
[2018-11-21 07:26:36,059] [provider:_populate_keys_from_metadata_server:383] Retrieving credentials from metadata server.
[2018-11-21 07:26:36,061] [provider:_populate_keys_from_metadata_server:404] Retrieved credentials will expire in 4:42:11.938262 at: 2018-11-21T12:08:48Z
...
[2018-11-21 07:26:37,271] [S3UploadPool:complete:166] Uploaded AWS S3 key successfully: s3://....
...

So maybe this was fixed since July, or it was an unrelated problem?
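
For reference, the refresh in the log happens inside boto's provider layer, so a plain multipart upload like the one described in the log line picks it up automatically. A rough sketch of that pattern (bucket name and file path are placeholders, not the tool's actual code):

```python
# Sketch: boto multipart upload with 50MB parts; each part is its own
# signed request, so boto re-checks credential expiry between parts
# and refreshes instance-profile credentials when needed.
import math
import os
import boto

CHUNK = 50 * 1024 * 1024  # 50MB parts, matching the log line above

conn = boto.connect_s3()  # instance-profile credentials, auto-refreshed
bucket = conn.get_bucket('bucket-name-redacted')  # placeholder
path = '/data/percona-backup/backup.tar'          # placeholder

size = os.path.getsize(path)
mp = bucket.initiate_multipart_upload('backup.tar')
with open(path, 'rb') as fp:
    for i in range(int(math.ceil(size / float(CHUNK)))):
        mp.upload_part_from_file(fp, part_num=i + 1,
                                 size=min(CHUNK, size - i * CHUNK))
mp.complete_upload()
```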

timvaillancourt commented 5 years ago

@jshomm this may be resolved in recent updates. Can you please retest with 1.4.1?