rancher / backup-restore-operator

Apache License 2.0

Backup pod tries listing the S3 bucket, even though write credentials should be sufficient. #573


DovydasNavickas commented 2 months ago

Rancher Server Setup

Describe the bug

The Backup pod logs say (credentials and URL are random GUIDs):

INFO[2024/09/07 21:41:26] Compressing backup CR rancher-s3-recurring-backup
INFO[2024/09/07 21:41:27] invoking set s3 service client
insecure-tls-skip-verify=false s3-accessKey=8d802dc25b7143aea5aaa1e7297daa93 s3-bucketName=rancher-backups s3-endpoint=acf1fc67a82942a7be3aefe7406f947a.eu.r2.cloudflarestorage.com s3-endpoint-ca= s3-folder=backups s3-region=auto
ERRO[2024/09/07 21:41:27] error syncing 's3-recurring-backup': handler backups: failed to check if s3 bucket [rancher-backups] exists, error: 401 Unauthorized, requeuing 

To Reproduce Steps to reproduce the behavior:

  1. Create a bucket in Cloudflare R2

  2. Create a token with Object Read & Write permissions

  3. Set S3 credentials for the Backup

  4. Apply configuration, wait for backup pod to proceed and observe the error

The problem is that the backup pod tries to list the buckets, even though I set a specific bucket for it in the Backup specification:

apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-s3-recurring-backup
spec:
  storageLocation:
    s3:
      credentialSecretName: rancher-backup-s3
      credentialSecretNamespace: cattle-resources-system
      bucketName: rancher-backups
      folder: backups
      region: auto
      endpoint: acf1fc67a82942a7be3aefe7406f947a.eu.r2.cloudflarestorage.com
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: rancher-backup-encryption-config
  schedule: "0 6 * * *"
  retentionCount: 180
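For reference, the `credentialSecretName` above points at a secret shaped roughly like this. The `accessKey`/`secretKey` key names are what the operator's chart documentation describes; the values are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rancher-backup-s3
  namespace: cattle-resources-system
type: Opaque
stringData:
  accessKey: <R2 token access key ID>      # placeholder
  secretKey: <R2 token secret access key>  # placeholder
```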

I then tried the Admin Read & Write permission, which is effectively sudo for all buckets, and those credentials worked.

Expected behavior

The backup should not list the buckets; it should attempt the upload directly. If the permissions are sufficient, the upload succeeds. If not, the operation fails and the operator should surface the error.

mallardduck commented 2 months ago

hiya @DovydasNavickas, from the looks of things I suspect the issue originates from Cloudflare and the way they handle permissions across those four options. They appear to be errantly omitting the s3:HeadBucket-equivalent permission from the Object Read & Write option. The reason I suspect this is an oversight/error is that the level's description claims it "Allows the ability to read, write, and list objects in specific buckets." That means users should in fact be allowed to GET the bucket to see a list of its contents, and similarly should be able to HEAD the bucket to verify it exists.

We are using a fairly standard MinIO library for the S3 client, so it is not trying to list all buckets or doing anything exceptional; it makes a basic BucketExists call to verify the bucket before connecting. Based on the logs you provided I can see exactly where the error comes from, and I suspect it should be working as is. Looking at the R2 docs, they list compatibility for this method: https://developers.cloudflare.com/r2/api/s3/api/

Given that they list the HeadBucket method as one they support, and that it is generally an equivalent to ListObjects, this is further reason to suspect that Cloudflare missed adding that permission to the Object Read & Write option.
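To make the HeadBucket check concrete: a BucketExists call boils down to a SigV4-signed HEAD request on the bucket itself, not a ListBuckets call. The sketch below builds that Authorization header with only the Python standard library, assuming path-style addressing; the host, keys, and timestamp are placeholders, and real clients (such as the MinIO SDK the operator uses) do all of this internally.

```python
# Rough sketch of the request behind a BucketExists check: a SigV4-signed
# HEAD on the bucket path. All inputs below are placeholders.
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sign_head_bucket(access_key, secret_key, host, bucket, region, amz_date):
    date = amz_date[:8]
    payload_hash = hashlib.sha256(b"").hexdigest()  # HEAD has an empty body
    canonical_request = "\n".join([
        "HEAD",
        f"/{bucket}",  # path-style: HEAD /<bucket>, not GET / (ListBuckets)
        "",            # no query string
        f"host:{host}\nx-amz-content-sha256:{payload_hash}\nx-amz-date:{amz_date}\n",
        "host;x-amz-content-sha256;x-amz-date",
        payload_hash,
    ])
    scope = f"{date}/{region}/s3/aws4_request"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Standard SigV4 key derivation chain.
    key = _hmac(_hmac(_hmac(_hmac(b"AWS4" + secret_key.encode(), date),
                            region), "s3"), "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
            f"SignedHeaders=host;x-amz-content-sha256;x-amz-date, "
            f"Signature={signature}")

auth = sign_head_bucket(
    access_key="EXAMPLEKEY",         # placeholder
    secret_key="EXAMPLESECRET",      # placeholder
    host="<account-id>.eu.r2.cloudflarestorage.com",
    bucket="rancher-backups",
    region="auto",
    amz_date="20240907T214127Z",
)
print(auth)
```

A 401 on this request means R2 rejected the token's authorization for HeadBucket on that bucket, which is consistent with the permission gap suspected above.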