garethgeorge / backrest

Backrest is a web UI and orchestrator for restic backup.
GNU General Public License v3.0

Unable to backup to S3 backend via Docker/Podman #354

Open DelusionalAI opened 3 months ago

DelusionalAI commented 3 months ago

I'm trying to add a backup to S3 storage, but I'm unable to get it to work with either self-hosted MinIO or Wasabi Cloud S3 storage. I am able to back up directly to these S3 endpoints if I run restic commands myself, but not through Backrest.

When I add the repo it appears to be added just fine: Backrest gives no errors, I can see files in the buckets, and if I manually run a backup (restic CLI) or a command like ls in Backrest it works fine.

But when I go to run the backup plan it errors out as if it's trying to create a new repo or something.

Below are the repo and plan configs, with passwords redacted:

{
  "id": "F3",
  "uri": "s3:http://192.168.1.131:9000/restic-backup",
  "password": "REDACTED",
  "env": [
    "AWS_ACCESS_KEY_ID=REDACTED",
    "AWS_SECRET_ACCESS_KEY=REDACTED"
  ],
  "prunePolicy": {
    "maxUnusedPercent": 25,
    "schedule": {
      "maxFrequencyDays": 30
    }
  },
  "checkPolicy": {
    "readDataSubsetPercent": 0,
    "schedule": {
      "maxFrequencyDays": 30
    }
  },
  "commandPrefix": {}
}

And the plan:

{
  "id": "Podman-F3",
  "repo": "F3",
  "paths": [
    "/userdata"
  ],
  "excludes": [],
  "iexcludes": [],
  "retention": {
    "policyKeepLastN": 30
  }
}

And the error I get when I try to run the backup:

error: failed to initialize repo: init failed: command "/bin/restic-0.16.4 init --json -o sftp.args=-oBatchMode=yes" failed: command "/bin/restic-0.16.4 init --json -o sftp.args=-oBatchMode=yes" failed: exit status 1
Output:
Fatal: create repository at s3:http://192.168.1.131:9000/restic-backup failed: client.BucketExists: The Access Key Id you provided does not exist in our records.

Not sure why it's trying to run init; the repo was already made, wasn't it?

I do have other backup plans to SFTP/Rclone storage that are working fine; it's just these S3 ones that don't work. I thought it was an issue with the way I set up MinIO until I tried Wasabi and the restic command manually.

garethgeorge commented 3 months ago

Backrest was unable to detect that a repo already exists at that location, so it is attempting to create one: https://github.com/garethgeorge/backrest/blob/main/pkg/restic/restic.go#L97-L132
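
Roughly, those lines amount to a check-then-init flow like the sketch below. This is a simplified illustration, not the actual backrest code; the function name, flags, and arguments here are made up for clarity.

package example

import (
	"context"
	"fmt"
	"os/exec"
)

// ensureRepoExists sketches the check-then-init flow: probe the repository
// (roughly what "restic cat config" does) and only run "restic init" when
// the probe fails, on the assumption that the repo does not exist yet.
func ensureRepoExists(ctx context.Context, resticBin, repoURI string, env []string) error {
	probe := exec.CommandContext(ctx, resticBin, "-r", repoURI, "cat", "config")
	probe.Env = env
	if err := probe.Run(); err == nil {
		return nil // repo is reachable and already initialized
	}
	// The probe failed, so fall back to creating the repository.
	initCmd := exec.CommandContext(ctx, resticBin, "-r", repoURI, "init")
	initCmd.Env = env
	if out, err := initCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("init failed: %w\nOutput: %s", err, out)
	}
	return nil
}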

From

Fatal: create repository at s3:http://192.168.1.131:9000/restic-backup failed: client.BucketExists: The Access Key Id you provided does not exist in our records.

Are you certain that your credentials in your config are correct and are the same as the ones you're using on the CLI?

Backups to an HTTPS endpoint running MinIO are what I use in many of my own backrest deployments.

DelusionalAI commented 3 months ago

Yes, I'm sure I've got the right credentials. As a test I just ran the following, and I simply copy-pasted the env values and password from the config file. I'm even using the same binary backrest would use, in the same container.

7e6f3cf9f86e:/# /bin/restic-0.16.4 --json -r s3:http://192.168.1.131:9000/restic-backup backup /userdata/traefik/docker-compose.yaml
enter password for repository:
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"status","percent_done":1,"total_files":1,"files_done":1,"total_bytes":6805,"bytes_done":6805}
{"message_type":"summary","files_new":1,"files_changed":0,"files_unmodified":0,"dirs_new":2,"dirs_changed":0,"dirs_unmodified":0,"data_blobs":1,"tree_blobs":3,"data_added":8204,"total_files_processed":1,"total_bytes_processed":6805,"total_duration":0.723187656,"snapshot_id":"ff0604e3b748323cbf3f12f8d4f06d280bcab07e8716f5358623d43e68c1aad7"}

If I then view the repo in the Backrest UI I can see the backup I just made; I just can't execute any backup plans to that repo.


DelusionalAI commented 3 months ago

To test further, I went into the backup plan, hit Run Command, and ran cat config, which is what I think the repo check does (Go's not my strong suit), and it returned the following. Pressing Backup right afterwards still errored out with the same message. This is backrest 1.1.0 running via rootful Podman.

command: /bin/restic-0.16.4 cat config -o sftp.args=-oBatchMode=yes
{
  "version": 2,
  "id": "e5ab604ff439a331b3c09baf3e8a5d1d11102c5a3b8e440e7de8bc5ae2b9d981",
  "chunker_polynomial": "37310c6c8a6fd7"
}
took 1.3288907s

EDIT:

I also set up the repo on backrest for Mac installed via brew, and it did actually back up just fine. The issue appears to be with the Docker container version of the app.

garethgeorge commented 3 months ago

It's very strange for it to work uncontainerized but not in Docker (and especially that only the backup command seems to be breaking).

It's very helpful that you're including all the debugging steps you've tried so far (in detail).

An additional idea that comes to mind: it may be worth checking the env inside the container to see exactly which AWS credentials backrest is picking up.

I'll give a bit of thought to how I can improve backrest's logging -- perhaps the log output should include the list of provided environment variable names (values omitted for security, as this is where backrest passes credentials).
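
Something along these lines could do it (a hypothetical sketch, not code that exists in backrest today):

package example

import (
	"log"
	"strings"
)

// logEnvKeys records which environment variable names are being passed to
// restic while omitting their values, since this is where credentials travel.
func logEnvKeys(env []string) {
	keys := make([]string, 0, len(env))
	for _, kv := range env {
		if name, _, found := strings.Cut(kv, "="); found {
			keys = append(keys, name)
		}
	}
	log.Printf("running restic with env vars: %s", strings.Join(keys, ", "))
}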

DelusionalAI commented 3 months ago

Aha! The env in the container did the trick. When I ran it I noticed AWS keys. When I was setting things up I passed AWS keys in my Podman run/compose file, not yet realizing they would be added at the command level. So it appears that when backrest ran the check for existing repos (and then tried to create one), it was using the AWS keys from the host env and not the ones in the repo config. I restarted the container without the -E flags and everything started working.

So I guess what needs answering is: 1) Why did backrest use the env from the host shell and not the repo config when I press Backup Now, yet use the correct env from the repo config everywhere else in the UI? 2) Why did it try to create a new repo when it failed to find an existing one? That makes sense when adding a repo, but not when running a backup plan, IMO. If the repo were deleted I would hate for it to create a new one automatically; I'd have no idea if/when/how much retention I had just lost.

Either way THANK YOU for the help and this awesome program. Now that I've gotten out of my own way and S3 is working, I'm off to deploy this to 3-4 more servers, and I'm happy to do any more testing or troubleshooting.

garethgeorge commented 3 months ago

I suspect there's some inconsistency in how the environment is being inherited. It is intended behavior that backrest propagates host process environment variables (e.g. from the shell), but it looks like backrest isn't being completely consistent about it. I'll leave this open to track that down / make that consistent.
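
For what it's worth, one consistent approach might look like the sketch below. This is only an assumption about a possible fix, not the actual implementation: build the child process environment from the host environment first and append the repo config's entries last, so the credentials configured in backrest win whenever both define the same key (os/exec uses the last value for a duplicate key).

package example

import "os"

// buildResticEnv merges the host process environment with the env entries
// from the repo config. The config entries come last so that, for duplicate
// keys such as AWS_ACCESS_KEY_ID, the configured value is the one restic sees.
func buildResticEnv(repoEnv []string) []string {
	env := append([]string{}, os.Environ()...)
	return append(env, repoEnv...)
}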

mescanne commented 1 month ago

I observed a similar issue: