Hey, sorry you're running into trouble here.
Backrest added some extra checks to ensure it isn't implicitly creating repositories when trying to open them (e.g. to prevent a scenario where a user accidentally edits their URI and then backs up to a location they're not aware of) -- it's odd that you'd run into a problem here, though, if you've been successfully backing up for a while.
One thing that stands out to me is that the path s3.us-west-002.backblazeb2.com/config doesn't look quite right. I'd expect a bucket name in there somewhere -- did you edit your settings recently? Does your URI look correct?
Thanks for the quick response. Everything had been running nicely; then last night I figured I'd upgrade to the latest version, and it's been nothing but trouble since :-(
In the container, in the config folder, there are a lot of old revisions of the config, and all of them have that same s3 URI. Here's the repo from the config:
{ "id": "backblaze", "uri": "s3.us-west-002.backblazeb2.com", "password": "xxxxxxxxx", "env": [ "AWS_ACCESS_KEY_ID=0021cb7c...", "AWS_SECRET_ACCESS_KEY=K002..." ], "prunePolicy": { "schedule": { "maxFrequencyDays": 1, "clock": "CLOCK_LAST_RUN_TIME" }, "maxUnusedPercent": 25 }, "checkPolicy": { "readDataSubsetPercent": 0 }, "commandPrefix": {} }
So I don't believe the config changed... I did try prefixing the URI with s3:// and s3: -- neither made any difference...
I guess I was using restic 0.16 prior to this, now using 0.17 - nothing required on my part for that version change?
Any other ideas?
Update:
I just logged into Backblaze and the bucket is now 0 bytes? This had to have had data in it, because I actually restored files from it about a month ago... Is it possible I somehow deleted the contents by upgrading? Something to do with me not having anything after the s3 address?
The other site I upgraded - its s3 bucket is also 0 bytes now? Also, all config files have nothing after the s3 address...
Thanks
Hey, sorry you're running into this -- that's definitely alarming.
It's unlikely that your Backrest upgrade is related to what you're seeing -- for context Backrest never accesses your data directly (and in fact doesn't actually understand the URI or credentials you provide). All access is done through restic commands.
It's pretty hard to say what happened here, but there isn't a command that backrest uses that would remove the repo itself / drop your storage to 0 bytes. The most destructive operations performed are forget and prune, but neither of these would ever remove the config / repo definition.
If you are certain you had data in that bucket, it's worth seeing if there's any bucket history available or a way to check when the storage dropped to zero bytes. You may want to reach out to Backblaze support if you've seen data loss, to understand when the bucket was deleted and what investigative options are available.
The URI for your s3 bucket doesn't look correct to me either -- it should be something of the form s3:<endpoint>/<bucket-name>. If you provided the URI s3.us-west-002.backblazeb2.com, this would likely be interpreted by restic as a local filesystem path, e.g. a folder in the current directory named s3.us-west-002.backblazeb2.com (since . is actually valid in filenames on many (all?) Linux systems).
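To illustrate the difference (the bucket name here is a placeholder):

```sh
# No scheme: restic treats this as a relative filesystem path, i.e. a directory
# named "s3.us-west-002.backblazeb2.com" under restic's working directory.
restic -r s3.us-west-002.backblazeb2.com snapshots

# Explicit s3: scheme plus a bucket name: restic talks to the S3-compatible endpoint.
restic -r s3:s3.us-west-002.backblazeb2.com/<bucket-name> snapshots
```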
It's possible that your backups were going to a local folder inside the docker container and were being lost on restarts? If that's the case, it's probably showing up after the upgrade because of the safety checks I added to avoid implicitly reinitializing repositories in this situation.
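One way to check is to look for that stray directory inside the container -- a rough sketch, assuming the container is named backrest (adjust the name and search depth to your setup):

```sh
# Search the container's filesystem for a repo directory matching the mis-parsed URI.
docker exec backrest sh -c 'find / -maxdepth 4 -type d -name "s3.us-west-002.backblazeb2.com" 2>/dev/null'
```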
I think you may have hit the nail on the head - I was looking at another instance I have running today, and in the docker container there is a folder s3.us-west-002.backblazeb2.com. So you are likely right; the other 2 probably did have the backups stored in the local directory.
So at this point, can I just prefix the URI with s3: and it will start backing up remotely? Or should I delete the repo and start from scratch -- what do you suggest? I do have a bunch of backups configured for that repo, so ideally I'd prefer not to have to recreate everything... Maybe I can change the URI and issue the init command manually or something?
Also, I wonder if others have done this as well? I don't know if there's a way you could check whether the URI looks like something other than a filesystem path? In this case, 's3' in the URI might be a good clue for the software to pick up on... Or maybe I'm just stunned... :-)
Thanks
Your experience here definitely has me thinking that Backrest's UI should include a best-effort warning or similar when using a local repo, to ensure that users have checked the path and that it's what they intended. I've definitely seen other similar reports, and it's perhaps the biggest risk / footgun of running backup software in a docker container, where there's a lot of filesystem configuration involved. I may simply require that users explicitly provide a local: prefix, which is supported by restic, instead of allowing restic to infer that a path is local.
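For illustration, here's what the explicit local: scheme looks like in restic (the path is just a placeholder):

```sh
# Explicit local: scheme -- restic treats this strictly as a filesystem path.
restic -r local:/srv/restic-repo snapshots

# Compare with the bare form, where restic has to infer that the value is a local path.
restic -r /srv/restic-repo snapshots
```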
Re: your best fix. I'd recommend recreating your repo with a different ID and reinitializing, to test that everything works. I'd also recommend using the b2: syntax instead of s3: if you're backing up to Backblaze -- but either should work.
Once you've confirmed that your repo is created and working, it's safe to shut down Backrest, edit your config.json, and set the repo in each of your existing plans to the new repo. Essentially just find-and-replace to switch everything over.
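As a rough sketch of that find-and-replace -- assuming the new repo is given the id backblaze-s3 (a placeholder) and that your plans reference their repo by id in a "repo" field:

```sh
# Stop Backrest first, keep a backup of the config, then point every plan at the new repo id.
cp config.json config.json.bak
sed -i 's/"repo": "backblaze"/"repo": "backblaze-s3"/g' config.json
```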
> I'd also recommend using the b2: syntax instead of s3: if you're backing up to Backblaze -- but either should work.
I stumbled across this issue by chance/boredom, but this statement struck me as odd. Any particular reason you recommend the b2: syntax? The restic docs explicitly caution against that and tell you to use s3 instead. Backblaze themselves also recommend the s3 implementation as opposed to the b2 one.
Source: https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#backblaze-b2
Ah, good callout -- I hadn't seen that guidance before. For a long time Backblaze's b2 API was more performant, but from https://www.backblaze.com/docs/cloud-storage-s3-compatible-api it sounds like this isn't the case anymore. Sounds like s3 is indeed the preferred way to do this.
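For reference, re-initializing against Backblaze's S3-compatible endpoint looks roughly like this (bucket name and credentials are placeholders; see the restic docs linked above):

```sh
# Backblaze's S3-compatible API takes the key id / application key as AWS-style credentials.
export AWS_ACCESS_KEY_ID=<keyID>
export AWS_SECRET_ACCESS_KEY=<applicationKey>

# Note the explicit s3: scheme and the bucket name after the endpoint.
restic -r s3:s3.us-west-002.backblazeb2.com/<bucket-name> init
```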
Discussed in https://github.com/garethgeorge/backrest/discussions/487