Closed: ovizii closed this issue 1 day ago
I reverted those changes I mentioned above and ran the backup manually. The backup succeeded but the forget part now fails:
backrest | 2024-06-14T10:30:23.750+0200 WARN forget for plan "OneDrive1TB" in repo "Documents" found legacy snapshots without instance ID, recommending legacy forget behavior.
backrest | 2024-06-14T10:30:23.750+0200 WARN forget for plan "OneDrive1TB" in repo "Documents" forgetting snapshots without instance ID, using legacy behavior (e.g. --tags not including instance ID)
backrest | 2024-06-14T10:30:23.750+0200 WARN forget for plan "OneDrive1TB" in repo "Documents" to avoid this warning, tag all snapshots with the instance ID e.g. by running:
backrest | restic tag --set 'plan:Documents' --set 'created-by:pve02' --tag 'plan:Documents'
backrest | 2024-06-14T10:30:26.449+0200 ERROR task failed {"task": "forget for plan \"OneDrive1TB\" in repo \"Documents\"", "error": "forget: get snapshots for repo OneDrive1TB: command \"/bin/restic-0.16.4 forget --json --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --keep-yearly 5 -o sftp.args=-oBatchMode=yes --tag plan:Documents --group-by \" failed: exit status 1", "duration": "2.716963852s"}
backrest | 2024-06-14T10:30:26.449+0200 INFO running task {"task": "collect garbage"}
backrest | 2024-06-14T10:30:26.475+0200 INFO collecting garbage {"operations_removed": 0}
Hey, is the initial error you saw transient or reproducible? It looks like a storage-layer error, but it could also be an interaction with the settings you've enabled (e.g. does it only show up when using nice / ionice and go away otherwise?).
Not quite sure. I had reverted the changes mentioned above, and then all three jobs with a cron schedule failed with the same error:
command: /bin/restic-0.16.4 forget --json --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --keep-yearly 5 -o sftp.args=-oBatchMode=yes --tag plan:Documents --group-by
unable to create lock in backend: repository is already locked by PID 20584 on backrest by root (UID 0, GID 0)
lock was created at 2024-06-14 10:27:41 (16h33m24.140672018s ago)
storage ID d7663af0
the `unlock` command can be used to remove stale locks
command "/bin/restic-0.16.4 forget --json --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --keep-yearly 5 -o sftp.args=-oBatchMode=yes --tag plan:Documents --group-by " failed: exit status 1
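For anyone hitting the same lock error: the stale lock can be inspected and cleared by hand. A hedged sketch, assuming restic is on PATH and the repository/password environment variables are set the same way backrest configures them; `list locks`, `cat lock`, and `unlock` are standard restic subcommands, and `<lock-id>` is a placeholder you must replace:

```shell
# Inspect which locks currently exist on the repository.
restic list locks

# Show details (hostname, PID, creation time) for one lock from the
# list above; replace <lock-id> with a real ID.
restic cat lock <lock-id>

# Remove locks that restic itself considers stale.
restic unlock

# Force-remove ALL locks. Only safe when no other restic/backrest
# process is using the repository.
restic unlock --remove-all
```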
This is weird, I thought backrest serializes plans. Also, I never had to check the "auto unlock" option, as backrest is the only instance that accesses this repo.
Running the three plans manually now. Will reply after I know more.
The prune and check of the repo also failed with the same "repo locked, remove lock" error.
Manual backups worked, and manual prune and check also worked. I will close this issue if tomorrow's cron schedule does not yield anything new.
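If it recurs, the retention policy can be previewed without deleting anything via restic's `--dry-run` flag. A minimal sketch reusing the keep flags and tag from the failing command above (run against the same configured repository):

```shell
# Preview what forget would remove, without touching any snapshots.
restic forget --dry-run \
  --keep-hourly 24 --keep-daily 7 --keep-weekly 4 \
  --keep-monthly 3 --keep-yearly 5 \
  --tag plan:Documents
```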
Hey, circling back on this -- any luck with the auto unlock option, or has the behavior changed over time? It may also be worth updating to 1.2.0 if you haven't already, as it fixed a rare deadlock that could leave repos in bad lock states.
Sorry about that, I totally forgot to reply and close this ticket.
Describe the bug
One of my three plans has started giving this error on its nightly run.
The recent changes made to backrest are these and I will revert them to see if it makes any difference.
To Reproduce
Steps to reproduce the behaviour:
Not sure; the error just popped up after months of working perfectly.
Expected behaviour
I'd expect my plans to execute smoothly.
Platform Info
Additional context
Docker logs:
Plan logs: