Closed: plmuon closed this issue 7 years ago
Try to re-run it; the trace indicates that some data was first read OK, but a later read of the same data was corrupted. More importantly, check your logs (dmesg, smartctl) for disk issues: either it's a coincidence, or it indicates bigger trouble with a disk.
(TN: I deleted your earlier identical comment + my reply in #469)
Thanks. I noticed the issue was closed, which is why I created this one afterwards. I had already run it twice, once locally and once remotely.
The data themselves are on a QNAP NAS. Its dmesg has no relevant entries, and while smartctl isn't available the way it is on Linux, the NAS reports the disk status as OK. It's an mdraid RAID 6 that is scrubbed regularly, so I don't think there is anything wrong with the disks.
There may have been connection issues while some of the archives were created (I had NFS mounted through ssh for a while) that might have caused corruption... But borg check apparently cannot deal with it anymore.
Hmm, could we just remove compact_segments() from borg check --repair? It looks like an optimization that is not strictly needed for checking; see #2294.
About this specific issue: it indeed looks like the segment was read successfully the first time (in the check phase) and only failed the second time (in the compact phase).
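That failure pattern (same bytes readable once but not a second time) can be distinguished from plain on-disk corruption by re-reading the file and comparing digests. Below is a minimal, hypothetical sketch of that idea; it is not part of borg, and the function name `stable_read` is my own invention for illustration:

```python
import hashlib


def stable_read(path, passes=2, chunk_size=1 << 20):
    """Read a file `passes` times and compare SHA-256 digests.

    Returns True if every pass produced identical bytes. A mismatch
    between passes points at flaky hardware or a flaky network
    filesystem (the data changes under you), whereas consistent but
    wrong data points at corruption already written to disk.
    """
    digests = set()
    for _ in range(passes):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        digests.add(h.hexdigest())
    return len(digests) == 1
```

Running something like this against the segment file that failed (once over NFS, once locally on the NAS) would help tell a transport problem apart from a bad disk.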
@plmuon any news on this?
I tend to close this, as it does not look like a borg issue, but rather some hardware (or other "below borg") issue.
No news. I had to recreate the archive; since then I've had no issues.
ok, thanks for the feedback.
Hello,
I had a data integrity error at the end of a very long (12 TB) borg check --repair:
I first ran the check on the box itself (a local filesystem on a NAS, with borg in a docker container), then on another server that has the borg backup mounted through NFS (2 days each).
I fear I have to re-create the archive; is no fix possible?