borgbackup / borg

Deduplicating archiver with compression and authenticated encryption.
https://www.borgbackup.org/

evaluate redundancy / error correction options #225

Open · ThomasWaldmann opened 8 years ago

ThomasWaldmann commented 8 years ago

There is some danger that bitrot and storage media defects could lead to backup data loss / repository integrity issues. Deduplicating backup systems are more vulnerable to this than non-deduplicating ones, because a defective chunk affects all backup archives that use it.

Currently there is a lot of error detection (CRCs, hashes, HMACs) going on in borgbackup, but there is no built-in support for error correction (see the FAQ about why). It could maybe be solved using one of these approaches:

If we can find some working approaches, we could add them to the documentation. Help and feedback about this is welcome!
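
The deduplication point is worth making concrete. Below is a toy model (an editorial sketch, illustrative only - not borg's actual data structures): archives hold lists of content-addressed chunk ids, so one stored chunk can back many archives, and one bad chunk damages all of them.

import hashlib

chunks = {}                                    # chunk id -> chunk data

def store(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()     # content-addressed id
    chunks.setdefault(cid, data)               # deduplicated: stored only once
    return cid

archive_monday  = [store(b"report v1"), store(b"big photo")]
archive_tuesday = [store(b"report v2"), store(b"big photo")]   # chunk reused

# corrupting the single stored copy of "big photo" breaks *both* archives
assert archive_monday[1] == archive_tuesday[1]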

oderwat commented 8 years ago

I think "bit rot" is real. Just because of the failure rate exceeding the storage size for real huge hard drives. So statistically there will be a block error on a drive when it is big enough. But that probably has to be solved by the filesystem. On the other side borg is a backup system and should be able to recover from some disasters. Bup is using par2 (on demand) which seems to work for them.

anarcat commented 8 years ago

links to mentioned projects:

I think zfec is the most interesting project here for our purposes, because of its Python API. we could use it to store redundant copies of the segments' chunks and double-check that in borg check.

anyone can also already run par2 or zfec on the repository from the command line to get the requested feature.

i am not sure snapraid is what we want.

so i would recommend adding par2 to the FAQ as a temporary solution and eventually implementing zfec directly in borg, especially if we can configure a redundant drive separately for backups.
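
the "run par2 on the repository from the command line" route could look roughly like this today (a hedged sketch: the 5% redundancy and the repo path are arbitrary, and the repo must not be modified while it runs):

import subprocess
from pathlib import Path

repo = Path("/path/to/borg/repo")              # hypothetical repo location
for seg in sorted((repo / "data").rglob("*")):
    if seg.is_file():
        # create 5% recovery data next to each segment file
        subprocess.run(["par2", "create", "-r5", f"{seg}.par2", str(seg)],
                       check=True)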

tgharold commented 8 years ago

One option, for those of us with extra disk space, would be to allow up to N copies of a chunk to be stored. This is a brute-force approach which would double, triple or quadruple the size of the repo, depending on how many copies you allow.
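
As a toy sketch of this brute-force idea (illustrative only, not borg code): keep N copies of every chunk and serve the first copy whose checksum still verifies.

import hashlib

N = 3
store = {}                                     # chunk id -> N stored copies

def put(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()
    store[cid] = [bytes(data) for _ in range(N)]   # costs N times the space
    return cid

def get(cid: str) -> bytes:
    for copy in store[cid]:                    # fall back across the copies
        if hashlib.sha256(copy).hexdigest() == cid:
            return copy
    raise IOError("all copies corrupt")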

Another idea is to allow repair of a repository by pulling good copies of damaged chunks from another directory. So if my primary backup repository happens to get corrupted, but I have an offline copy, I could mount that offline copy somewhere and have a repair function attempt to find good copies of damaged chunks from that directory.

PAR2 is nice, but maybe a bit slow. I don't know if PAR3 (supposedly faster) ever got off the ground.

enkore commented 8 years ago

FEC would make sense to protect against single/few-bit errors (as opposed to a typical "many sectors / half the drive / entire drive gone" scenario). It would need to sit below encryption (since a single ciphertext bit flip potentially garbles an entire plaintext block). Implementing it at the Repository layer, transparently applying to all on-disk values (and keys!), would make sense. Since check is already "local" (borg serve-side), it could rewrite all key-value pairs where FEC fixed errors. No RPC API changes required => forwards/backwards compatible change.
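
A minimal sketch of that layering, with a naive triple-copy majority vote standing in for a real FEC code and the Repository read/write paths reduced to two functions (all names are illustrative, not borg internals):

def fec_encode(blob: bytes) -> bytes:
    return blob * 3                                # store three copies

def fec_decode(stored: bytes) -> bytes:
    n = len(stored) // 3
    copies = [stored[i * n:(i + 1) * n] for i in range(3)]
    # per-byte majority vote repairs corruption confined to one copy
    return bytes(max(set(col), key=col.count) for col in zip(*copies))

def repo_put(key: bytes, ciphertext: bytes, disk: dict) -> None:
    disk[key] = fec_encode(ciphertext)             # FEC sits below encryption

def repo_get(key: bytes, disk: dict) -> bytes:
    return fec_decode(disk[key])                   # repaired ciphertext out

A real implementation would use a Reed-Solomon-style code with far less overhead; the point here is only the layering, plus the option for check to rewrite values that FEC repaired.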

The C code from zfec looks small. If it doesn't need a whole lot of dependencies and the LICENSE permits it (and it is, of course, a good match for our requirements[1]), we could vendor it[2] if it's not commonly available.

[1]

[2] Vendoring should be a last-resort thing, since it more or less means that we take on all the responsibility upstream has or should have regarding packaging/bugs/testing etc.

ThomasWaldmann commented 8 years ago

If just a single bit or a few bits flip in a disk sector / flash block, wouldn't that either be corrected by the device's ECC mechanisms (thus be no problem) or (if there are too many flips) lead to the device giving a read error and not returning any data (thus resulting in much more than a few bit flips for the upper layers)?

enkore commented 8 years ago

Hard drive manufacturers always seemed to tell the "either it reads correct data or it won't read at all" story. Otoh: https://en.wikipedia.org/wiki/Data_corruption#SILENT

Somewhat related (but further discussion belongs in a separate issue) is how data integrity errors are handled. Currently borg extract will throw IntegrityError and abort, but that's not terribly helpful if it's just one corrupted file or chunk. Log it (like borg create does for I/O errors) and exit with 1/2 instead?

ThomasWaldmann commented 8 years ago

@enkore yes, aborting is not that helpful - open a separate ticket for it.

enkore commented 8 years ago

Hm, this would also be interesting for the chunk metadata. We could pass a restricted subset of the chunk metadata to the (untrusted) Repository layer, to tell it what's data and what's metadata[1]. That would allow the Repo layer to apply different strategies to them. E.g. have metadata with a much higher FEC ratio than data itself.

[1] Technically that leaks this information to something untrusted, but I'd say it's fairly easy to tell (item) metadata and actual files apart from the access patterns and chunk sizes anyway. Specifically, item metadata is written in bursts and should be mostly chunks of ~16 kB. So if an attacker sees ~1 MB of consecutive 16 kB chunks, then that's item metadata.
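
In code, the kind hint might look like this at the Repository boundary (hypothetical names, made-up ratios, purely to make the idea concrete):

# the client passes a coarse kind hint so the repo layer can protect
# metadata much more heavily than file content
FEC_OVERHEAD = {"meta": 0.30, "data": 0.05}    # made-up ratios

def recovery_size(kind: str, payload_len: int) -> int:
    # bytes of error-correction data stored alongside the payload
    return int(payload_len * FEC_OVERHEAD[kind])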

dfloyd888 commented 8 years ago

This would be useful as a per-repository setting in the config file. It would be nice to have a configurable option that allows ECC metadata at a selected percentage. I know that one glitch or sync error with other backup programs can render an entire repository unreadable. I definitely hope this can pop up as a feature, as it is a hedge against bit rot for long-term storage.

aljungberg commented 8 years ago

The FAQ recommends using a RAID system with redundant storage. The trouble there is that while such a system is geared towards recovering from a whole disk failing, it can't repair bit rot. For example consider a RAID mirror: a scrub can show that the disks disagree but it can't show which disk is right. Some NASes layer btrfs on top of that which in theory could be used to find out which drive is right (the one with the correct btrfs checksum) but at least my Synology NAS doesn't yet actually do that.

So any internal redundancy/parity support for Borg would be great, and even just docs on sensible ways to use 3rd party tools to get there would work too. Maybe it's as simple as running the par2 command line tool with an X% redundancy argument.

ThomasWaldmann commented 8 years ago

Correct, a mirror doesn't help in all cases. But in many cases, it surely can.

The disk controller generates and stores its own CRC / ECC codes in addition to the user data, and if it reports a bad sector on disk A while doing a scrub run, it can just take the sector from disk B and try to write it back to disk A (usually the write is successful and the sector is repaired; otherwise the disk is defective). So the only unfortunate cases are when a mirrored sector does not have the same data on both disks but the CRC / ECC error is not triggered on either disk (which is hopefully rather unlikely), or when the sectors on both disks give a CRC / ECC error.

It is important that scrub runs take place regularly, otherwise corruption will go undetected and, if one is unlucky, a lot of errors creep in before one notices - and if both sides of the mirror go bad in the same place, the data is lost.

The case is similar for RAID5/6/10 arrays.

What can still happen is that the controller suddenly decides that more than the redundant number of disks are defective and kicks them out. Or that more disks fail while the array is rebuilding. But that is a fundamental problem and you can't do much about it, aside from having lots of redundancy to make this case unlikely.

zfs also has its own checksums/hashes of the data, btw (and is maybe more production-ready than btrfs).

FelixSchwarz commented 7 years ago

zfec is currently not compatible with Python 3, but there is a pull request. Also, having a Python API is of course much nicer than calling out to a separate binary (+ zfec is supposed to be faster, according to zfec's PyPI page).

par2, on the other hand, is likely present on more distros, and the format seems to be widely used (other implementations/tools are available). However, the ideal solution for borg would apply the error correction internally (otherwise encrypted repos would face quite a bit of storage overhead), so external tools might not be that useful.

Even with good storage I'd like to see some (ideally configurable) redundancy in borg repos. Deduplication is great but I think it is more important that data is safe (even on crappy disk controllers).

Maybe a good task for 1.2?

enkore commented 7 years ago

Maybe a good task for 1.2?

1.2 has a set of defined major goals; since this would be a major effort, it's unlikely.

ThomasWaldmann commented 7 years ago

@FelixSchwarz thanks for the pointer, I just reviewed that PR.

But as @enkore already pointed out, we rather won't extend 1.2 scope, there is already a lot to do.

Also, as I already pointed out above, I don't think we should implement EC in a way that might help for some cases, but also fails for a lot of cases. That might just give a false feeling of safety.

gour commented 7 years ago

Also, as I already pointed out above, I don't think we should implement EC in a way that might help for some cases, but also fails for a lot of cases.

Does it mean that EC won't be supported/implemented in Borg at all, or are you just considering what would be the proper way to do it?

enkore commented 7 years ago

There are a lot of options in that space and evaluating them is non-trivial; fast implementations are rare as well. On a complexity scale I see this issue at about the level of a good master's thesis (= multiple man-months of work).

Note that a lot of the "obvious" choices and papers are meant for large object-storage systems and use blocked erasure coding (essentially the equivalent of a RAID, minus the problems of RAID, for an arbitrary and variable amount of disk/shelf-level redundancy). This much we can already say: it is not an apt approach if you have only one disk.

ThomasWaldmann commented 7 years ago

@gour If we find a good way to implement it (one that does not have the mentioned issues), I guess we would consider implementing it.

There is the quite fundamental issue that borg (as an application) might not have enough control / insight about where data is located (on disk, on flash).

Also, there are existing solutions (see top post), so we can just use them, right now, without implementing it within borg.

anarcat commented 7 years ago

just found out about this which might be interesting for this use case:

https://github.com/MarcoPon/SeqBox

enkore commented 7 years ago

zfec is not suitable here. It's a straight erasure code; say you set k=94, m=100, meaning you have 100 "shares" (output blocks) of which you need >=94 to recover the original data. Means 6 % redundancy, right? No! Those 94 shares must be pristine. The same is true for all "simple" erasure codes. They only handle erasure (removal) of shares, they do nothing about corrupted shares.

A PAR-like algorithm which handles corruption within shares, i.e. you have a certain percentage of redundancy and that percentage can be corrupted in any distribution across the output, is what we need here. (?)

Edit: Aha, PAR2 is not magic either. It splits the input into slices and uses relatively many checksummed blocks (hence its lower performance), which increases resistance against scattered corruption. So what at first appears to be a share in PAR2 is actually not a share, but a collection of shares.
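
To make the share semantics concrete, a small example against zfec's Python API (a sketch assuming zfec's documented Encoder/Decoder interface; k=3, m=5 for brevity):

import zfec

k, m = 3, 5                                  # any 3 of the 5 shares suffice
enc = zfec.Encoder(k, m)
blocks = [b"AAAA", b"BBBB", b"CCCC"]         # k equal-sized input blocks
shares = enc.encode(blocks, list(range(m)))  # request all m output shares

# losing (erasing) up to m-k shares is fine: hand the decoder the survivors
# together with their share numbers
kept_nums = [0, 2, 4]
dec = zfec.Decoder(k, m)
recovered = dec.decode([shares[i] for i in kept_nums], kept_nums)
print(b"".join(recovered) == b"".join(blocks))   # True

# but a silently *corrupted* share handed to the decoder is neither detected
# nor fixed: zfec handles erasures only, exactly as described above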

StefanBertels commented 7 years ago

What about the "low-tec" solution @tgharold mentioned like just doing backups to multiple repos and have some built-in way of accessing "secondary" repos as source for broken segments?

Setting up multiple backups on different hardware would help you when hardware fails or gets lost. If this setup is usable for bit rot, too, this is a plus.

ThomasWaldmann commented 7 years ago

@StefanBertels having stuff in multiple repos (at different places) is always a good idea.

It can't work with encrypted repos on the chunk level, as different encryption keys also mean differently cut chunks and thus different chunk IDs. So while you could restore a bad file from the other repo, one could not just automatically fetch missing/defective chunks from the other repo.

enkore commented 7 years ago

This may eventually be possible with replication, but to be honest, borg check is such a complicated piece of code that adding this in the current state is pretty much guaranteed to make it completely unmaintainable.

spikebike commented 7 years ago

@enkore Right. With zfec or similar codes you normally have a manifest which includes the checksum of each share. When trying to rebuild and error-correct, you then only use the shares with a correct checksum. This works well in practice because checksum checking is quite fast (and the vast majority of the time it is all you need), but when in dire need you use the error correction from the valid pieces.

Of course then you worry about the manifest, much like a file system superblock. Since it's small the usual answer is to make multiple copies.
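
Roughly like this (illustrative code, not any particular tool's manifest format); checksums turn silent corruption into detectable erasure, which an erasure code can then repair:

import hashlib

def make_manifest(shares):
    # one checksum per share; keep several copies of this small manifest
    return [hashlib.sha256(s).hexdigest() for s in shares]

def surviving_shares(shares, manifest):
    # shares failing their checksum are dropped, i.e. treated as erasures
    return [(i, s) for i, (s, h) in enumerate(zip(shares, manifest))
            if hashlib.sha256(s).hexdigest() == h]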

ThomasWaldmann commented 7 years ago

@spikebike the manifest needs to be written at the end, so guess what would happen if you simply write it twice, after everything else?

It would maybe make you feel better, but it would likely end up on a nearby hard disk sector or in the same flash block, so if there is a defect, both copies might be affected.

Even if we pre-allocated space for the 2nd copy at another time, it would still be just guessing and, depending on how the fs and hardware work, it might not work like we want.

You usually need hardware or kernel/fs level control to safely do better and borg does not have that.

jvgreenaway commented 6 years ago

I think bup's par2 integration is pretty great. It would be a big plus for borg to have a similar integration / a recommended companion application.

ThomasWaldmann commented 6 years ago

Until someone convinces me otherwise, i think that it is pointless to add error correction in borg (see some of my previous comments). Adding it might just create false hopes and could be perceived as a (false) promise.

One can either use lower levels to add redundancy or just do 2 backups to separate places.

ticpu commented 6 years ago

It would be possible to add external par2 protection if one could ask borg whether a segment is good before adding redundancy. An example at a very high level:

set -e
# iterate over all segment files in the repository's data directory
find data/ -type f -regex '^data/[0-9]+/[0-9]+$' | while read -r F
do
  # placeholder: skip segments that are unchanged since the last run
  check_if_segment_has_changed_since_last_backup || continue
  # verify the segment (hypothetical flag), then regenerate its parity files;
  # note the glob must stay outside the quotes to match old .par2 volumes
  borg check --segment="$F" && \
  rm -f -- "${F}".*par2 && \
  par2 create -s16384 -c1 -n1 "${F}.par2" "$F"
done

$ du -shc {1..10}
2.1G    total
$ du -shc *.par2
5.3M    total

Then add this after the backup job. I'm not asking to add this, but wanted to add my 2 cents on a way to add redundancy on a non-redundant FS. In this case, it allows recovering 16384 bytes (4 "normal" sectors) in any segment; maybe tweak the numbers for SSD erase blocks, which may be bigger.

jaxankey commented 6 years ago

Interesting discussion. Learning lots.

Perhaps a helpful tool would be an option to compare two repositories and list all conflicting files with the change history of each (borg diff gets close to this). Then the user could dig in and decide which to keep.

Usually the majority of my repo will not change, so if I see that a "static" file changed a week ago, I'll know immediately which has the error. Or I could manually inspect them and keep the one that looks right.

JonasOlson commented 6 years ago

use borg to have N (N>1) independent backup repos of your data on different targets (if N-1 targets get corrupted, you still have 1 working one left)

Functionality for using multiple backup servers at once (either containing identical data or complementing each other in some way) would be useful also for other reasons, such as being able to create and retrieve backups when one server is stolen or inaccessible. (This is assuming that you didn't mean just doing multiple backups independently of each other.)

igpit commented 5 years ago

+1 for some added ECC data

Relying on 100% of the bits being returned correctly feels scary, and it makes backups unusable that in some situations could be recovered easily; consider non-ECC/non-checksummed filesystems, DVDs, ...

ofc you can check integrity, but consider that this is also a costly operation (costing even real money with some cloud storage backends). Thus, allowing for some random quirks along the way / in the backend makes you sleep much better, being able to make something out of your data even if a few places differ ;)

ThomasWaldmann commented 5 years ago

if it helps you sleep better: your HDD/SSD controller is already using ECCs for the stored data.

anarcat commented 5 years ago

if it helps you sleep better: your HDD/SSD controller is already using ECCs for the stored data.

That doesn't keep them from actually losing bytes, either through firmware, hardware, or human error. Otherwise we wouldn't need backups, would we? :)

I would have really appreciated having error correction today. I have lost an entire repository after trying to split it with hardlinks (don't ask), and I suspect this might have helped recover some sense of a working state. Maybe not. But it definitely got me thinking about this problem again.

Borg and software like it are not like regular backup tools, where that one-off cosmic error makes you lose one file. When trouble hits, the entire dataset is in jeopardy. I first thought those were a rare occurrence, but the reality is that data loss and byte-flipping happens much more often than we'd like to think, and they happen more as storage density increases. I had trouble with faulty SD cards (or is it the filesystem implementation? who knows! it's not like those controllers are free software, so there's no way to tell), bit flips on HDDs and human errors. As explained in #3030, a single byte change can destroy an entire repository, with seemingly no chance of recovery (#3509, #3737).

So it would be great if this was at least considered as a solution, instead of punting the problem down to the hardware or up to the operator. Borg has a privileged view on its own data structures which implies it is uniquely positioned to implement such error correction. Yes, one can just use par2 on top of it, but if it's not part of the normal workflow, it will never happen, especially if issues like this say it's not necessary. :)

anarcat commented 5 years ago

i opened #4272 regarding the problem I encountered.

xenithorb commented 5 years ago

@ThomasWaldmann

One can either use lower levels to add redundancy or just do 2 backups to separate places.

So some questions about that - I'm currently doing 2 backups to separate locations - Let's call them repo 1.A and 1.B.

I know there are issues with duplicating repos, so I'm wondering: if repo 1.A becomes corrupt and you want to continue on with the history from 1.B, how do you do that?

For example, copying 1.B (making it something like 1.B') to 1.A's old location won't work because of the aforementioned repo-duplicating issues.

How do you de-couple a repo from itself so that it can continue on with a duplicate of itself in the wild at a different location?

If the answer is that you can't, then that's probably an issue, as it means that once repo 1.A or 1.B is corrupt, if you want to continue on with the same history, you're stuck with a single repo (and the only option would be for 1.A to start fresh after corruption). Is borg recreate useful here?

ThomasWaldmann commented 5 years ago

I assume you do not mean doing 2 borg backups to 2 different locations, but using rsync or similar tools to clone a borg repo to a different location. This is off-topic in this ticket and covered by the FAQ already.

xenithorb commented 5 years ago

No, I do mean 2 different repos, until one becomes corrupt and you want to continue the history. Only after you're left with one corrupt repo does it become the "clone problem", because if you want to maintain the history you must clone the non-corrupt repo.

The point I'm trying to make here is that borg should have a feature to refactor a repo so that a copy of it can be used as a completely independent repo. That's not covered in the FAQ beyond "don't do it."

If you're going to tell people that the best option is to have two or more independent repos as a solution for error correction / redundancy, it should be made clear that the trade-off is the ability to retain history should one become corrupt.

Real simple:

I backup to 1.A and 1.B for 10 days with a daily borg create script. 1.A becomes corrupt, so the only option you're left with to continue is:

* `1.A` - fresh recreated, no history. If `1.B` is corrupt you lose 10 days of history
* `1.B` - has 10 days of history

I'm proposing that there should be a way to replace 1.A should it become corrupt, with 1.B' so:


After three days the first scenario would look like:

And the proposed solution scenario of decoupling repos:

Now extrapolate out to large history sets. My entire focus was about retaining redundancy, so I don't see it as being out-of-scope or off-topic.

The only current solution I can see is to make the redundant repos redundant! (i.e. maintaining copies of the copies), but that quadruples the storage requirement!

ThomasWaldmann commented 5 years ago

There is no "copy archive(s) from repo A to repo B" function yet (at least not without a full extract / create for each, which of course can always be done, but is expensive) - but there is a ticket about it.

It never gets super cheap/efficient anyway, due to the different chunker secrets and encryption / MAC keys used in different repositories. The best way to do it securely would just avoid extracting individual files to disk / reading individual files from disk.

xenithorb commented 5 years ago

I'm less concerned about copying archives between repos, more concerned with how to maintain redundancy if the project keeps recommending to maintain two independent repos and one becomes corrupt.

Consider that the thing I want to restore some day is in a very old archive, and I'm stuck running 1 remote and 1 local backup like in the scenario above. The remote one becomes corrupt and unusable, so if I want to keep making backups from that point on, I must recreate the remote repo, and it only has history from that point.

Later, the local copy becomes corrupt because I fat-fingered a command and deleted some stuff I shouldn't have. Oops, now I can't restore the thing I didn't know I needed yet from that very old archive, because it's gone; the second remote repository is no longer useful because the redundancy wasn't maintained.

I understand that a feature to copy archives between repos could address this, but I'm hoping for ways to address it outside of borg, and perhaps for an update to the documentation that informs users that they risk a redundancy downgrade if they lose a repo, which presently cannot be recovered from.

textshell commented 5 years ago

Currently the only safe way is to make a full copy of the repo that is still ok and never use that copy for any writing operations, and then begin a fresh repo to still keep redundancy. I don't think "copy archive(s) from repo A to repo B" is much harder to implement than "change the internally used key to make writing to a cloned repo safe again", so I think what you are looking for in that case is the copy feature. Which is not rocket science, but a decent chunk of work to implement.

jdchristensen commented 5 years ago

If you aren't using encryption, you can copy the repo and change the repo id in the config file. After that, you can use the repos independently. If the repo is encrypted, doing this introduces a security issue. I wonder if there could be a borg command to adjust the copy of the repo to make the security issue ("counter reuse") go away?
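
For an unencrypted repo, that could be scripted along these lines (a hedged sketch assuming the borg 1.x repository layout, where the repository id lives in an INI-style config file; local caches keyed on the old id may need rebuilding afterwards):

import configparser
import os

repo = "/path/to/repo-copy"                      # hypothetical copied repo
path = os.path.join(repo, "config")
cfg = configparser.ConfigParser()
cfg.read(path)
cfg["repository"]["id"] = os.urandom(32).hex()   # fresh random 256-bit id
with open(path, "w") as f:
    cfg.write(f)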

elho commented 5 years ago

I backup to 1.A and 1.B for 10 days with a daily borg create script. 1.A becomes corrupt, so the only option you're left with to continue is:

* `1.A` - Fresh recreated, no history. If `1.B` is corrupt you lose 10 days of history

* `1.B` - has 10 days of history

No, assuming the corruption of 1.A did not affect meta-data in a way rendering it unusable, the approach as I planned it out, should I face that situation, would be:

  1. borg check --repair backupserver:1.A, which will fill corrupted chunks with zeroes.
  2. Find the files affected by these replacement chunks: for archive in $(borg list --format '{barchive}{NEWLINE}' backupserver:1.A); do borg list --format "{health} ${archive} {path}{NEWLINE}" "backupserver:1.A::${archive}" | grep --invert-match '^healthy'; done (a correct approach would use {NUL} and xargs --null invoking a wrapper script to handle funky file names)
  3. Get all chunks resulting from the original data of these files back from 1.B into 1.A: borg mount the entire 1.B and create a (temporary) backup of all the files found in the previous step from this mount to 1.A. They will be extracted and rechunked, but all but the corrupt chunks will turn out to already be present in 1.A, so only the ones that need replacing will be transferred.
  4. borg check --repair backupserver:1.A once again, which will now find correct new chunks for every zeroed replacement chunk and repair them all.
  5. borg delete the temporary archive holding the "replacement" files.
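
Step 2 of this recipe is easy to automate; a sketch using the same borg list format keys as the one-liner above (the repo name is the hypothetical one from the recipe):

import subprocess

REPO = "backupserver:1.A"    # hypothetical repo from the recipe above

def borg_list(*args):
    out = subprocess.run(["borg", "list", *args], check=True,
                         capture_output=True, text=True)
    return out.stdout.splitlines()

for archive in borg_list("--format", "{barchive}{NEWLINE}", REPO):
    for line in borg_list("--format", "{health} {path}{NEWLINE}",
                          f"{REPO}::{archive}"):
        if not line.startswith("healthy"):       # chunk(s) replaced by zeroes
            print(archive, line)
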
ThomasWaldmann commented 5 years ago

@elho sounds good, assuming that files in 1.A and 1.B archives are identical (like 2 backups made from same snapshot).

elho commented 5 years ago

Yes, backing up (and verifying) from snapshots only has become so natural to me that I keep forgetting to mention it. Thanks for pointing out that it is fundamentally important in this use case!

enkore commented 5 years ago

It's kinda funny how most longer tickets in this tracker seem to converge on a discussion about replication in Borg.

imperative commented 4 years ago

par2 is quite a mature, stable, (relatively) popular and (relatively) well-understood solution with sane options, like variable redundancy and a variable number of blocks, trading well between generation speed and block size (obviously, smaller blocks yield more flexible protection against different kinds of bitrot and/or corruption). I think it would work well as a layer on top of the repository files.

par2 is also a somewhat explicit and external way of achieving redundancy. It generates .par2 files which the user could then inspect and use manually, even applying other external programs to do the checking. I think having this solution in this explicit form is a good idea (compared to, for example, zfec-based coding somewhere inside borg's files, unseen by the user, uncheckable by them, unknown whether it exists or not) because it creates more flexibility against different types of possible corruption (whether it is bitrot inside files, filesystem failures resulting in complete files going missing, etc.).

Incidentally this also goes well with the unix philosophy of having multiple small tools where each of them does one thing really well and then inter-operates with the other tools when needed. Par2 is THE project that does application-level (in external files) redundancy coding really well.

There are already system-level ways to get redundancy and error correction: zfs, btrfs, RAID, etc. But application-level solutions exist too, and they exist because of demand: it is not always possible to place the files on zfs, or to otherwise administer the storage subsystem to such a degree that it would be possible, and sometimes the user needs to move the files between different systems or store them in other places (like tapes, offline backup drives or optical media). In all of these cases there is no universal system-level way to achieve recovery redundancy, and having it explicitly in Borg at the application level would be beneficial and provide real-world utility.

dumblob commented 4 years ago

I hope I'm not too late to the party. I don't want to persuade @ThomasWaldmann nor say the solution "do backup to two different places" is wrong, but as some others have experienced, bit rot and other effects do real and very painful damage, so I'm also raising my hand for FEC (forward error correction).

I just want to point out that implementing FEC for hard disk storage needs more thought than demonstrated in this thread. For a quick intro, read this motivation behind creating the fork (changes to the original app).

That said, ECC (or other redundancy in HDDs/SSDs/HW_RAIDs/SW_RAIDs/btrfs/other_filesystems), par2, zfec, snapraid, and all the others mentioned in the discussion are not enough and are usually flawed in at least some regard (with the notable exception of ZFS RAID redundancy). I'll leave it as an exercise for the reader why e.g. such a "perfect" fs as btrfs lets corrupted data be used without noticing, why the "battle tested" par2 can't recover data in some cases, or how "cloud storage" providers do (not) ensure data integrity...

The above-mentioned rsbep-backup (not the original rsbep) is the only tool somewhat satisfying the FEC requirements (it's quite slow despite being nearly the fastest R-S implementation I know about). Its minor downside is that it still assumes one can find the approximate offset of the file, but that might be solved using some ideas from SeqBox.

One really has to think about "everything" - from the physical stuff (disk blocks, spanning errors, probabilities, ...) up to the "format", which has to guarantee that e.g. all metadata gets the same recovery guarantees as the data itself, as there is the same probability of errors for metadata as for data, etc.

About a year ago I did some investigation into how other FEC backup software treats different errors, and I was horrified by so many omissions and so much ignorance (I actually stayed with rsbep-backup from the linked github repo because of that). Please don't let Borg jump on that boat.

imperative commented 4 years ago

The above mentioned rsbep-backup (not the original rsbep) is the only tool somewhat satisfying the FEC requirements (it's quite slow despite being nearly the fastest R-S implementation I know about). [...]

Is this the one that writes this on its readme page? I quote:

I coded this using Python-FUSE in a couple of hours on a boring Sunday afternoon, so don't trust your bank account data with it... It's just a proof of concept (not to mention dog-slow).

Doesn't sound like stable, legit software that should be used for critical data and backups yet...

I'll leave it as an exercise for the reader why e.g. such a "perfect" fs as btrfs does let corrupted data to be used without noticing or why the "battle tested" par2 can't recover data in some cases

In which cases? Can you be more specific or link to those cases? If you have generated the par2 files, it basically guarantees that if the number of corrupted blocks is lower than the number of recovery blocks, it will recover all of the data. Are you talking about cases where the contents of the files are shifted or deleted? Does it have problems in that case?

Are you saying that rsbep 0.0.5 is somehow different in this regard? What functionality or advantage does it provide that par2 doesn't? Does rsbep protect against shifting contents? They both seem to use Reed-Solomon; both work in a similar way. Why use the more experimental one? Its documentation just describes the basic Reed-Solomon code and doesn't even mention par2, so it is unclear whether the author even knew of its existence.

PS btrfs is infamous for still being unstable and crashing every once in a while. There are very few people who would call it "perfect". Zfs on the other hand does not let one use bitrotten data without noticing.

hashbackup commented 4 years ago

I posted a few thoughts about this in the restic forum recently: https://github.com/restic/restic/issues/804

While it doesn't currently have ECC wrapping, HashBackup does support multiple destinations and can correct bad blocks in one destination using good blocks from another.

I've so far stayed away from doing ECC for reasons I mention in that post. But what did surprise me a bit is the probability of errors in a multi-TB situation; it was a lot higher than I thought, with numbers like 4% chance of a read error in a 5TB backup repo. I'm still not sure that the numbers are all correct, but still - 4% makes me a little uncomfortable when I know there are lots of people who write their HB backup to a USB drive and it's their only copy.

Copies are important, however you have to make them. IMO, making copies is a simpler strategy than relying on sophisticated and potentially unreliable ECC tech using a single copy. Adding ECC to a multiple-copy strategy might also make sense, as long as a problem in the ECC tech doesn't somehow propagate to all copies, making them useless.

VolatileCable commented 4 years ago

Just a heads-up for anyone considering using .par2 files, please don't use the outdated awful par2cmdline client that ships in many distros, but use parpar, which is significantly faster.

500MiB test file
multipar 19.76s 85-90% cpu
par2cmd  52.49s 92% cpu
par2cmd  49.42s (with 128MiB instead of default 16MiB mem)
parpar   11.22s

par2cmdline also had various other bugs and shortcomings in my testing. The only downside for parpar is that it only creates parity files, but can't verify or repair them.

tarruda commented 4 years ago

No, assuming the corruption of 1.A did not affect meta-data in a way rendering it unusable, the approach as I planned it out, should I face that situation, would be:

It should be easy to write a wrapper script that does the repair automatically using this solution. I will integrate it into the wrapper I already use for borg, thanks @elho.