Open geek-merlin opened 3 years ago
There could be many lock files, each with a timestamp and an explicit expiration. An agent holding the lock would be expected to write out a new lock file before the previous one expires. When done, it would write a final lock file whose expiration equals its acquisition time, to signal that it is finished. These lock files would accumulate, but could safely be removed whenever deletion is possible. To avoid race conditions, an agent should first write out its lock file, and then double-check that its file is indeed the newest one (after a short but reasonably safe waiting period).
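A rough sketch of that scheme, to make the moving parts concrete. This is only an illustration: a local directory stands in for the cloud storage, and all names (`try_acquire`, `renew`, `release`, the `lock-<timestamp>-<agent>` file naming) are made up here, not anything the tool actually implements. The key points from above are all present: only the newest lock file counts, a file with expiration == acquisition is a release marker, and acquisition is write-then-verify after a grace period.

```python
import os
import time


def _read(lock_dir, name):
    """Return (acquired, expires) stored in a lock file."""
    with open(os.path.join(lock_dir, name)) as f:
        acquired, expires = map(float, f.read().split())
    return acquired, expires


def _newest(lock_dir):
    """Newest lock file, or None. Fixed-width timestamps in the file
    name make lexicographic order match chronological order."""
    names = sorted(os.listdir(lock_dir))
    return names[-1] if names else None


def _write_lock(lock_dir, agent_id, acquired, expires):
    name = f"lock-{acquired:017.6f}-{agent_id}"
    with open(os.path.join(lock_dir, name), "w") as f:
        f.write(f"{acquired} {expires}")
    return name


def try_acquire(lock_dir, agent_id, ttl, grace=0.05):
    """Append-only acquire: write our lock file, then double-check
    after a grace period that ours is still the newest one."""
    newest = _newest(lock_dir)
    if newest:
        acquired, expires = _read(lock_dir, newest)
        # The lock is held iff the newest file is still live and is
        # not a release marker (expiration == acquisition).
        if expires > time.time() and expires != acquired:
            return False
    now = time.time()
    ours = _write_lock(lock_dir, agent_id, now, now + ttl)
    time.sleep(grace)  # let any racing writers land
    return _newest(lock_dir) == ours  # we win only if still newest


def renew(lock_dir, agent_id, ttl):
    """Write a fresh lock file before the previous one expires."""
    now = time.time()
    _write_lock(lock_dir, agent_id, now, now + ttl)


def release(lock_dir, agent_id):
    """Release by appending a marker with expiration == acquisition.
    Note that nothing is ever deleted, which matters later for the
    'no delete' storage question."""
    now = time.time()
    _write_lock(lock_dir, agent_id, now, now)
```

One nice property of this sketch: every operation, including release, is an append, so old lock files pile up until some privileged process garbage-collects them.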
Hey, this sounds reasonable. Obviously you're much more into locking than I am.
One strategy to protect backups from ransomware (think of evil software that not only encrypts local files, but also deletes any cloud storage it can get a grip on) is to have a "no delete" mode for the cloud storage backup account. (You can still have another account that can delete, for pruning from another machine. Or just keep adding forever. ;-)
This should be no problem for chunk and name files. But I wonder what happens with respect to locking: the backup process adds a lock file, backs up happily, but in the end can't delete it, so the next backup will fail?
Some links showing that this is a much-wanted feature, and that people are discussing it for many backup frameworks: