restic / restic

Fast, secure, efficient backup program
https://restic.net
BSD 2-Clause "Simplified" License

Ignore disappeared source files #3098

Open gadall opened 3 years ago

gadall commented 3 years ago

Output of restic version

restic 0.11.0 compiled with go1.15.3 on windows/amd64

What should restic do differently? Which functionality do you think we should add?

Make an option to exit with 0 (zero) when the only errors were source files not found. Also, in the message "failed to read all source data during backup", "all" could perhaps be changed to "some".

What are you trying to do? What problem would this solve?

I'm running the backup over a large directory hierarchy within which temporary files occasionally appear briefly. I'm getting error alerts from the wrapper script running restic, when it seems a temporary file is found during the scan but gone by the time restic tries to back it up.

Did restic help you today? Did it make you happy in any way?

It is my sunshine. My only sunshine.

rawtaz commented 3 years ago

@gadall In case you don't know already, this was changed in #2546, see under "Change #2546: Return exit code 3 when failing to backup all source data" in https://github.com/restic/restic/blob/master/CHANGELOG.md for reference.

Restic returns a specific exit code for the case where it failed to read all/some of the files you told it to back up, namely exit code 3. You can use that to identify this condition, altering your script to silently ignore this (if you really want to).
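For instance, a wrapper along these lines (a minimal sketch; the function name and messages are made up, not part of restic) would propagate every failure except exit code 3:

```shell
# Minimal sketch: treat restic's exit code 3 ("some source files could not
# be read") as a warning rather than a failure. All names are illustrative.
run_backup() {
  restic backup "$@"
  rc=$?
  case "$rc" in
    0) echo "backup ok" ;;
    3) echo "backup ok, but some source files could not be read" ;;
    *) echo "backup failed with exit code $rc" >&2; return "$rc" ;;
  esac
}
```

A wrapper script could then call e.g. `run_backup --repo /srv/restic-repo /var` and only alert on exit codes other than 0 and 3.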

In the long run, the proper solution for backing up a filesystem where files come and go is to use snapshots. E.g. LVM, ZFS and btrfs have snapshot functionality, and on Windows you have VSS, which restic supports since version 0.11.0.

Given that there are proper ways to prevent files from disappearing during backup runs, and that you can distinguish this condition from any other type of error if it does happen, it's unlikely that we'll add the option you request.

gadall commented 3 years ago

@rawtaz `if !success { return ErrInvalidSourceData }` Does this actually cover only "file not found", and no other read errors? Other errors might be worth paying attention to. A file simply not existing anymore can only be considered an error outside a backup utility's scope, if at all. We didn't delete it; why it was deleted is none of our concern. We're certainly not going to back it up, since it doesn't exist. We can move on. Yes, a snapshot would help here. However, I was thinking that can be a heavier-weight solution for harder problems, such as files being modified on the fly, or being unreadable for whatever weird reasons under Windows.

gadall commented 3 years ago

OK, granted, of course a consistent snapshot of an entire filesystem can sometimes be needed, where existence or non-existence needs to be tracked atomically across multiple files. I mean to say that there would also be situations where this wouldn't be a concern, and this can be ignored without bothering with a filesystem snapshot.

rawtaz commented 3 years ago

`if !success { return ErrInvalidSourceData }` Does this actually cover only "file not found", and no other read errors?

If e.g. a file in a folder you asked restic to back up is unreadable due to permissions, this is something that will result in exit code 3. I imagine that e.g. an antivirus program refusing access to a file would be the same, exit code 3.

Really, the task for restic is to back up the files you ask it to back up. If it can't do that, it lets you know via the exit code. I think I understand what you're saying, but the use case where you back up files and are fine with one or even all of them not existing is too narrow. We try to build restic to really assure you that it succeeded in doing what it was meant to do. If you want to deviate from that, the correct way to do so is to use snapshots to get a consistent set of files to back up.

I agree it might feel like overkill if you expect just a few files to possibly disappear during the backup run, but if you go down the rabbit hole of saying "it's fine if files disappear", you're setting yourself up for nasty surprises that we want to avoid. You could think your backups run perfectly fine while in reality 80% of your backup set was lost somehow. Not saying this will happen, but it's a risk we don't want to support.

rawtaz commented 3 years ago

Out of curiosity, what files are disappearing for you, and if you expect them to be transient and not important, shouldn't they be excluded in the first place?

gadall commented 3 years ago

If exit code 3 is for all read errors, then it's, well, for all read errors, no? Like I/O errors and such?

We're talking about a situation where I'm asking restic to back up, say, /var or E:\ and, as you say, 0.1% or 80% or 90% of what is below that disappeared between the initial scan and completion of the backup. The question is: does restic have to care? If I had started the backup job a moment later, then from restic's point of view everything would be fine, no? Is restic now a monitoring tool tasked with alerting me when my free disk space suddenly increases by a large margin? The key here is perhaps a distinction between what I actually, explicitly asked restic to back up, and files discovered by recursion. No, I do not have a clear idea at the moment for how to properly resolve this. I guess, if this were an option not enabled by default, then /var becoming completely empty (suddenly unmounted?) would be accepted.

The actual case here is/was an .ldb lock file for an .mdb MS Access file. Yes, perhaps it could be excluded, but there are / will be other briefly existing temporary / lock files.

rawtaz commented 3 years ago

I'm not sure what else I can add to this discussion :)

You already have three different ways to solve this in a safer and arguably more proper way than telling restic "the files I give you, it's fine if they don't all get backed up". I disagree with your view that restic should not care about making sure that all files in the backup set you give it (including recursively discovered files) are actually backed up.

If your current use case is those lock files and similarly specific paths, it should be rather straightforward to exclude them using a pattern given to one of the --exclude* options. Being that specific is much safer than introducing the option you suggest, IMO.
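For example, an exclude file along these lines would cover the Access lock file and similar transient files (the patterns and paths here are illustrative, not a complete recommendation):

```shell
# Illustrative: build an exclude file covering MS Access lock files and
# similar transient lock files, then point restic at it.
cat > /tmp/restic-excludes.txt <<'EOF'
*.ldb
*.laccdb
EOF
# Illustrative invocation (repo and source are placeholders):
# restic backup --repo "$REPO" --exclude-file=/tmp/restic-excludes.txt "$SOURCE"
```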

gadall commented 3 years ago

"your view that restic should not care", is not a view about restic, and what it should always do, it's advocating for a use case. The authors of rsync thought to create a dedicated exit code 24 for vanished files, and the --ignore-missing-args option for specifically-mentioned files. Would you say that down the road you will find that restic's use cases will never cross paths with what rsync gets used for? Yes, you're right, there are a number of ways for me to get around this specific issue I'm seeing, but I can't help but think that if restic implemented an option to ignore, or treat differently or return a unique exit code for vanished files, I and others might end up actually using it from time to time. And that's pretty much it, just an opinion, we can indeed consider this conversation concluded if you please.

rawtaz commented 3 years ago

Rsync's --ignore-missing-args would not help you with your use case. It silences errors about paths that 1) you gave on the command line or similar, and 2) do not exist when rsync first/initially scans the files to sync. Files that it found during the scan, but that went missing before it tried to sync them, will still be warned about (effectively the same as restic's current behavior). To quote man rsync: "This does not affect subsequent vanished-file errors if a file was initially found to be present and later is no longer there". Feel free to correct me if I am wrong :)

I respect your use case, it's valid to you, but I think that out of the four suggested solutions, adding an --ignore-missing parameter is the least good. Put another way: as long as there are better solutions that are safer in the long run (as in, minimizing the risk that someone later finds that files they thought were backed up really were not), those solutions should be used and/or exhausted before resorting to options like the one suggested in this issue. Such an option introduces uncertainty, and it also encourages not looking into what went wrong and not adjusting one's backup parameters to make sure backups run as intended and warn about unexpected situations.

To put it extremely simply: If we add that type of option (which again, in your use case is IMO not really needed as there are plenty of safer ways to do it, in particular the exclude options), it's just a matter of time before a user loses some important files because of a mistake in the use of this option. I sure don't want to support that. I'm sorry that this isn't in line with your opinion. I hope adding a couple of exclude patterns solves your use case nicely.

I'm not going to close this, in case a ton of people think differently than I do. Let's see.

gadall commented 3 years ago

OK. Leave it open.

I did read the man page before quoting it here, don't worry. My entire point in citing --ignore-missing-args is that it is a "stronger" option, in reference to your concept of what the user "asked" restic to back up. I do not know what the use case of this option is, but it seemed relevant to the thought process you convey here, so I made a reference to it. Perhaps the idea is to allow you to write a "speculative" invocation of rsync which would catch files if they are there. Whatever.

There is also exit code 24.

And it looks like rsync isn't afraid of implementing options in case someone god forbid wants to use them. You're taking a "responsible" approach.

fd0 commented 3 years ago

Thanks for your suggestion to add an option and a dedicated exit code.

I'd like to add something here:

when it seems a temporary file is found during the scan but gone by the time restic tries to back it up.

The scan phase is there just to determine how much data restic will need to save; there's no state kept for the backup phase. So the files restic complains about would need to be removed between reading a directory and saving the files therein, this time should be rather short.
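As a toy illustration of that window (this is not restic's code, just a sketch of the race): a file can vanish between listing a directory and opening the files in it, and the tool only notices at open time:

```shell
# Toy demo of the race: list a directory, lose a file before reading it,
# and handle the vanished file at open time. Paths are throwaway.
demo() {
  dir=$(mktemp -d)
  echo hello > "$dir/keep.txt"
  echo temp  > "$dir/vanish.tmp"
  files=$(ls "$dir")      # read the directory listing
  rm "$dir/vanish.tmp"    # the file disappears before we get to it
  for f in $files; do
    if [ -r "$dir/$f" ]; then
      echo "backed up $f"
    else
      echo "skipped $f (vanished)"
    fi
  done
  rm -rf "$dir"
}
demo
```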

And it looks like rsync isn't afraid of implementing options in case someone god forbid wants to use them. You're taking a "responsible" approach.

It feels to me that you're very frustrated, and I understand that. Sarcasm helps nobody though. 😃

We (as a project) are trying to keep restic's complexity at bay, and this means carefully balancing requests for new features and options against the complexity and maintenance cost they'll add to restic. What @rawtaz did was suggest how you could solve your issue with options restic already has. In my opinion, we should not add a dedicated option for this issue.

gadall commented 3 years ago

the files restic complains about would need to be removed between reading a directory and saving the files therein, this time should be rather short.

Didn't know that. Thanks for the clarification. It does then seem to be a rather unlikely event, except for very large directories (of which I see plenty).

It feels to me that you're very frustrated, and I understand that. Sarcasm helps nobody though. 😃

Not frustrated at all; sorry if anyone felt any negativity coming their way. I meant that as a literal remark. Newer programs by younger programmers tend to follow a different philosophy. The old-school approach was to make powerful and, yes, dangerous programs, like rm. Programs were meant to behave as designed, not as desired by the users or their employers. That approach has its merits. I'm not that old myself; I just see the merits.

We (as a project) are trying to keep restic's complexity at bay, and this means carefully balancing requests for new features and options against the complexity and the maintenance cost it'll add to restic. What @rawtaz did was suggesting how you could solve your issue with existing options restic already has. In my opinion, we should not add a dedicated option for this issue.

Yup, no problem. I was thinking more in terms of how this option might be useful, broadly speaking.

fd0 commented 3 years ago

The old-school approach was to make powerful and, yes, dangerous programs, like rm. Programs were meant to behave as designed, not as desired by the users or their employers. That approach has its merits. I'm not that old myself; I just see the merits.

It also has many downsides, complexity (implementation as well as usage) and feature creep. :)

gadall commented 3 years ago

"Adding lots of features is synonymous with feature creep, and GNU stands for Add Every Feature You Can Imagine": that's not the same thing. The idea of making programs, and writing man pages, that make sense only in very technical terms and will be seen as friendly only by users who have acquired very technical thinking is exactly programmer-friendly. The idea is that if we're not going to pretend that a computer is anything but a golem, code complexity can be drastically reduced and the program's behavior will be much simpler and more predictable. Such low-complexity programs are indeed dangerous in the hands of people who decide whether to feel frustrated only once the outcome has already happened. Programs that will always aim for and hit the user's true intention are perhaps what general AI is supposed to be about? Anyway, this is genuine chit-chat at this point. Not on, or even off, any particular topic :)

mirekphd commented 2 years ago

One example of the "disappearing files" are the config files stored by MinIO inside one of the source folders, when MinIO is (also) used in the source data folder that restic uploads to a remote location (in my case, confusingly, to another MinIO bucket). Removing these config files is required in MinIO to reset / change credentials (and is a widely used workaround; see e.g. https://github.com/bitnami/bitnami-docker-minio/issues/6#issuecomment-629411449 and https://github.com/truenas/middleware/commit/516d705af2ca72b20312bf084a65ad987f572280).

To illustrate, here is the restic v0.12.1 log before interruption of a scheduled nightly backup:

Save(<data/50d18b55de>) returned error, retrying after 552.330144ms: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/73d2322f-6b5e-41a0-becd-3e8b718d1b2b/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 1.080381816s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/63e4b083-d40b-494c-bae9-3babb3665d18/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 1.31013006s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/eaa58d6d-fc08-41b8-9482-912d8eda78da/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 1.582392691s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/a9e48dbc-a9e5-4c5b-8bc4-1b139fca1d72/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 2.340488664s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/f53982fa-2c8d-4159-a927-00783bce4c47/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 4.506218855s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/ca11a197-8ee8-4668-91d8-06a2cc4f896a/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 3.221479586s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/b8f1eb5c-e8b1-4806-9e4e-be87ea6a04f4/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 5.608623477s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/9921e4f7-e30d-4efc-ba6f-10cfbb611cf7/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 7.649837917s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/623ac0e0-9807-4377-b8da-d4182bdf0210/fs.json: operation not permitted)
Save(<data/50d18b55de>) returned error, retrying after 15.394871241s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/5cba493b-0578-406a-9a1d-a92cb603249b/fs.json: operation not permitted)
Fatal: unable to save snapshot: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/c45c6098aa634d81373cb64f735f24919fe20d27b01607b2bafd83edd0f2f6ee/682d1cc4-6e6e-4449-b340-845d3e4bd7ea/fs.json: operation not permitted)

Out of curiosity, what files are disappearing for you, and if you expect them to be transient and not important, shouldn't they be excluded in the first place?

MichaelEischer commented 2 years ago

@mirekphd The log snippet shows that restic is unable to upload data but not that the backup command is unable to read certain source files.

mirekphd commented 2 years ago

Yes, but the problem stems from the /data/.minio.sys/* folder no longer existing at source (while being backed up to the remote destination during one of the previous nightly backups):

Apparently in the source location the .minio.sys folder complete with its entire contents has been moved up one level (to the parent folder of /data, i.e. to /home/jovyan), which seems to confuse restic backup:

$ ls -lant /home/jovyan/data/.minio.sys*
ls: /home/jovyan/data/.minio.sys*: No such file or directory

$ ls -lant /home/jovyan/.minio.sys*
total 16
drwxr-xr-x   21 65534    65534         4096 May 14 14:13 buckets
drwxr-x---   37 65534    65534         4096 May 14 01:22 ..
drwxr-xr-x   11 65534    65534         4096 May 14 01:18 tmp
drwxr-xr-x    6 65534    65534          106 Jul 24  2021 .
drwxr-xr-x    3 65534    65534           48 Jul 24  2021 config
-rw-------    1 65534    65534           94 May 23  2020 format.json
drwxr-xr-x    2 65534    65534           10 May 23  2020 multipart


mirekphd commented 2 years ago

The problems are compounded by --exclude-file not working in this case. I run:

restic backup --no-cache --repo $BACKUP_FOLDER_OR_URL --exclude-file=$EXCLUSIONS_FILE --limit-upload $UPLOAD_LIMIT --verbose $DATA_FOLDER_TO_BACKUP

with EXCLUSIONS_FILE pointing to this text file:

$ cat /tmp/restic-excludes.txt 
# restic files and folders exclusions file                                                  
# (see: https://restic.readthedocs.io/en/stable/040_backup.html#including-and-excluding-files)

# exclude go-files
# *.go                                   

# exclude foo/x/y/z/bar foo/x/bar foo/bar  
# foo/**/bar                               

# exclude "disappearing" MinIO config files 
# (see: https://github.com/restic/restic/issues/3098#issuecomment-1126681515)              
*/.minio.sys/*
.minio.sys
data/.minio.sys
/data/.minio.sys
data/.minio.sys/*

... but I still got the error.

MichaelEischer commented 2 years ago

Yes, but the problem stems from the /data/.minio.sys/* folder no longer existing at source (while being backed up to the remote destination during one of the previous nightly backups):

This issue is about problems which cause log errors like the following: error: lstat /home/user/Maildir/dovecot-uidlist.tmp: no such file or directory. The log output you've shown so far shows that the repository storage location is not writable (MinIO itself complains about a file in .minio.sys being inaccessible, not restic!), which will always cause restic to fail the backup run and is totally unrelated to this issue. So please create a new issue if you still think this is a problem in restic.

mirekphd commented 2 years ago

The OP wrote: "I'm running the backup over a large directory hierarchy within which temporary files occasionally appear briefly."

And so did my /data/.minio.sys/* at its original source location, from which it was somehow moved (maybe by the bitnami/minio container startup script?) to another location, thus:

/home/jovyan/data/.minio.sys* --> /home/jovyan/.minio.sys*

so for restic backup the path at the original location has simply "disappeared". But you are right that I should not have brought the other issue here (of --exclude-file not working in this case, despite my attempts shown above), for which I apologize. Still, I have no workaround due to the latter issue, which should arguably increase the priority of fixing the original bug.

mirekphd commented 2 years ago

It may also be a bug in MinIO, or even a hardware issue... sadly, MinIO devs have a bad habit of closing unresolved issues: https://github.com/minio/minio/issues/10827

As a workaround I will split this 1.5 TB user folder into several subfolders to reduce time, size and complexity (number of files) on the MinIO remote bucket side, and I will get back here (and in a new MinIO issue) if it solves the problem. Currently the problem appears after 30 minutes, which corresponds to just above 1 TB of transferred data.

Later I will also prepare a reproducible example to file a report for the --exclude-file not working for .minio.sys (if it still persists).

mirekphd commented 2 years ago

I've switched to the latest version of MinIO, from minio/minio:latest (which meant a major version change, from 2021 to 2022). I have simultaneously changed the target node to the same one where the source files are located (to rule out any hardware issues). Sadly, it did not help:

Save(<data/3aed83da3d>) returned error, retrying after 552.330144ms: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/4cf02f27-dce4-436c-95fd-d25a91f33f7a/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 1.080381816s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/ca3530cc-eaf9-4b35-9b2d-eb62ddcf24bc/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 1.31013006s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/7bccc9f5-a856-4a8c-ac19-cb19374c054e/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 1.582392691s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/bdd9c20a-5392-4f4a-b9bb-31576c1c8ef0/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 2.340488664s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/d074aae3-2032-4cbd-ace7-d3c1ec863af9/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 4.506218855s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/24d52ef5-0fd0-4ed6-a131-d1d3ed6c7db3/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 3.221479586s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/b49e657c-d62b-43a8-9b38-ce6abbe6148d/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 5.608623477s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/b6749c98-0887-448f-82aa-4ff516c4a228/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 7.649837917s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/8dc9d8d1-9591-46fa-81ba-e46140c2bca1/fs.json: operation not permitted)
Save(<data/3aed83da3d>) returned error, retrying after 15.394871241s: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/18568668-4b53-4711-aab1-46b0505e71d7/fs.json: operation not permitted)
Fatal: unable to save snapshot: client.PutObject: We encountered an internal error, please try again.: cause(open /data/.minio.sys/multipart/702bc5eed9b6c384402a30a5003d8920939423eb8f0c60c422043a35e475c338/b2c9d0cc-1239-41fb-afcf-8f7020950063/fs.json: operation not permitted)

mirekphd commented 2 years ago

While testing these large backups I've also run into other bugs (let me just collect them here, even if they merit opening separate issues):

* `restic backup` never finds existing snapshots, despite not using any wildcards (backing up folders like `/home/jovyan`): https://forum.restic.net/t/restic-never-finds-a-parent-snapshot/3938/12
* `restic backup` ignores the `--no-cache` option and writes data anyway inside the container, thus using all available server memory (with swap + RAM use typically equal to the amount of data to be cached); others reporting heavy memory usage: https://news.ycombinator.com/item?id=25186310
* the compression factor in `restic backup` is rather low, compared to e.g. `7z` (up to 2:1 vs 10:1 for CSV files).

I will be trying to add 7z before restic and see if it helps with the main blocker and the other issues.

MichaelEischer commented 2 years ago

The OP wrote: "I'm running the backup over a large directory hierarchy within which temporary files occasionally appear briefly. "

This issue is exclusively about warnings in the log output of restic; it CANNOT IN ANY WAY CAUSE YOUR BACKUP TO FAIL!

While testing these large backups I've also run into other bugs (let me just collect them here, even if they merit opening separate Issues):

* `restic backup` never finds existing snapshots, despite not using any wildcards (backing up folders like `/home/jovyan`):
  (https://forum.restic.net/t/restic-never-finds-a-parent-snapshot/3938/12)

Please open a new issue if you think this is a bug in restic or ask in the forum if you need help using restic correctly. As you mentioned that you are using containers, my guess would be that each container uses a random hostname such that the snapshots are regarded as independent.

* `restic backup` ignores the `--no-cache` option and writes data anyway inside the container, thus using all available server memory (with swap + RAM use typically equal to the amount of data to be cached):
  (others reporting heavy memory usage: https://news.ycombinator.com/item?id=25186310)

No it does not ignore that option.

* the compression factor in `restic backup` is rather low, compared to e.g. `7z` (up to 2:1 vs 10:1 for CSV files),

restic only deduplicates but does not compress data so far, see #3666.

I will be trying to add 7z before restic and see if it helps with the main blocker and the other issues.

That won't fix your Minio cluster problems.

I'm not going to reply to your questions in this issue anymore. So either open a new issue or a thread in the forum if you need further assistance.

hirak99 commented 8 months ago

If one of the files is not found, does it still continue to back up all the other files?

It appears to do so, because my logs show that when it failed with exit code 3 and this warning, it still created a repository at the exact same time.

In that case the message is merely informational, and exit code 3 can be checked and safely ignored if desired. This would be preferable; I am merely trying to confirm that it is the case.

rawtaz commented 6 months ago

If one of the files is not found, does it still continue to back up all the other files?

Yes, it tries to back up all the files that were in the backup set, so it will continue to process the other files :)

bfontaine commented 5 months ago

Is there any suggested solution for this issue? I'm running restic as part of a backup script written in Bash using set -e, so any non-zero exit code makes the script stop. My servers use ext4, so the snapshot suggestion doesn't work, and I'm not sure what the "proper ways to not have files disappear during backup runs" are, given that I'm running this on servers with live applications that I can't take down. I guess I have to somehow wrap the command and check whether it exits with a non-zero-and-also-non-three code, but that seems ugly.
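For the record, this is roughly what I mean by wrapping (a sketch only; REPO and SOURCE are placeholders): capture the exit code without tripping set -e, then accept only 0 and 3:

```shell
# Sketch for a `set -e` script: capture restic's exit code without aborting,
# then treat only 0 and 3 (unreadable source files) as success.
# REPO and SOURCE are placeholders.
backup_or_die() {
  rc=0
  restic backup --repo "$REPO" "$SOURCE" || rc=$?
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 3 ]; then
    echo "restic failed with exit code $rc" >&2
    return "$rc"
  fi
}
```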