Which version of duperemove are you using? If built from git, what is the HEAD commit used to build it? Also, can you try the latest git master?
It reports:
duperemove v0.12.dev
I did a git clone on 8/29, and built from those sources.
Ok, so you are using the latest and greatest. Looking around csum_extent, it seems the error you are getting is not from pread, because the error string "Unable to read file ..." should have been printed and you indicate that's not the case. Can you make a fresh checkout of the code and add a print statement to check whether the return value of csum_blocks in csum_extent is negative?
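To make that concrete, here is a minimal, self-contained sketch of the kind of debug print I mean. csum_blocks_stub is just a stand-in for the real csum_blocks call inside csum_extent, whose exact arguments aren't shown here:

```c
#include <stdio.h>

/* Stand-in for the real csum_blocks(); assume it returns 0 on success
 * and a negative value on failure. */
static int csum_blocks_stub(void)
{
	return -5; /* simulate a failure for the sake of the example */
}

int main(void)
{
	int ret = csum_blocks_stub();

	/* This is the kind of print to add inside csum_extent(): surface
	 * a negative return value from csum_blocks so we can see it. */
	if (ret < 0)
		fprintf(stderr, "csum_blocks returned %d\n", ret);

	return 0;
}
```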
Another thing to check - perhaps the files are too big and they overflow - can you tell me what their sizes are?
If it happens again, I’ll give it a try.
I ended up copying the files to another sub volume, deleting the original, and then copying them back. This seems to have “fixed” the issue. So maybe it was an issue with the files?
Most of the files are 16GB. One is 128GB. But the 128GB file is in a folder with a bunch of other files that seemed to be ok. The files are dd images of various systems that I back up.
A few other notes, I use btrfs and make use of compression (zstd). I’m on the latest kernel (5.8.5) on CentOS.
Yeah, unfortunately I can't tell what was going wrong. Initially I thought it could have been a simple arithmetic overflow, but that's not possible since it would imply an extent over 2G, which neither xfs nor btrfs supports. So I'm closing this issue; if you experience this again, just open a new issue.
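For reference, here is a small stand-alone illustration (not duperemove code) of the overflow reasoning: a plain signed 32-bit length only wraps once you go past 2 GiB, so a simple overflow would require a single extent larger than that.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* A signed 32-bit length tops out at 2^31 - 1 bytes, i.e. just
	 * under 2 GiB. */
	int64_t limit = INT32_MAX;
	printf("int32 limit: %lld bytes (~%.2f GiB)\n",
	       (long long)limit, limit / (1024.0 * 1024.0 * 1024.0));

	/* A hypothetical 2 GiB extent length no longer fits, so storing
	 * it in a 32-bit int wraps - that is the overflow ruled out
	 * above, since no single extent would be that large. */
	int64_t extent_len = limit + 1;
	int32_t wrapped = (int32_t)extent_len;
	printf("2 GiB stored in int32: %d\n", wrapped);

	return 0;
}
```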
@ubenmackin this should be fixed now if you git pull & build again from master
While running duperemove, some of my files get an error similar to the one below:
Out of hundreds of files, about 15 get an error like this. Not sure what it means, but reporting it here in case it can be of help.