markfasheh / duperemove

Tools for deduping file systems
GNU General Public License v2.0

Skipping file due to error -2013265920 #241

Closed ubenmackin closed 4 years ago

ubenmackin commented 4 years ago

While running duperemove, some of my files get an error similar to the below:

Error -2013265920 from csum_extent()
Skipping file due to error -2013265920 from function csum_by_extent (Unknown error -2013265920), /mnt2/backup/images/rp1/rp1_22.dd

Out of hundreds of files, about 15 get an error like this. Not sure what it means, but I'm reporting it here in case it can be of help.

lorddoskias commented 4 years ago

Which version of duperemove are you using? If built from git, what is the HEAD commit used to build it? Also, can you try the latest git master?

ubenmackin commented 4 years ago

It reports:

duperemove v0.12.dev

I did a git clone on 8/29, and built from those sources.

lorddoskias commented 4 years ago

OK, so you are using the latest and greatest. Looking around csum_extent, it seems the error you are getting is not from pread, because in that case the error string "Unable to read file ..." would have been printed, and you indicate that's not happening. Can you make a fresh checkout of the code and add a print statement to check whether the return value of csum_blocks in csum_extent is negative?

Another thing to check: perhaps the files are too big and something overflows. Can you tell me their size?

ubenmackin commented 4 years ago

If it happens again, I’ll give it a try.

I ended up copying the files to another subvolume, deleting the originals, and then copying them back. This seems to have “fixed” the issue. So maybe it was a problem with the files themselves?

Most of the files are 16GB. One is 128GB. But the 128GB file is in a folder with a bunch of other files that seemed to be OK. The files are dd images of various systems that I back up.

A few other notes, I use btrfs and make use of compression (zstd). I’m on the latest kernel (5.8.5) on CentOS.

lorddoskias commented 4 years ago

Yeah, unfortunately I can't tell what was going wrong. Initially I thought it could have been a simple arithmetic overflow, but that's not possible since it would imply an extent over 2G, which neither XFS nor btrfs supports. So I'm closing this issue; if you experience this again, please open a new one.

ericzinnikas commented 4 years ago

@ubenmackin this should be fixed now, if you git pull & build again from master