opendedup / sdfs

Deduplication-Based Filesystem

recoverability and reliability issue. #125

Closed richman1000000 closed 2 years ago

richman1000000 commented 3 years ago

I'm concerned about SDFS recoverability. I ran several tests and SDFS does not recover. Currently there is no way to recover an SDFS volume. Here is my test:

============= Prepare =============
1) Create an SDFS volume.
2) Copy a file onto the volume (example: debian-10.10.0-amd64-xfce-CD-1.iso).
3) Compute a checksum of the file (debian-10.10.0-amd64-xfce-CD-1.iso, SHA256: 24FEE00ED402C4A82CFEC535870AB2359EC12A7DD4EED89C98FD582BC7CF3B25).
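For reproducibility, a minimal shell sketch of the prepare phase. The volume name, capacity, and mount point are my assumptions based on the standard mkfs.sdfs/mount.sdfs quickstart, not details taken from the report:

```sh
# Create and mount a small SDFS volume (name, capacity, and mount point are hypothetical).
sudo mkfs.sdfs --volume-name=test --volume-capacity=10GB
sudo mount.sdfs test /media/test

# Copy a known file onto the volume and record its checksum.
cp debian-10.10.0-amd64-xfce-CD-1.iso /media/test/
sha256sum /media/test/debian-10.10.0-amd64-xfce-CD-1.iso
# expected: 24fee00ed402c4a82cfec535870ab2359ec12a7dd4eed89c98fd582bc7cf3b25
```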

============= Corrupt and then check =============
1) Remove one of the chunk files from the SDFS backend folder (in my case /test/chunkstore/chunks/395/3951259684649167869).
2) Run sdfs check.
3) Try to read the file. You will get an error in the log files:

Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: java.io.IOException: not able to fetch hashmap for 3951259684649167869

Result: this error is expected, since the volume is corrupted. But there must be a way to continue using the same volume.
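The corruption step in shell form, reusing the chunk path from my test and the hypothetical /media/test mount point from the sketch above; the exact check command is left out rather than guessed:

```sh
# Simulate backend corruption by deleting a single chunk file.
# The chunk path is the one from my test; yours will differ.
sudo rm /test/chunkstore/chunks/395/3951259684649167869

# (run the SDFS volume check here, as in step 2 above)

# Reading the file back now fails with the IOException quoted above.
sha256sum /media/test/debian-10.10.0-amd64-xfce-CD-1.iso
```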

============= Recovery attempt =============
1) Make a new copy of debian-10.10.0-amd64-xfce-CD-1.iso onto the volume.
2) Calculate the checksum of the new copy on the SDFS volume (SHA256: BB913069D28CF23188E93812B7AE41A752B37A33FF726BA69B05E6C4BF33960D).

Result: the new copy is broken.
Expected: after the volume check, I expected NEW files on the same volume to be intact.
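A sketch of the recovery attempt under the same assumptions. The likely mechanism, which the report does not state but which would explain the result, is that SDFS deduplicates by chunk hash: the new copy's chunks hash to entries the index still considers present, so the deleted chunk data is never rewritten and the new copy references the same missing chunk:

```sh
# Copy the same ISO again under a new name (the name is hypothetical).
cp debian-10.10.0-amd64-xfce-CD-1.iso /media/test/debian-copy2.iso

# The checksum of the new copy does not match the source file.
sha256sum /media/test/debian-copy2.iso
# got:      bb913069d28cf23188e93812b7ae41a752b37a33ff726ba69b05e6c4bf33960d
# expected: 24fee00ed402c4a82cfec535870ab2359ec12a7dd4eed89c98fd582bc7cf3b25
```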

richman1000000 commented 2 years ago

#124

richman1000000 commented 2 years ago

Is there any help on this? We need a tool to recover an SDFS volume in case of errors!

richman1000000 commented 2 years ago

Sooo, has the project died?