opendedup / sdfs

Deduplication Based Filesystem

error during copy of file using rsync #127

Closed: richman1000000 closed this issue 2 years ago

richman1000000 commented 3 years ago

While copying a 20GB file onto an SDFS volume, rsync stops. In the SDFS log I see this error:

2021-10-06 08:30:43,065 [FATAL] [sdfs] [org.opendedup.sdfs.io.SparseDedupFile] [587] [pool-7-thread-3] - unable to add chunk at position 229376
java.io.IOException: Unable to write retried 11
    at org.opendedup.sdfs.io.SparseDedupFile.writeCache(SparseDedupFile.java:554)
    at org.opendedup.sdfs.io.WritableCacheBuffer.close(WritableCacheBuffer.java:945)
    at org.opendedup.sdfs.io.WritableCacheBuffer.run(WritableCacheBuffer.java:1285)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
2021-10-06 08:30:43,235 [WARN] [sdfs] [org.opendedup.sdfs.io.SparseDedupFile] [556] [Thread-27] - Data was not all inserted, will retry
2021-10-06 08:30:43,379 [WARN] [sdfs] [org.opendedup.sdfs.io.WritableCacheBuffer] [959] [pool-7-thread-3] - unable to close 229376
java.io.IOException: java.io.IOException: Unable to write retried 11
    at org.opendedup.sdfs.io.SparseDedupFile.writeCache(SparseDedupFile.java:589)
    at org.opendedup.sdfs.io.WritableCacheBuffer.close(WritableCacheBuffer.java:945)
    at org.opendedup.sdfs.io.WritableCacheBuffer.run(WritableCacheBuffer.java:1285)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.IOException: Unable to write retried 11
    at org.opendedup.sdfs.io.SparseDedupFile.writeCache(SparseDedupFile.java:554)
    ... 5 more
2021-10-06 08:30:43,379 [WARN] [sdfs] [org.opendedup.sdfs.io.SparseDedupFile] [556] [Thread-27] - Data was not all inserted, will retry
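For context, the failing transfer was of roughly this shape; the file name, rsync flags, and mount point below are illustrative placeholders, not details taken from the report:

    # Hypothetical paths: bigfile.img stands for the ~20GB source file,
    # /media/pool0 for the mounted SDFS volume.
    rsync -av --progress bigfile.img /media/pool0/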

richman1000000 commented 2 years ago

Hi! I found a simple way to reproduce this issue. Run the following command 10 times inside any directory on the SDFS volume:

    echo "$(date +%Y%m%d%H%M%S) start" >> move_time.time

This instantly triggers the error (see attached screenshot).