opendedup / sdfs

Deduplication Based Filesystem

broken file for SDFS backend #124

Closed. richman1000000 closed this issue 2 years ago.

richman1000000 commented 3 years ago

The file /bacdedup/d10/chunkstore/chunks/342/3429453820666736172 is broken. How can I continue backing up further copies of duplicates without completely recreating the SDFS volume?

2021-06-18 15:10:08,901 [WARN] [sdfs] [org.opendedup.collections.RocksDBMap] [987] [pool-5-thread-10]  -  miss for [fae87e5bd5b668efeec9702b635735d3] [3429453820666736172]
2021-06-18 15:10:08,950 [WARN] [sdfs] [org.opendedup.collections.RocksDBMap] [991] [pool-5-thread-10]  -  miss for [fae87e5bd5b668efeec9702b635735d3] [3429453820666736172] found at [3429453820666736172]
2021-06-18 15:10:08,950 [ERROR] [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [1901] [pool-5-thread-10]  - unable to read at 0 0 flen 0 file=/bacdedup/d10/chunkstore/chunks/342/3429453820666736172 openFiles 96
java.util.concurrent.ExecutionException: java.io.IOException: java.io.IOException: not able to fetch hashmap for 3429453820666736172
        at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:566)
        at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:527)
        at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:104)
        at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:240)
        at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2313)
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2279)
        at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155)
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045)
        at com.google.common.cache.LocalCache.get(LocalCache.java:3962)
        at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985)
        at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946)
        at org.opendedup.sdfs.filestore.HashBlobArchive.getChunk(HashBlobArchive.java:1599)
        at org.opendedup.sdfs.filestore.HashBlobArchive.getBlock(HashBlobArchive.java:940)
        at org.opendedup.sdfs.filestore.BatchFileChunkStore.getChunk(BatchFileChunkStore.java:120)
        at org.opendedup.sdfs.filestore.ChunkData.getChunk(ChunkData.java:201)
        at org.opendedup.collections.RocksDBMap.getData(RocksDBMap.java:993)
        at org.opendedup.sdfs.filestore.HashStore.getHashChunk(HashStore.java:231)
        at org.opendedup.sdfs.servers.HashChunkService.fetchChunk(HashChunkService.java:143)
        at org.opendedup.sdfs.servers.HCServiceProxy.fetchChunk(HCServiceProxy.java:283)
        at org.opendedup.sdfs.io.WritableCacheBuffer$Shard.run(WritableCacheBuffer.java:1251)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.IOException: java.io.IOException: not able to fetch hashmap for 3429453820666736172
        at org.opendedup.sdfs.filestore.HashBlobArchive$5.load(HashBlobArchive.java:445)
        at org.opendedup.sdfs.filestore.HashBlobArchive$5.load(HashBlobArchive.java:440)
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529)
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278)
        ... 17 more
Caused by: java.io.IOException: not able to fetch hashmap for 3429453820666736172
        at org.opendedup.sdfs.filestore.HashBlobArchive.getRawMap(HashBlobArchive.java:599)
        at org.opendedup.sdfs.filestore.HashBlobArchive.access$200(HashBlobArchive.java:75)
        at org.opendedup.sdfs.filestore.HashBlobArchive$5.load(HashBlobArchive.java:443)
        ... 20 more
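
For reference, the "flen 0" in the HashBlobArchive error suggests the archive file on disk may be empty or truncated rather than merely unreadable. A minimal diagnostic sketch (plain java.nio only, no SDFS APIs; the path and the zero-length assumption are taken from the log above) to confirm whether the file still holds any data:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical check: does the chunk archive file reported in the log
// still exist, and does it contain any bytes? "flen 0" in the error
// hints that the file may be empty or truncated on disk.
public class CheckChunkFile {
    public static void main(String[] args) throws IOException {
        // Path copied from the log output above.
        Path chunk = Paths.get("/bacdedup/d10/chunkstore/chunks/342/3429453820666736172");

        if (!Files.exists(chunk)) {
            System.out.println("chunk file is missing from disk");
            return;
        }

        long size = Files.size(chunk);
        System.out.println("chunk file size: " + size + " bytes");
        if (size == 0) {
            // An empty archive would explain the "unable to read at 0 0 flen 0" error.
            System.out.println("file is empty; the data backing this archive appears to be lost");
        }
    }
}
```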