Closed tengzhuofei closed 3 years ago
Hello @tengzhuofei,
I'm not sure your backup strategy follows one of the strategies listed in the MongoDB documentation, so the issues you are seeing during restoration may stem from that. From your logs, it looks to me like the .wt files are not the expected ones, and the problem shouldn't be related to data corruption.
Is this error not related to data corruption?
```
Failed to salvage WiredTiger metadata","attr":{"details":"-31809: WT_TRY_SALVAGE: database corruption detected
{"t":{"$date":"2021-01-27T02:49:18.421+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":0,"message":"[1611715758:421756][101:0x7f227cb99100], file:WiredTigerHS.wt, hs_access: __wt_block_read_off, 283: WiredTigerHS.wt: read checksum error for 4096B block at offset 32768: block header checksum of 0xb0966b0a doesn't match expected checksum of 0xa35cb238"}}
```
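For context on what that log line means: WiredTiger stores a checksum in each block's header and recomputes it over the block on read; a mismatch means the bytes on disk changed after the block was written, which is exactly what a copy taken mid-write can produce. A rough self-contained illustration of the idea, using Python's `zlib.crc32` as a stand-in for WiredTiger's actual checksum and a made-up block layout (a sketch of the concept, not the real on-disk format):

```python
import struct
import zlib

def write_block(payload: bytes) -> bytes:
    """Prepend a 4-byte checksum header to the payload (toy block format)."""
    return struct.pack("<I", zlib.crc32(payload)) + payload

def read_block(block: bytes) -> bytes:
    """Verify the header checksum before returning the payload."""
    (stored,) = struct.unpack("<I", block[:4])
    payload = block[4:]
    actual = zlib.crc32(payload)
    if stored != actual:
        raise ValueError(
            f"read checksum error: block header checksum of {stored:#x} "
            f"doesn't match expected checksum of {actual:#x}"
        )
    return payload

block = write_block(b"some b-tree page bytes")
assert read_block(block) == b"some b-tree page bytes"

# Flip one payload byte, as a copy taken mid-write might:
corrupted = block[:10] + bytes([block[10] ^ 0xFF]) + block[11:]
try:
    read_block(corrupted)
except ValueError as e:
    print("detected:", e)
```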
Hello @tengzhuofei,
You are right that it is reporting data corruption. What I meant is that MongoDB may be raising that error as a general one because hard-linking the data directory is not a supported backup strategy. Apart from that, I'm not certain where this problem may be coming from. Do you get the same error when following one of the suggested backup strategies, or when backing up your data with a third-party tool?
In my new MongoDB sharded environment the amount of data is not very large, and files copied through hard links can be restored into the new cluster.
Hello @tengzhuofei,
Could you share the steps you followed to back up and restore your data, as well as your MongoDB sharded configuration? It would be really helpful for reproducing the same use case.
I use https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded, running 3 configsvr, 2 mongos, and 3 shards, each with 2 replica set members and 1 arbiter. I backed up the data by hard linking the data directories of the primary nodes of the config and data pods and copying the files, and then had problems restoring the data. Because my secondaries have journaling enabled and their journal and data files are on the same volume, I did not stop all write operations, but I did stop the balancer. The uncompressed size of the data is 170Gi. I copied the entire /bitnami/mongodb-sharded for each pod, using the same replica set keys in the new cluster.
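For what it's worth, a hard link is just a second directory entry for the same inode, so "backing up" via hard links does not freeze the data: any write mongod makes after the link is created shows up in the "backup" too. A quick self-contained demonstration (the file names are made up for illustration):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    data = os.path.join(d, "collection-0.wt")       # stand-in for a WiredTiger file
    backup = os.path.join(d, "backup-collection-0.wt")

    with open(data, "wb") as f:
        f.write(b"page v1")

    os.link(data, backup)                           # the "hard link backup"

    # mongod keeps writing after the link is taken...
    with open(data, "r+b") as f:
        f.write(b"page v2")

    # ...and the "backup" sees the new bytes: same inode, same data blocks.
    with open(backup, "rb") as f:
        print(f.read())        # b'page v2', not the v1 we meant to back up
    assert os.stat(data).st_ino == os.stat(backup).st_ino
```

This is why the documentation requires stopping writes (or using a filesystem-level snapshot) before copying journal and data files that live on the same volume.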
OK, I will have to stop all writes on the secondary nodes.
Hi, thanks for providing feedback. I suppose the guide worked for you and I can proceed to close this issue, is that right?
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
I am proceeding to close this issue. Please don't hesitate to reopen it if needed.
mongodb bitnami/mongodb-sharded:4.4.1-debian-10-r60
mongod-2-0.log mongod-2-1.log
Using mongod --repair still does not fix the problem.
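For reference, the repair described in the MongoDB docs runs mongod offline against the data directory while no other mongod holds it. The wrapper below is a sketch: the Bitnami data path and the `RUN_REPAIR` guard are assumptions for illustration, and by default it only prints the command so it is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of an offline repair run. The default dbpath below is an
# assumption based on the Bitnami chart layout; adjust for your pod.
DBPATH="${1:-/bitnami/mongodb/data/db}"
CMD="mongod --dbpath $DBPATH --repair"

# mongod must be stopped on this dbpath first. RUN_REPAIR is a
# hypothetical guard so the sketch dry-runs unless explicitly enabled.
if [ "${RUN_REPAIR:-0}" = "1" ]; then
    $CMD
else
    echo "dry run; would execute: $CMD"
fi
```

Note that repair rewrites what it can salvage; it cannot reconstruct blocks whose checksums no longer match, which is consistent with it failing here.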
mongodb.log