Closed: F43RY closed this issue 1 year ago.
Hi @F43RY, I believe your issue could be caused by a lack of quorum. Three nodes might not be enough to handle this scenario properly. You could add more nodes, plus some arbiters, to make sure there is always quorum; a rough example follows.
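For illustration only, a minimal sketch of scaling up via the chart. The release name is hypothetical and the parameter names (taken from the bitnami/mongodb-sharded README) can differ between chart versions, so check the README of the version you deploy. Note that arbiters only apply to the shard replica sets, since MongoDB config server replica sets cannot contain arbiters.

    # Grow the config server replica set to 5 data-bearing members and add an
    # arbiter to the shard's replica set ("my-release" is an illustrative name).
    helm upgrade my-release bitnami/mongodb-sharded \
      --set configsvr.replicaCount=5 \
      --set shardsvr.dataNode.replicaCount=2 \
      --set shardsvr.arbiter.replicaCount=1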
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/mongodb-sharded-5.0.8
What steps will reproduce the bug?
Deploy a default cluster with 3 config servers, 1 mongos, and 1 shard server running a PSA (primary-secondary-arbiter) replica set. Delete the disk of cfg-0 and restart the pod: a fresh installation starts, even though cfg-1 and cfg-2 are still running with a primary and a secondary. A rough reproduction sketch is shown below.
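For reference, a hedged reproduction sketch, assuming an illustrative release name "my-release" and the chart's usual datadir-<pod-name> PVC naming (adjust to the names in your cluster):

    # Simulate the disk failure of the first config server. The PVC deletion
    # completes once the pod is gone; the StatefulSet then recreates the pod
    # with a brand-new, empty volume.
    kubectl delete pvc datadir-my-release-mongodb-sharded-configsvr-0
    kubectl delete pod my-release-mongodb-sharded-configsvr-0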
Are you using any custom parameters or values?
No response
What is the expected behavior?
cfg-0 rejoins the existing replica set, transitioning through STARTUP2 -> SECONDARY (and later PRIMARY if elected). A verification sketch follows.
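One way to check the member states from a surviving config server; pod name and credentials are illustrative, MONGODB_ROOT_PASSWORD is assumed to hold the root password, and older images ship the legacy mongo shell instead of mongosh:

    # Print the name and state (STARTUP2 / SECONDARY / PRIMARY) of each member.
    kubectl exec my-release-mongodb-sharded-configsvr-1 -- \
      mongosh -u root -p "$MONGODB_ROOT_PASSWORD" --eval \
      'rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })'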
What do you see instead?
A new replica set starts with the same configuration as the existing one, so two replica sets with identical configuration coexist: replicaset-1 = (cfg-0), replicaset-2 = (cfg-1, cfg-2).
Additional information
What is the procedure for rejoining member 0 of a replica set to its existing replica set after a disk failure?
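This was not answered in the thread, but the generic MongoDB procedure (independent of this chart) for re-adding a member that lost its disk is roughly: bring it back up with an empty data directory and the same replSetName, then fix the membership from the current primary so the node rejoins via initial sync. A hedged sketch with placeholder host names ("configsvr-0-host:27017" stands for the pod's stable DNS name):

    # Run against the current PRIMARY of the surviving replica set.
    mongosh -u root -p "$MONGODB_ROOT_PASSWORD" --eval 'rs.status()'                          # confirm the primary and current members
    mongosh -u root -p "$MONGODB_ROOT_PASSWORD" --eval 'rs.remove("configsvr-0-host:27017")'  # only if the stale member is still listed
    mongosh -u root -p "$MONGODB_ROOT_PASSWORD" --eval 'rs.add("configsvr-0-host:27017")'     # re-add once cfg-0 is up with an empty data dir

In the scenario reported here, the chart re-initiates a brand-new replica set on cfg-0 when its data directory is empty, so that freshly initiated data would likely need to be wiped again before the rs.add step can take effect.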