Hm, sounds weird. I've been using this for months without such issues.
@foxish Any ideas?
@unguiculus it turned out two of the five worker nodes appeared healthy but actually had a Docker bridge issue (they weren't picking up the correct subnet from flannel's configuration). DNS resolution was consequently timing out and the replicaSet wasn't able to resolve its other expected members. Closing accordingly, thanks for looking.
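In case it helps anyone debugging something similar, here is a rough sketch of how this kind of node issue can be spotted (paths and names are assumptions for a flannel + Docker setup and this chart's default naming for the hub-dev-mongo release, not exactly what I ran):

# On each worker node, compare the subnet flannel allocated with what the
# Docker bridge is actually using; on a broken node they won't match.
cat /run/flannel/subnet.env        # FLANNEL_SUBNET=<pod subnet for this node>
ip addr show docker0               # bridge IP should sit inside that subnet

# From one of the MongoDB pods, check that the peers resolve through the
# headless service (getent is used here since nslookup isn't always present):
kubectl exec hub-dev-mongo-mongodb-replicaset-0 -- \
  getent hosts hub-dev-mongo-mongodb-replicaset-1.hub-dev-mongo-mongodb-replicaset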
Hi @ggn06awu, I am having a similar issue. Can you help me with what needs to be corrected? Thanks in advance.
I am also facing the same issue! In my case, all 3 replica set members are created, but without a valid replica set config:
"ismaster" : false,
"secondary" : false,
"info" : "Does not have a valid replica set config",
"isreplicaset" : true,
Having the same issue.
Simply running:
cfg = rs.config()
rs.reconfig(cfg, {force:true})
Gets the cluster back up again, but clearly this isn't a fix.
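In case it saves someone a few keystrokes, the same workaround run through kubectl (pod and release names are placeholders; a forced reconfig should only be run against a single member):

kubectl exec -it my-release-mongodb-replicaset-0 -- mongo --quiet --eval '
  cfg = rs.config();
  rs.reconfig(cfg, { force: true });
  // print each member and its state so you can see the set re-form
  rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr); })'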
I'm also able to reproduce the error if I do a series of db.shutdownServer() calls across my clusters.
I changed the network driver from flannel to Calico (on CentOS) and it worked fine for me.
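The CNI change mostly helps by restoring pod-to-pod traffic across nodes; an easy sanity check before reinstalling the chart (the busybox image and pod names are just placeholders):

kubectl run net-a --image=busybox --restart=Never -- sleep 3600
kubectl run net-b --image=busybox --restart=Never -- sleep 3600
kubectl get pods -o wide                            # note each pod's IP and node
kubectl exec net-a -- ping -c 3 <IP of net-b>       # should work across nodes
kubectl exec net-a -- nslookup kubernetes.default   # cluster DNS sanity check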
Thanks, this was my problem on Azure AKS v1.19.9. Switching (from Azure CNI) to Calico while creating a new AKS cluster resolved my replica set config problem; rs.status() had been returning: errmsg: "no replset config has been received"
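For anyone recreating the cluster the same way, the network policy has to be chosen at AKS creation time. Roughly (resource group, cluster name and node count are placeholders, and the supported plugin/policy combinations depend on your AKS version):

az aks create \
  --resource-group my-rg \
  --name my-aks \
  --network-plugin kubenet \
  --network-policy calico \
  --node-count 3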
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report
When running the helm install (see command below), Kubernetes spins up 3 pods successfully, but the replicaSet isn't forming. The behaviour is erratic: sometimes I've had 2/3 nodes join the replicaSet, sometimes I've had none join, and sometimes I get two distinct primaries and one member reading "Does not have a valid replica set config" (as is the case in this report):
Install Command
helm install -f values.yaml --name hub-dev-mongo stable/mongodb-replicaset
values.yaml
Mongo isMaster output
Version of Helm and Kubernetes:
Which chart:
stable/mongodb-replicaset
What happened:
Installed the chart; three pods are created and deployed to three members of the cluster. Not all members join the replicaset: sometimes two will, usually none at all (observed by running sequential installs and then deletes + purges, as in the commands below). It feels like race-condition behaviour.
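The delete + reinstall cycle referred to above was essentially just this (Helm 2 syntax, matching the install command in this report):

helm delete --purge hub-dev-mongo
helm install -f values.yaml --name hub-dev-mongo stable/mongodb-replicaset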
Here is the rs.conf() on each:
Deleting a pod doesn't seem to help.
What you expected to happen:
All members, once started, join the nominated replicaSet.
How to reproduce it (as minimally and precisely as possible): Run the install command; sometimes 2 members join the replicaSet, usually none.
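A quick way to see how many members actually joined after each attempt (pod names assume the chart's default <release>-mongodb-replicaset-N pattern for this release):

for i in 0 1 2; do
  kubectl exec hub-dev-mongo-mongodb-replicaset-$i -- mongo --quiet --eval \
    'var d = db.isMaster(); print((d.setName || "no-set") + " ismaster=" + d.ismaster + " secondary=" + d.secondary)'
done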
Anything else we need to know: This is a bare-metal install of Kubernetes on CoreOS. The only noteworthy aspect of this setup is that I'm using the nfs-client provisioner to serve PVC requests from an external NAS (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client). Perhaps a race condition around that and its performance (it seems fast enough...)?
Not sure if that's relevant; volume claims do work fine for other applications.
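If the NFS provisioner were the problem, I'd expect it to show up as pending or slow claims, which is easy to rule in or out (the release label assumed here follows the chart's defaults):

kubectl get pvc -l release=hub-dev-mongo                   # all three claims should be Bound
kubectl get pods -l release=hub-dev-mongo                  # READY 1/1 with no restarts
kubectl describe pod hub-dev-mongo-mongodb-replicaset-0    # check Events for mount/attach errors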