cvallance / mongo-k8s-sidecar

Kubernetes sidecar for Mongo
MIT License
440 stars 298 forks

(Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017]) - After scaling up #117

Open davorceman opened 4 years ago

davorceman commented 4 years ago

I created a cluster with two replicas, and it works correctly. Then I increased the replica count from 2 to 3, and I'm getting this error only on that one new sidecar.

```
> mongo-k8s-sidecar@0.1.0 start /opt/cvallance/mongo-k8s-sidecar
> forever src/index.js

warn:    --minUptime not set. Defaulting to: 1000ms
warn:    --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
Using mongo port: 27017
Starting up mongo-k8s-sidecar
The cluster domain 'cluster.local' was successfully verified.
Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35)
    at Pool.emit (events.js:182:13)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12)
    at Object.onceWrapper (events.js:273:13)
    at Connection.emit (events.js:182:13)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:189:49)
    at Object.onceWrapper (events.js:273:13)
    at Socket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
  name: 'MongoError',
  message: 'failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]' }
```

But when I check the sidecar in mongo-0, I see that this third member was also added and the cluster is working. I connected to it and checked; the data is replicated to it.

```
Addresses to add:    [ 'mongo-2.mongo.mongo.svc.cluster.local:27017' ]
Addresses to remove: []
replSetReconfig {
  _id: 'rs1',
  version: 4,
  protocolVersion: 1,
  members: [
    { _id: 0,
      host: 'mongo-0.mongo.mongo.svc.cluster.local:27017',
      arbiterOnly: false, buildIndexes: true, hidden: false,
      priority: 1, tags: {}, slaveDelay: 0, votes: 1 },
    { _id: 1,
      host: 'mongo-1.mongo.mongo.svc.cluster.local:27017',
      arbiterOnly: false, buildIndexes: true, hidden: false,
      priority: 1, tags: {}, slaveDelay: 0, votes: 1 },
    { _id: 2, host: 'mongo-2.mongo.mongo.svc.cluster.local:27017' } ],
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: 60000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: 5ea8507d4645a422737db5e2 } }
```
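As a side note, the membership that the sidecar logs above can be cross-checked directly from the mongo shell on the primary (a diagnostic sketch; the `mongo-0` pod and `mongo` container names are taken from the StatefulSet below):

```shell
# Print each replica set member and its current state (PRIMARY, SECONDARY, ...)
kubectl exec -it mongo-0 -c mongo -- \
  mongo --quiet --eval 'rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })'
```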

What does this error mean? I have struggled with it for days while trying to figure it out. This is my configuration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: {{ .Values.Namespace }}

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: {{ .Values.Namespace }}
  labels:
    role: mongo
    environment: {{ .Values.Environment }}
spec:
  ports:
  - port: {{ .Values.Mongo.Port }}
    targetPort: {{ .Values.Mongo.Port }}
  clusterIP: None
  selector:
    role: mongo
    environment: {{ .Values.Environment }}

---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  namespace: {{ .Values.Namespace }}
spec:
  serviceName: mongo
  replicas: {{ .Values.Replicas }}
  template:
    metadata:
      labels:
        role: mongo
        environment: {{ .Values.Environment }}
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: {{ .Values.Mongo.Image }}
          command:
            - mongod
            - "--replSet"
            - rs1
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: {{ .Values.Mongo.Port }}
              protocol: TCP
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment={{ .Values.Environment }}"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongo"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: aws-efs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
```

The mongo image is 3.4 because I was not able to get the newest version working. Kubernetes is 1.14 on EKS.
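For what it's worth, the sidecar runs in the same pod as mongod and reaches it over 127.0.0.1, so `ECONNREFUSED 127.0.0.1:27017` usually means mongod in that particular pod was not yet accepting connections (or had crashed) when the sidecar's workloop ran. A couple of checks that may help narrow it down (the `mongo-2` pod and `mongo` container names are assumptions based on the log and StatefulSet above):

```shell
# Was mongod in the new pod still starting up, or did it crash?
kubectl logs mongo-2 -c mongo --tail=20

# Can mongod in that pod be reached on 27017 at all?
kubectl exec mongo-2 -c mongo -- mongo --quiet --eval 'db.adminCommand({ ping: 1 })'
```

If mongod comes up fine and the error stops after the first few workloop iterations, it is likely just a startup race between the two containers rather than a broken replica set.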

igor9silva commented 3 years ago

Hi @davorceman, I'm facing a similar problem. Did you ever find a solution?

xu756 commented 1 year ago

> Hi @davorceman, I'm facing a similar problem. Did you ever find a solution?

Me too.

davorceman commented 1 year ago

Hi @igor9silva @xu756

I don't know... If I remember correctly, it worked even with this error. I never found a solution, since we moved to DocumentDB shortly after.

ARu1ToT commented 1 year ago

Hi @davorceman, I'm facing the same problem, but currently our MongoDB is working well. Did you ever find a way to solve it? Thanks.