bitnami / charts

Bitnami Helm Charts
https://bitnami.com

Connect MongoDB replicaset from outside kubernetes cluster. #5292

Closed GioPat closed 3 years ago

GioPat commented 3 years ago

Which chart: MongoDB - version 10.5.2

Describe the issue
I am not able to connect to the MongoDB replicaset from outside the cluster using the following connection command:

mongo "mongodb://<external_ip_address_of_lb_service_1>:27017,<external_ip_address_of_lb_service_2>:27017/?replicaSet=rs0" -u root

After entering the password at the prompt and waiting ~30 seconds I get this error: MongoServerSelectionError: getaddrinfo ENOTFOUND mongodb-0.mongodb-headless.mongodb.svc.cluster.local

I need this connection to manage the cluster for development purposes.

I've found the official MongoDB guide, which specifies a connectivity.replicaSetHorizons[] option, but it's not quite clear how it is used.
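For reference, that option appears to belong to the MongoDB Kubernetes Operator resource rather than to this chart; there it looks roughly like the following sketch, where the horizon name "external" and the hostnames are placeholders:

    # sketch of the operator's split-horizon setting; "external" and the hostnames are placeholders
    spec:
      connectivity:
        replicaSetHorizons:
          - "external": "mongo0.example.com:27017"
          - "external": "mongo1.example.com:27017"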

However, if I use the following connection string: mongo "mongodb://a8d44d7be4b4047e4998e882673ed0c3-1156864585.eu-central-1.elb.amazonaws.com:27017" -u root the session opens correctly (a single-node connection without replica set discovery).

When I installed the helm chart I received the following instructions for connecting from outside the cluster:

To connect to your database nodes from outside, you need to add both primary and secondary nodes hostnames/IPs to your Mongo client. To obtain them, follow the instructions below:

  NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
        Watch the status with: 'kubectl get svc --namespace mongodb -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/component=mongodb,pod" -w'

    MongoDB nodes domain: You will have a different external IP for each MongoDB node. You can get the list of external IPs using the command below:

        echo "$(kubectl get svc --namespace mongodb -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/component=mongodb,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"

    MongoDB nodes port: 27017

It's not quite clear what "you need to add both primary and secondary nodes hostnames/IPs to your Mongo client" means.
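Presumably it means collecting one LoadBalancer address per node and listing them all in a single connection string, roughly like this sketch (namespace and label selector taken from the NOTES above; rs0 is the chart's default replica set name, and on EKS the address may be under .hostname instead of .ip):

    # collect the external address of every node's LoadBalancer Service
    HOSTS=$(kubectl get svc --namespace mongodb \
      -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/component=mongodb,pod" \
      -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' \
      | tr ' ' '\n' | sed 's/$/:27017/' | paste -sd, -)
    # list every node in one connection string
    mongo "mongodb://${HOSTS}/?replicaSet=rs0" -u root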

To Reproduce
Steps to reproduce the behavior:

  1. Install the Bitnami MongoDB helm chart
  2. Change the following values (see the install sketch below):
     2.1 architecture=replicaset
     2.2 externalAccess.enabled=true
     2.3 externalAccess.autoDiscovery.enabled=true
  3. Wait for the cluster to assign an external IP to each node's LoadBalancer service
  4. Connect using the commands above
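As a sketch, the corresponding install roughly looks like this (release name and namespace are placeholders; rbac.create and serviceAccount.create are included because the auto-discovery init container appears to need RBAC to query the Services):

    # sketch: release name "mongodb" and namespace "mongodb" are placeholders
    helm install mongodb bitnami/mongodb --namespace mongodb \
      --set architecture=replicaset \
      --set externalAccess.enabled=true \
      --set externalAccess.autoDiscovery.enabled=true \
      --set rbac.create=true \
      --set serviceAccount.create=true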

Expected behavior
I'd expect the first command to work.

Version of Helm and Kubernetes:

Helm version:

version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
percenuage commented 3 years ago

Hello @GioPat, I'm using the MongoDB Helm chart with a replicaset on GKE. I'm using this mongo URI: mongo "mongodb://root:<root_password>@<lb_ip_1>:27017,<lb_ip_2>:27017/admin?replicaSet=rs0&readPreference=secondaryPreferred".

It works for me.

My values:

fullnameOverride: db

replicaCount: 2

architecture: replicaset

auth:
  enabled: true

arbiter:
  enabled: false

persistence:
  size: 2Gi

externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true

serviceAccount:
  create: true

rbac:
  create: true

metrics:
  enabled: false

pdb:
  create: false

resources:
  requests:
    memory: 512Mi
  limits:
    memory: 512Mi

PS: Maybe you should try a fresh install and remove all your mongo pvc.
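For reference, a values file like the one above would be applied roughly like this (the namespace is a placeholder and "db" simply matches the fullnameOverride):

    # sketch: release name and namespace are placeholders
    helm install db bitnami/mongodb -f values.yaml --namespace mongodb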

GioPat commented 3 years ago

Hello @percenuage, after deleting the namespace entirely and freshly reinstalling the helm chart with a few of your options, it is working without problems. What could have caused the issue?

Thank you very much.

percenuage commented 3 years ago

I had the same issue today and found this fix in another GitHub issue. I think it's because MongoDB keeps its first configuration on the PVC. Since uninstalling the release does not delete the PVCs, you should delete them manually.
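A sketch of that cleanup between installs (release name, namespace and PVC names are examples; list the PVCs first to get the real names):

    helm uninstall mongodb --namespace mongodb
    kubectl get pvc --namespace mongodb
    # PVC names below are examples
    kubectl delete pvc datadir-mongodb-0 datadir-mongodb-1 --namespace mongodb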

soft-sysops commented 3 years ago

Hello,

Did anyone try to use this with tls: true?

Best

migruiz4 commented 3 years ago

Hi @GioPat and @percenuage,

We are looking into this issue, I will get back to you with any update.

Sorry for the inconvenience.

percenuage commented 3 years ago

Hello,

Did anyone try to use this with tls: true?

Best

Hello, yes, and I get warnings in the logs that cause a CrashLoopBackOff (readiness probe failed) ^^

(screenshot: readiness probe failure warnings in the MongoDB logs)

soft-sysops commented 3 years ago

@percenuage

I have tried with tls: true and it is working with the mongo shell, so I can remotely access my replicaset via my custom domain, but I can't connect with MongoDB Compass.

percenuage commented 3 years ago

@percenuage

I have tried with tls: true and it is working with the mongo shell, so I can remotely access my replicaset via my custom domain, but I can't connect with MongoDB Compass.

Good! Could you share your Helm values plz? And your mongo client uri? I would like to try on my side :)

soft25 commented 3 years ago

Hello,

Yes, sorry for my late answer

values.yaml:


  .........

 tls:
   enabled: true

   image:
     registry: docker.io
     repository: bitnami/nginx
     tag: 1.19.5-debian-10-r19
     pullPolicy: IfNotPresent

  .........

 externalAccess:
   enabled: true

   autoDiscovery:
     enabled: true

     image:
       registry: docker.io
       repository: bitnami/kubectl
       tag: 1.18.13-debian-10-r5


templates/statefulset:

    ........

    DNS.6 = mongo0.customDomain
    DNS.7 = mongo1.customDomain
    DNS.8 = mongo2.customDomain
    #Or just DNS.6 = *.customDomain

    ........

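For reference, with SANs like those a mongo shell connection over TLS would look roughly like this (hostnames, password and CA file path are placeholders):

    # hostnames, password and CA path are placeholders
    mongo "mongodb://root:<password>@mongo0.customDomain:27017,mongo1.customDomain:27017,mongo2.customDomain:27017/admin?replicaSet=rs0" \
      --tls --tlsCAFile ./ca.crt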

javsalgar commented 3 years ago

Hi,

Just my 2 cents. I tried it on my side and was unable to reproduce the issue. Let's see if @percenuage is able to reproduce it.

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.