
[mongodb-sharded] Ability to accept configsvr primary host string with multiple host names #7981

Closed sudheersagi closed 2 years ago

sudheersagi commented 2 years ago

Which chart: mongodb-sharded, version 3.9.14

Is your feature request related to a problem? Please describe.

Our requirement is to deploy only mongos in our application cluster and connect to an external config server (outside the cluster, centralised for multiple services). In the chart's values.yaml, we are asked to provide the primary config server host (configsvr -> external -> host) to achieve the desired architecture. Our config server is a replica set (3 nodes: 1 primary and 2 secondaries), which is fail-safe: if the primary goes down, one of the secondaries is elected as the new primary.

If I define the primary host with a comma-separated connection string such as "replicatset1/cfg1-primary:27019,cfg2-sec:27019,cfg3-sec:27019", it throws the error "cannot resolve host".

It only allows defining a single host name ("host1-primary"), which fills my mongos.conf with "replicatset1/cfg1-primary:27019" under sharding -> configDB.

But if the primary host goes down and a secondary becomes primary, the host name changes. In that case, do I need to re-deploy mongos with the updated config primary host name every time the primary node goes down?

Please let us know if this format is already supported by the chart in some other way.

Note: this is our first time working with sharded MongoDB. Please point out anything I have missed. Thanks for your help.

Describe the solution you'd like

Existing code:

mongodb_sharded_set_cfg_server_host_conf() {
    local -r conf_file_path="${1:-$MONGODB_MONGOS_CONF_FILE}"
    local -r conf_file_name="${conf_file_path#"$MONGODB_CONF_DIR"}"

    # Only rewrite configDB when the config file is managed by the container
    if ! mongodb_is_file_external "$conf_file_name"; then
        # Note: only a single host ($MONGODB_CFG_PRIMARY_HOST) is ever written here
        mongodb_config_apply_regex "configDB:.*" "configDB: $MONGODB_CFG_REPLICA_SET_NAME/$MONGODB_CFG_PRIMARY_HOST:$MONGODB_PORT_NUMBER" "$conf_file_path"
    else
        debug "$conf_file_name mounted. Skipping setting config server host"
    fi
}
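One possible direction (a sketch only, not the container's current behavior): if `MONGODB_CFG_PRIMARY_HOST` were allowed to carry a comma-separated host list, a helper like the hypothetical function below could append the default port to each host that lacks one and build the full configDB value:

```shell
# Sketch: build a "replicaSet/host1:port,host2:port,..." configDB string
# from a comma-separated host list. Hosts that already carry an explicit
# ":port" are left untouched. Function name and the multi-host semantics
# of the second argument are assumptions, not existing container code.
mongodb_sharded_build_cfg_server_hosts() {
    local -r replica_set="$1"
    local -r hosts_csv="$2"
    local -r port="$3"
    local hosts host result=""
    IFS=',' read -r -a hosts <<< "$hosts_csv"
    for host in "${hosts[@]}"; do
        # Append the default port only when the host has none
        [[ "$host" == *:* ]] || host="${host}:${port}"
        result="${result:+${result},}${host}"
    done
    echo "${replica_set}/${result}"
}
```

For example, `mongodb_sharded_build_cfg_server_hosts cfgrs0 "cfg1,cfg2:27020,cfg3" 27019` would print `cfgrs0/cfg1:27019,cfg2:27020,cfg3:27019`, and the existing `mongodb_config_apply_regex` call could then substitute this value instead of the single-host string.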

I think the above code should be changed to also support the format given in the official MongoDB documentation:

sharding:
  configDB: <configReplSetName>/cfg1.example.net:27019, cfg2.example.net:27019,...

Example: currently mongos.conf shows

sharding:
  configDB: relicatset1/cfg1-primary:27019

Expecting it to support:

sharding:
  configDB: relicatset1/cfg1-primary:27019,cfg2-sec:27019,cfg3-sec:27019

Describe alternatives you've considered

A clear and concise description of any alternative solutions or features you've considered.

Additional context

I'm running this on AWS EKS.

helm version

version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.16"}

Kubectl version

Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.13-eks-8df270", GitCommit:"8df2700a72a2598fa3a67c05126fa158fd839620", GitTreeState:"clean", BuildDate:"2021-07-31T01:36:57Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
yilmi commented 2 years ago

@sudheersagi, I feel it is a duplicate of https://github.com/bitnami/charts/issues/7988. Could you please confirm?

Thanks

sudheersagi commented 2 years ago

Thanks @yilmi for looking into it.

The issue is that even if I define configDB with comma-separated replica hosts:

sharding:
      configDB: cfgrs0/configsvr0.example.com:27019,configsvr2.example.com:27019,configsvr1.example.com:27019

the config in /opt/bitnami/mongodb/conf/mongos.conf is still updated with a single host:

sharding:
  configDB: cfgrs0/configsvr0.example.com:27017

Does it take care of the primary-node failover scenario even with a single host (i.e. when configsvr0.example.com is down)?

yilmi commented 2 years ago

Ok, thanks for the confirmation; this sounds like a leftover volume from the Helm install. As mentioned in https://github.com/bitnami/charts/issues/7988#issuecomment-956224016, this is a very common issue. Could you do a kubectl delete pvc and redeploy using only the supported values?
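A minimal sketch of that clean-up, assuming a release named `mongodb-sharded` in the current namespace (the release name, label selector, and values file are illustrative; adjust them to your deployment). These commands require a live cluster:

```shell
# Uninstall the release; PVCs created from volumeClaimTemplates survive this
helm uninstall mongodb-sharded

# Remove the stale data volumes left over from the previous install
kubectl delete pvc -l app.kubernetes.io/instance=mongodb-sharded

# Redeploy using only the supported values
helm install mongodb-sharded bitnami/mongodb-sharded -f values.yaml
```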

yilmi commented 2 years ago

Hi @sudheersagi, I missed it in your initial description, but this feature request would be for the container repo here - https://github.com/bitnami/bitnami-docker-mongodb-sharded

You can perhaps open a feature request for multiple configsvr hosts and also for a custom configsvr port. The reason you can't set the port to 27019 is that it is hardcoded in the container environment variable MONGODB_DEFAULT_PORT_NUMBER.

I'm closing this one for now as this repo is for the charts part, thanks!

yilmi commented 2 years ago

I'm keeping this issue open as there will be some changes required on the chart as well.

sudheersagi commented 2 years ago

Hi @yilmi ,

Regarding "You can perhaps open a feature request for multiple configsvr and also, for a custom configsvr port": do I have to open a separate feature request on the container repo (https://github.com/bitnami/bitnami-docker-mongodb-sharded), or is this same ticket enough to fulfil the request?

carrodher commented 2 years ago

Hi, if you are going to create a Pull Request (PR), it should be created in the repo where the code lives. In this case, if the changes affect the container, the PR should be created at https://github.com/bitnami/bitnami-docker-mongodb-sharded; if the changes affect the Helm chart, it can be created in this repository.

sudheersagi commented 2 years ago

Thanks @carrodher, it requires both container and chart changes; I will open a feature request at the container repo.

carrodher commented 2 years ago

Unfortunately, this issue was created a long time ago, and although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. It's not a technical reason but one related to capacity, since we're a small team.

That being said, contributions via PRs are more than welcome in both repositories (containers and charts), in case you would like to contribute.

Since then, there have been several releases of this asset, and it's possible the issue has gone away as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate.