bitnami / charts

Bitnami Helm Charts

externalAccess fails without warning, connecting within cluster is confusing #4925

Closed dave-yotta closed 3 years ago

dave-yotta commented 3 years ago

Edit 2: Ok - I've dug into how statefulsets interact with services (https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id) and it comes down to each member being available at <pod-name>.<service-name>.default.svc.cluster.local within the cluster. I've turned external access off - enabling it requires specifying a hostname/IP that is valid for every replica member and reachable via the external access route; by default it assumes the public internet IP is unique per replica member and accessible, which isn't true unless those pods have been assigned public internet IPs. Maybe some docs? Much stress - sorry once again though!
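
For reference, connecting from inside the cluster then just means using those stable network IDs. A minimal sketch, assuming the fullnameOverride and headless service from the values file further down (local-mongo / local-mongo-headless) and the default namespace:

# throwaway client pod; prints 1 once the replica set is reachable and healthy
kubectl run mongo-client --rm -it --restart=Never --image=mongo:4.2 -- \
  mongo "mongodb://local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017/?replicaSet=rs1" \
  --eval "rs.status().ok"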

Edit: Changed title - sorry guys, I don't see how this can work and it's causing me a lot of pain editing and debugging your images and charts:

Which chart: mongodb 10.3.1

Describe the bug: It seems you don't initialise the replica set, so I do this myself. On CI where we use microk8s - and we're doing a microk8s.reset --destroy-storage even though it's supposed to be a fresh machine 🙄 (possibly dirty machines from Azure DevOps) - it's failing because rs.status() says it has no config, but rs.initiate() reports "Already Initialised" - it did work "the first time" I ran this...

To Reproduce - steps to reproduce the behavior (a rough command sketch follows the list):

  1. Use microk8s, install chart and initialise replicaset once
  2. uninstall chart and reset microk8s - confirm no pv/pvc
  3. install chart again, cannot initialise replica set anymore as described
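
As commands, the cycle looks roughly like this (release name, chart path and the init Job manifest name are illustrative; the Job itself is shown further down):

# 1. fresh install, then initialise the replica set once
helm install local-release ./our-chart           # pulls in bitnami/mongodb 10.3.1 as a subchart
kubectl apply -f initialize-replica-set.yaml     # the Job shown further down

# 2. tear everything down and confirm no leftover volumes
helm uninstall local-release
microk8s.reset --destroy-storage
kubectl get pv,pvc                               # expect "No resources found"

# 3. install again - initialising the replica set now fails as described
helm install local-release ./our-chart
kubectl apply -f initialize-replica-set.yaml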

Expected behavior: Should be able to initialise each time without issue.

Version of Helm and Kubernetes:

version.BuildInfo{Version:"v3.4.2", GitCommit:"23dd3af5e19a02d4f4baa5b2f242645a1a3af629", GitTreeState:"clean", GoVersion:"go1.14.13"}
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"ffd68360997854d442e2ad2f40b099f5198b6471", GitTreeState:"clean", BuildDate:"2020-11-18T13:35:49Z", GoVersion:"go1.15.0", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.0-37+7ab055a22f5440", GitCommit:"7ab055a22f5440dbdcd1d41095b7db95d98fc1c3", GitTreeState:"clean", BuildDate:"2020-12-10T18:59:05Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
carrodher commented 3 years ago

In order to reproduce the issue on our side, can you share the steps you are following when you say:

Seems you don't initialise the replicaset, so I do this myself

In order to use replicaset as the architecture, I'm installing the chart by running

$ helm install my-mongo --set architecture="replicaset" bitnami/mongodb

then I am able to install it again without any issues.

Are you able to reproduce the issue if the second installation is done with a different name or even in a different namespace?
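
For example, something along these lines (release and namespace names are just placeholders):

kubectl create namespace mongo-test-2
helm install my-mongo-2 --namespace mongo-test-2 --set architecture="replicaset" bitnami/mongodb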

dave-yotta commented 3 years ago

Sure, in more detail: we've got the mongodb chart as a subchart/dependency of ours, and we're configuring it in our values file like this:

# bitnami mongodb values
mongodb:
  fullnameOverride: local-mongo # from bitnami common
  architecture: replicaset
  replicaCount: 3
  replicaSetName: rs1
  arbiter:
    enabled: false
  externalAccess:
    enabled: true
    service:
      type: NodePort
      nodePorts:
        - 30001
        - 30002
        - 30003
  image:
    tag: 4.2
  auth:
    enabled: false
  resources:
    limits:
      cpu: 100m
      memory: 400Mi
    requests:
      cpu: 100m
      memory: 400Mi

We also kick off a job that runs a script to initialise the replica set - I had observed that, as far as I could tell, there was no rs.initiate being done by bitnami:

apiVersion: batch/v1
kind: Job
metadata:
  name: initialize-replica-set
spec:
  template:
    metadata:
      labels:
        app: initialize-replica-set
    spec:
      containers:
      - name: initialize-replica-set
        image: mongo:4.2
        command: [/bin/bash, -c]
        args: ["
            for host in 0 1 2; do
                stdbuf -o0 printf \"checking for member $host\";
                while ! mongo local-mongo-$host-external:27017 --eval \"db.version()\" > /dev/null 2>&1; do
                    sleep 1;
                    stdbuf -o0 printf .;
                done;
                echo ok;
            done;

            mongo local-mongo-0-external:27017 --eval \"rs.status()\";
            mongo local-mongo-1-external:27017 --eval \"rs.status()\";
            mongo local-mongo-2-external:27017 --eval \"rs.status()\";

            echo running mongo local-mongo-0-external:27017 --eval \"rs.initiate({_id: \\\"rs1\\\", members: [ { _id: 0, host: \\\"local-mongo-0-external\\\"}, { _id: 1, host: \\\"local-mongo-1-external\\\"}, { _id: 2, host: \\\"local-mongo-2-external\\\"} ] })\";
            mongo local-mongo-0-external:27017 --eval \"rs.initiate({_id: \\\"rs1\\\", members: [ { _id: 0, host: \\\"local-mongo-0-external\\\"}, { _id: 1, host: \\\"local-mongo-1-external\\\"}, { _id: 2, host: \\\"local-mongo-2-external\\\"} ] })\";

            mongo local-mongo-0-external:27017 --eval \"rs.status()\";
            mongo local-mongo-1-external:27017 --eval \"rs.status()\";
            mongo local-mongo-2-external:27017 --eval \"rs.status()\";

            stdbuf -o0 printf \"checking for replication OK\";
            while ! mongo local-mongo-0-external:27017 --eval \"'SETOKVALUE='+ rs.status().ok\" | grep \"SETOKVALUE=1\" > /dev/null 2>&1; do
                sleep 1;
                stdbuf -o0 printf .;
            done;
            echo ok;

            mongo local-mongo-0-external:27017 --eval \"db.serverStatus()\";
            mongo local-mongo-0-external:27017 --eval \"rs.status()\";
        "]
      restartPolicy: Never
  backoffLimit: 0

Can you confirm whether bitnami does in fact run an rs.initiate command? A race condition here might explain what's going on - but as I've said, running this chart locally I've never seen the replication initialised, even after waiting a long time.

Yes I can try randomizing the name/namespace - will let you know.

carrodher commented 3 years ago

Thanks for the info.

Can you confirm whether bitnami does in fact run an rs.initiate command? A race condition here might explain what's going on - but as I've said, running this chart locally I've never seen the replication initialised, even after waiting a long time.

Yes, there is logic to do the initialization on the primary node.

In the container repo, you can find the following function:

########################
# Get if primary node is initialized
# Globals:
#   MONGODB_*
# Arguments:
#   $1 - node
# Returns:
#   None
#########################
mongodb_is_primary_node_initiated() {
    local node="${1:?node is required}"
    local result
    result=$(mongodb_execute "root" "$MONGODB_ROOT_PASSWORD" "admin" "127.0.0.1" "$MONGODB_PORT_NUMBER" <<EOF
rs.initiate({"_id":"$MONGODB_REPLICA_SET_NAME", "members":[{"_id":0,"host":"$node:$MONGODB_PORT_NUMBER","priority":5}]})
EOF
)

    # Code 23 is considered OK
    # It indicates that the node is already initialized
    if grep -q "\"code\" : 23" <<< "$result"; then
        warn "Node already initialized."
        return 0
    fi
    grep -q "\"ok\" : 1" <<< "$result"
}

That is executed here:

########################
# Configure primary node
# Globals:
#   MONGODB_*
# Arguments:
#   $1 - node
# Returns:
#   None
#########################
mongodb_configure_primary() {
    local -r node="${1:?node is required}"

    info "Configuring MongoDB primary node"
    wait-for-port --timeout 360 "$MONGODB_PORT_NUMBER"

    if ! retry_while "mongodb_is_primary_node_initiated $node" "$MONGODB_MAX_TIMEOUT"; then
        error "MongoDB primary node failed to get configured"
        exit 1
    fi
}

In turn, this is only executed when the node is the primary, and the initialization itself only runs under certain conditions: 1) $MONGODB_DATA_DIR/db is empty and 2) the replicaset architecture is enabled (see the corresponding functions in the container repo).

In the end, this chain of functions starts in setup.sh, which should always be executed as part of entrypoint.sh.

Can you check whether any of the previous conditions are not being met in your scenario?
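
A rough way to inspect those conditions from outside the container, using the data path and MONGODB_* variables already visible in this thread (adjust the pod name for your release):

# empty output here means the "deploy from scratch" path should be taken
kubectl exec local-mongo-0 -- ls -A /bitnami/mongodb/data/db
# replica-set-related environment as seen by the container
kubectl exec local-mongo-0 -- env | grep -E '^MONGODB_(REPLICA_SET|DATA_DIR|ADVERTISED)'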

dave-yotta commented 3 years ago

Perhaps I'm less familiar with the rs.initiate call - but it seems you are not adding the secondaries to the members array in your call; I don't believe that will work?

FYI here is the slightly perplexing "I'm not configured - I'm already configured - I'm not configured" timeline I'm seeing:

2021-01-08T17:17:25.1872271Z checking for member 0.................................ok
2021-01-08T17:17:25.1873096Z checking for member 1...............................ok
2021-01-08T17:17:25.1873471Z checking for member 2.............................ok
2021-01-08T17:17:25.1873759Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1874898Z connecting to: mongodb://local-mongo-0-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1876953Z Implicit session: session { "id" : UUID("bced6a5b-d4b8-4b47-8af2-0901a566de40") }
2021-01-08T17:17:25.1877422Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1877682Z {
2021-01-08T17:17:25.1877922Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1878203Z    "ok" : 0,
2021-01-08T17:17:25.1878495Z    "errmsg" : "no replset config has been received",
2021-01-08T17:17:25.1878807Z    "code" : 94,
2021-01-08T17:17:25.1879084Z    "codeName" : "NotYetInitialized",
2021-01-08T17:17:25.1879360Z    "$clusterTime" : {
2021-01-08T17:17:25.1879653Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1879931Z        "signature" : {
2021-01-08T17:17:25.1880256Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1880593Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1880820Z        }
2021-01-08T17:17:25.1881010Z    }
2021-01-08T17:17:25.1881180Z }
2021-01-08T17:17:25.1881413Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1882191Z connecting to: mongodb://local-mongo-1-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1882980Z Implicit session: session { "id" : UUID("71986e7c-2493-4fa8-b0c3-994607c4c5b6") }
2021-01-08T17:17:25.1883384Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1883609Z {
2021-01-08T17:17:25.1883856Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1884340Z    "ok" : 0,
2021-01-08T17:17:25.1884623Z    "errmsg" : "no replset config has been received",
2021-01-08T17:17:25.1884929Z    "code" : 94,
2021-01-08T17:17:25.1885189Z    "codeName" : "NotYetInitialized",
2021-01-08T17:17:25.1885486Z    "$clusterTime" : {
2021-01-08T17:17:25.1885783Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1886051Z        "signature" : {
2021-01-08T17:17:25.1886377Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1886697Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1886942Z        }
2021-01-08T17:17:25.1887117Z    }
2021-01-08T17:17:25.1888352Z }
2021-01-08T17:17:25.1888600Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1889374Z connecting to: mongodb://local-mongo-2-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1890174Z Implicit session: session { "id" : UUID("3eaab702-b973-483c-bf63-95ed3427a93e") }
2021-01-08T17:17:25.1890576Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1890837Z {
2021-01-08T17:17:25.1891089Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1891351Z    "ok" : 0,
2021-01-08T17:17:25.1891648Z    "errmsg" : "no replset config has been received",
2021-01-08T17:17:25.1891938Z    "code" : 94,
2021-01-08T17:17:25.1892217Z    "codeName" : "NotYetInitialized",
2021-01-08T17:17:25.1892636Z    "$clusterTime" : {
2021-01-08T17:17:25.1892913Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1893206Z        "signature" : {
2021-01-08T17:17:25.1893513Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1893851Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1894092Z        }
2021-01-08T17:17:25.1894268Z    }
2021-01-08T17:17:25.1894457Z }
2021-01-08T17:17:25.1895476Z running mongo local-mongo-0-external:27017 --eval rs.initiate({_id: "rs1", members: [ { _id: 0, host: "local-mongo-0-external"}, { _id: 1, host: "local-mongo-1-external"}, { _id: 2, host: "local-mongo-2-external"} ] })
2021-01-08T17:17:25.1896295Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1898744Z connecting to: mongodb://local-mongo-0-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1899655Z Implicit session: session { "id" : UUID("9323ea10-addf-430e-b8c5-6f8f7de8057e") }
2021-01-08T17:17:25.1900065Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1900342Z {
2021-01-08T17:17:25.1900599Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1900890Z    "ok" : 0,
2021-01-08T17:17:25.1901163Z    "errmsg" : "already initialized",
2021-01-08T17:17:25.1901458Z    "code" : 23,
2021-01-08T17:17:25.1901907Z    "codeName" : "AlreadyInitialized",
2021-01-08T17:17:25.1902212Z    "$clusterTime" : {
2021-01-08T17:17:25.1902502Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1902802Z        "signature" : {
2021-01-08T17:17:25.1903260Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1903628Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1903890Z        }
2021-01-08T17:17:25.1904096Z    }
2021-01-08T17:17:25.1904311Z }
2021-01-08T17:17:25.1904549Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1905344Z connecting to: mongodb://local-mongo-0-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1905987Z Implicit session: session { "id" : UUID("5ef3b881-a454-4919-8f7d-6928e5060c17") }
2021-01-08T17:17:25.1906335Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1906594Z {
2021-01-08T17:17:25.1906811Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1907065Z    "ok" : 0,
2021-01-08T17:17:25.1907494Z    "errmsg" : "no replset config has been received",
2021-01-08T17:17:25.1907796Z    "code" : 94,
2021-01-08T17:17:25.1908084Z    "codeName" : "NotYetInitialized",
2021-01-08T17:17:25.1908371Z    "$clusterTime" : {
2021-01-08T17:17:25.1908675Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1908974Z        "signature" : {
2021-01-08T17:17:25.1909490Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1909878Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1910159Z        }
2021-01-08T17:17:25.1910409Z    }
2021-01-08T17:17:25.1910775Z }
2021-01-08T17:17:25.1911060Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1911814Z connecting to: mongodb://local-mongo-1-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1912634Z Implicit session: session { "id" : UUID("297b9a57-12fe-48af-b4a1-5ad3fff60161") }
2021-01-08T17:17:25.1913285Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1913596Z {
2021-01-08T17:17:25.1914822Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1915229Z    "ok" : 0,
2021-01-08T17:17:25.1915592Z    "errmsg" : "no replset config has been received",
2021-01-08T17:17:25.1915978Z    "code" : 94,
2021-01-08T17:17:25.1916314Z    "codeName" : "NotYetInitialized",
2021-01-08T17:17:25.1916685Z    "$clusterTime" : {
2021-01-08T17:17:25.1917059Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1917407Z        "signature" : {
2021-01-08T17:17:25.1917832Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1918230Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1918555Z        }
2021-01-08T17:17:25.1918824Z    }
2021-01-08T17:17:25.1919072Z }
2021-01-08T17:17:25.1920313Z MongoDB shell version v4.2.11
2021-01-08T17:17:25.1921380Z connecting to: mongodb://local-mongo-2-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-08T17:17:25.1922605Z Implicit session: session { "id" : UUID("e3b9477f-8158-4d45-8fe5-ca6c10eca4af") }
2021-01-08T17:17:25.1923147Z MongoDB server version: 4.2.11
2021-01-08T17:17:25.1923461Z {
2021-01-08T17:17:25.1923793Z    "operationTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1924133Z    "ok" : 0,
2021-01-08T17:17:25.1924597Z    "errmsg" : "no replset config has been received",
2021-01-08T17:17:25.1924980Z    "code" : 94,
2021-01-08T17:17:25.1925294Z    "codeName" : "NotYetInitialized",
2021-01-08T17:17:25.1925637Z    "$clusterTime" : {
2021-01-08T17:17:25.1925966Z        "clusterTime" : Timestamp(0, 0),
2021-01-08T17:17:25.1926306Z        "signature" : {
2021-01-08T17:17:25.1928621Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-08T17:17:25.1929216Z            "keyId" : NumberLong(0)
2021-01-08T17:17:25.1929546Z        }
2021-01-08T17:17:25.1929791Z    }
2021-01-08T17:17:25.1930055Z }

But perhaps this is due to two configs being sent into the cluster. I'll backtrack and disable the init I'm doing here and see if the cluster becomes available.

Note that we end up stuck in the loop that checks rs.status().ok, which did not return 1 for 15 minutes.

Your questions:

dave-yotta commented 3 years ago

Ok - yes, the replica set does come up as you've described - looks like you're adding members a different way here: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/4.4/debian-10/rootfs/opt/bitnami/scripts/libmongodb.sh#L593 (so I don't know what I was on about - never mind)
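
In mongo-shell terms that pattern is roughly this two-step sketch (not the literal Bitnami script, and the hostnames here are the internal headless-service names you'd have with external access disabled):

# 1. initiate the set on the primary, with the primary as the only member
mongo --host local-mongo-0.local-mongo-headless --eval 'rs.initiate({_id: "rs1", members: [{_id: 0, host: "local-mongo-0.local-mongo-headless:27017", priority: 5}]})'
# 2. add each secondary to the set through the primary
mongo --host local-mongo-0.local-mongo-headless --eval 'rs.add("local-mongo-1.local-mongo-headless:27017")'
mongo --host local-mongo-0.local-mongo-headless --eval 'rs.add("local-mongo-2.local-mongo-headless:27017")'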

I'll just see what's happening if I do this on CI now. Is there an init-container I should follow the logs of to see what's going on with the data-dir?
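
A generic way to check whether a pod has init containers and pull their logs (the container name is discovered rather than assumed):

kubectl get pod local-mongo-0 -o jsonpath='{.spec.initContainers[*].name}{"\n"}'
kubectl logs local-mongo-0 -c <init-container-name>
kubectl logs local-mongo-0 -c <init-container-name> --previous   # if it restarted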

dave-yotta commented 3 years ago

Ok, it's not starting after waiting 5 minutes on the CI system here (kubectl get all and logs of all pods below - missing the init container logs though):

2021-01-11T18:14:34.7875751Z NAME                                          READY   STATUS       RESTARTS   AGE
2021-01-11T18:14:34.7877359Z pod/local-redis-deployment-7dc456d96d-tsbtp   1/1     Running      0          7m4s
2021-01-11T18:14:34.7878239Z pod/local-mongo-0                             1/1     Running      0          7m3s
2021-01-11T18:14:34.7879392Z pod/local-mongo-1                             1/1     Running      1          5m51s
2021-01-11T18:14:34.7880208Z pod/local-mongo-2                             1/1     Running      1          5m26s
2021-01-11T18:14:34.7886266Z pod/run-tests-c7b2r                           0/1     Init:Error   0          7m3s
2021-01-11T18:14:34.7887405Z 
2021-01-11T18:14:34.7888834Z NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
2021-01-11T18:14:34.7889630Z service/kubernetes               ClusterIP   10.152.183.1     <none>        443/TCP           16m
2021-01-11T18:14:34.7890536Z service/local-mongo-headless     ClusterIP   None             <none>        27017/TCP         7m4s
2021-01-11T18:14:34.7892314Z service/local-redis-service      NodePort    10.152.183.38    <none>        30004:30004/TCP   7m4s
2021-01-11T18:14:34.7894871Z service/local-mongo-0-external   NodePort    10.152.183.208   <none>        27017:30001/TCP   7m4s
2021-01-11T18:14:34.7896160Z service/local-mongo-2-external   NodePort    10.152.183.86    <none>        27017:30003/TCP   7m4s
2021-01-11T18:14:34.7897335Z service/local-mongo-1-external   NodePort    10.152.183.232   <none>        27017:30002/TCP   7m4s
2021-01-11T18:14:34.7899060Z 
2021-01-11T18:14:34.7900468Z NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
2021-01-11T18:14:34.7902424Z deployment.apps/local-redis-deployment   1/1     1            1           7m4s
2021-01-11T18:14:34.7902973Z 
2021-01-11T18:14:34.7903461Z NAME                                                DESIRED   CURRENT   READY   AGE
2021-01-11T18:14:34.7904250Z replicaset.apps/local-redis-deployment-7dc456d96d   1         1         1       7m4s
2021-01-11T18:14:34.7904749Z 
2021-01-11T18:14:34.7905155Z NAME                           READY   AGE
2021-01-11T18:14:34.7905949Z statefulset.apps/local-mongo   3/3     7m4s
2021-01-11T18:14:34.7906532Z 
2021-01-11T18:14:34.7906898Z NAME                  COMPLETIONS   DURATION   AGE
2021-01-11T18:14:34.7908686Z job.batch/run-tests   0/1           7m3s       7m3s
2021-01-11T18:14:34.9653993Z logs of pod/local-redis-deployment-7dc456d96d-tsbtp
2021-01-11T18:14:35.1276457Z 1:C 11 Jan 2021 18:07:37.204 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2021-01-11T18:14:35.1277441Z 1:C 11 Jan 2021 18:07:37.204 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
2021-01-11T18:14:35.1278652Z 1:C 11 Jan 2021 18:07:37.204 # Configuration loaded
2021-01-11T18:14:35.1280789Z 1:M 11 Jan 2021 18:07:37.205 * Running mode=standalone, port=6379.
2021-01-11T18:14:35.1281866Z 1:M 11 Jan 2021 18:07:37.206 # Server initialized
2021-01-11T18:14:35.1284345Z 1:M 11 Jan 2021 18:07:37.206 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2021-01-11T18:14:35.1285786Z 1:M 11 Jan 2021 18:07:37.206 * Ready to accept connections
2021-01-11T18:14:35.1327678Z logs of pod/local-mongo-0
2021-01-11T18:14:35.2920523Z Advertised Hostname: 20.58.9.52
2021-01-11T18:14:35.2921562Z Pod name matches initial primary pod name, configuring node as a primary
2021-01-11T18:14:35.2922675Z mongodb 18:08:27.40 
2021-01-11T18:14:35.2924309Z mongodb 18:08:27.40 Welcome to the Bitnami mongodb container
2021-01-11T18:14:35.2925849Z mongodb 18:08:27.40 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-11T18:14:35.2927263Z mongodb 18:08:27.40 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-11T18:14:35.2928482Z mongodb 18:08:27.40 
2021-01-11T18:14:35.2929591Z mongodb 18:08:27.41 INFO  ==> ** Starting MongoDB setup **
2021-01-11T18:14:35.2931030Z mongodb 18:08:27.60 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-11T18:14:35.2932291Z mongodb 18:08:27.61 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-11T18:14:35.2934018Z mongodb 18:08:27.71 INFO  ==> Initializing MongoDB...
2021-01-11T18:14:35.2935415Z mongodb 18:08:27.91 INFO  ==> Deploying MongoDB from scratch...
2021-01-11T18:14:35.2936516Z mongodb 18:08:41.82 INFO  ==> Creating users...
2021-01-11T18:14:35.2938014Z mongodb 18:08:41.82 INFO  ==> Users created
2021-01-11T18:14:35.2939609Z mongodb 18:08:42.71 INFO  ==> Configuring MongoDB replica set...
2021-01-11T18:14:35.2940887Z mongodb 18:08:42.71 INFO  ==> Stopping MongoDB...
2021-01-11T18:14:35.2942018Z mongodb 18:09:04.91 INFO  ==> Configuring MongoDB primary node
2021-01-11T18:14:35.2962392Z logs of pod/local-mongo-1
2021-01-11T18:14:35.4486694Z Advertised Hostname: 20.58.9.52
2021-01-11T18:14:35.4488649Z Pod name doesn't match initial primary pod name, configuring node as a secondary
2021-01-11T18:14:35.4489576Z mongodb 18:13:07.00 
2021-01-11T18:14:35.4490387Z mongodb 18:13:07.00 Welcome to the Bitnami mongodb container
2021-01-11T18:14:35.4491445Z mongodb 18:13:07.00 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-11T18:14:35.4492975Z mongodb 18:13:07.00 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-11T18:14:35.4494593Z mongodb 18:13:07.01 
2021-01-11T18:14:35.4495745Z mongodb 18:13:07.10 INFO  ==> ** Starting MongoDB setup **
2021-01-11T18:14:35.4496819Z mongodb 18:13:07.20 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-11T18:14:35.4497716Z mongodb 18:13:07.21 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-11T18:14:35.4501706Z mongodb 18:13:07.40 INFO  ==> Initializing MongoDB...
2021-01-11T18:14:35.4502864Z mongodb 18:13:07.60 INFO  ==> Deploying MongoDB with persisted data...
2021-01-11T18:14:35.4503724Z mongodb 18:13:07.81 INFO  ==> ** MongoDB setup finished! **
2021-01-11T18:14:35.4504368Z 
2021-01-11T18:14:35.4504910Z mongodb 18:13:08.00 INFO  ==> ** Starting MongoDB **
2021-01-11T18:14:35.4505670Z 2021-01-11T18:13:08.213+0000 I  CONTROL  [main] ***** SERVER RESTARTED *****
2021-01-11T18:14:35.4506564Z 2021-01-11T18:13:08.214+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2021-01-11T18:14:35.4507448Z 2021-01-11T18:13:08.304+0000 W  ASIO     [main] No TransportLayer configured during NetworkInterface startup
2021-01-11T18:14:35.4508380Z 2021-01-11T18:13:08.307+0000 I  CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/bitnami/mongodb/data/db 64-bit host=local-mongo-1
2021-01-11T18:14:35.4509270Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] db version v4.2.11
2021-01-11T18:14:35.4510119Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] git version: ea38428f0c6742c7c2c7f677e73d79e17a2aab96
2021-01-11T18:14:35.4510969Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019
2021-01-11T18:14:35.4511784Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] allocator: tcmalloc
2021-01-11T18:14:35.4512574Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] modules: none
2021-01-11T18:14:35.4513336Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] build environment:
2021-01-11T18:14:35.4514098Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten]     distmod: debian10
2021-01-11T18:14:35.4514874Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten]     distarch: x86_64
2021-01-11T18:14:35.4515653Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten]     target_arch: x86_64
2021-01-11T18:14:35.4516541Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] 400 MB of memory available to the process out of 6954 MB total system memory
2021-01-11T18:14:35.4518890Z 2021-01-11T18:13:08.308+0000 I  CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIp: "*", ipv6: false, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, replication: { enableMajorityReadConcern: true, replSetName: "rs1" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/bitnami/mongodb/data/db", directoryPerDB: false, journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: "/opt/bitnami/mongodb/logs/mongodb.log", quiet: false, verbosity: 0 } }
2021-01-11T18:14:35.4522100Z 2021-01-11T18:13:08.311+0000 I  STORAGE  [initandlisten] Detected data files in /bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2021-01-11T18:14:35.4523659Z 2021-01-11T18:13:08.311+0000 I  STORAGE  [initandlisten] 
2021-01-11T18:14:35.4525099Z 2021-01-11T18:13:08.311+0000 I  STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2021-01-11T18:14:35.4527249Z 2021-01-11T18:13:08.311+0000 I  STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2021-01-11T18:14:35.4529347Z 2021-01-11T18:13:08.311+0000 I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2021-01-11T18:14:35.4531726Z 2021-01-11T18:13:15.511+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388795:511697][1:0x7ff6ef8d1d40], txn-recover: Recovering log 2 through 3
2021-01-11T18:14:35.4532884Z 2021-01-11T18:13:16.109+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388796:109223][1:0x7ff6ef8d1d40], txn-recover: Recovering log 3 through 3
2021-01-11T18:14:35.4534832Z 2021-01-11T18:13:17.108+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388797:108219][1:0x7ff6ef8d1d40], txn-recover: Main recovery loop: starting at 2/25344 to 3/256
2021-01-11T18:14:35.4536263Z 2021-01-11T18:13:19.005+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388799:5583][1:0x7ff6ef8d1d40], txn-recover: Recovering log 2 through 3
2021-01-11T18:14:35.4537625Z 2021-01-11T18:13:19.809+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388799:809439][1:0x7ff6ef8d1d40], txn-recover: Recovering log 3 through 3
2021-01-11T18:14:35.4539233Z 2021-01-11T18:13:20.409+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388800:409565][1:0x7ff6ef8d1d40], txn-recover: Set global recovery timestamp: (0, 0)
2021-01-11T18:14:35.4540251Z 2021-01-11T18:13:20.506+0000 I  RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2021-01-11T18:14:35.4541329Z 2021-01-11T18:13:20.509+0000 I  STORAGE  [initandlisten] No table logging settings modifications are required for existing WiredTiger tables. Logging enabled? 0
2021-01-11T18:14:35.4542335Z 2021-01-11T18:13:20.512+0000 I  STORAGE  [initandlisten] Timestamp monitor starting
2021-01-11T18:14:35.4543056Z 2021-01-11T18:13:20.513+0000 I  CONTROL  [initandlisten] 
2021-01-11T18:14:35.4544208Z 2021-01-11T18:13:20.513+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-01-11T18:14:35.4545345Z 2021-01-11T18:13:20.513+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-01-11T18:14:35.4546106Z 2021-01-11T18:13:20.513+0000 I  CONTROL  [initandlisten] 
2021-01-11T18:14:35.4547142Z 2021-01-11T18:13:20.513+0000 I  CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 27673 processes, 65536 files. Number of processes should be at least 32768 : 0.5 times number of files.
2021-01-11T18:14:35.4548086Z 2021-01-11T18:13:20.513+0000 I  CONTROL  [initandlisten] 
2021-01-11T18:14:35.4548963Z 2021-01-11T18:13:20.514+0000 I  SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2021-01-11T18:14:35.4550045Z 2021-01-11T18:13:20.604+0000 I  STORAGE  [initandlisten] Flow Control is enabled on this deployment.
2021-01-11T18:14:35.4551181Z 2021-01-11T18:13:20.604+0000 I  SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2021-01-11T18:14:35.4552374Z 2021-01-11T18:13:20.604+0000 I  SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2021-01-11T18:14:35.4553526Z 2021-01-11T18:13:20.610+0000 I  SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2021-01-11T18:14:35.4555098Z 2021-01-11T18:13:20.610+0000 I  FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/bitnami/mongodb/data/db/diagnostic.data'
2021-01-11T18:14:35.4556163Z 2021-01-11T18:13:20.613+0000 I  SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2021-01-11T18:14:35.4557421Z 2021-01-11T18:13:20.613+0000 I  SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2021-01-11T18:14:35.4558441Z 2021-01-11T18:13:20.614+0000 I  REPL     [initandlisten] Did not find local initialized voted for document at startup.
2021-01-11T18:14:35.4592998Z 2021-01-11T18:13:20.616+0000 I  REPL     [initandlisten] Rollback ID is 1
2021-01-11T18:14:35.4595259Z 2021-01-11T18:13:20.616+0000 I  REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2021-01-11T18:14:35.4596748Z 2021-01-11T18:13:20.702+0000 I  CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-01-11T18:14:35.4597850Z 2021-01-11T18:13:20.703+0000 I  SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2021-01-11T18:14:35.4598883Z 2021-01-11T18:13:20.703+0000 I  CONTROL  [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2021-01-11T18:14:35.4600845Z 2021-01-11T18:13:20.703+0000 I  NETWORK  [listener] Listening on /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2021-01-11T18:14:35.4601987Z 2021-01-11T18:13:20.703+0000 I  NETWORK  [listener] Listening on 0.0.0.0
2021-01-11T18:14:35.4602953Z 2021-01-11T18:13:20.703+0000 I  NETWORK  [listener] waiting for connections on port 27017
2021-01-11T18:14:35.4604419Z 2021-01-11T18:13:21.001+0000 I  SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
2021-01-11T18:14:35.4605347Z 2021-01-11T18:13:21.202+0000 I  NETWORK  [listener] connection accepted from 10.1.72.198:36492 #1 (1 connection now open)
2021-01-11T18:14:35.4606853Z 2021-01-11T18:13:21.202+0000 I  NETWORK  [conn1] received client metadata from 10.1.72.198:36492 conn1: { driver: { name: "mongo-csharp-driver", version: "2.10.4.0" }, os: { type: "Linux", name: "Linux 5.4.0-1032-azure #33~18.04.1-Ubuntu SMP Tue Nov 17 11:40:52 UTC 2020", architecture: "x86_64", version: "5.4.0-1032-azure" }, platform: ".NET Core 3.1.10" }
2021-01-11T18:14:35.4608192Z 2021-01-11T18:13:27.908+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37600 #2 (2 connections now open)
2021-01-11T18:14:35.4609642Z 2021-01-11T18:13:27.908+0000 I  NETWORK  [conn2] received client metadata from 127.0.0.1:37600 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4610909Z 2021-01-11T18:13:28.006+0000 I  NETWORK  [conn2] end connection 127.0.0.1:37600 (1 connection now open)
2021-01-11T18:14:35.4611843Z 2021-01-11T18:13:36.813+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37722 #3 (2 connections now open)
2021-01-11T18:14:35.4613276Z 2021-01-11T18:13:36.816+0000 I  NETWORK  [conn3] received client metadata from 127.0.0.1:37722 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4614533Z 2021-01-11T18:13:36.907+0000 I  NETWORK  [conn3] end connection 127.0.0.1:37722 (1 connection now open)
2021-01-11T18:14:35.4615463Z 2021-01-11T18:13:37.811+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37732 #4 (2 connections now open)
2021-01-11T18:14:35.4616867Z 2021-01-11T18:13:37.903+0000 I  NETWORK  [conn4] received client metadata from 127.0.0.1:37732 conn4: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4618302Z 2021-01-11T18:13:37.909+0000 I  NETWORK  [conn4] end connection 127.0.0.1:37732 (1 connection now open)
2021-01-11T18:14:35.4619223Z 2021-01-11T18:13:46.809+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37788 #5 (2 connections now open)
2021-01-11T18:14:35.4620640Z 2021-01-11T18:13:46.903+0000 I  NETWORK  [conn5] received client metadata from 127.0.0.1:37788 conn5: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4622049Z 2021-01-11T18:13:46.908+0000 I  NETWORK  [conn5] end connection 127.0.0.1:37788 (1 connection now open)
2021-01-11T18:14:35.4623134Z 2021-01-11T18:13:47.806+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37800 #6 (2 connections now open)
2021-01-11T18:14:35.4624633Z 2021-01-11T18:13:47.806+0000 I  NETWORK  [conn6] received client metadata from 127.0.0.1:37800 conn6: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4626266Z 2021-01-11T18:13:47.902+0000 I  NETWORK  [conn6] end connection 127.0.0.1:37800 (1 connection now open)
2021-01-11T18:14:35.4627176Z 2021-01-11T18:13:56.810+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37860 #7 (2 connections now open)
2021-01-11T18:14:35.4628814Z 2021-01-11T18:13:56.810+0000 I  NETWORK  [conn7] received client metadata from 127.0.0.1:37860 conn7: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4630272Z 2021-01-11T18:13:56.908+0000 I  NETWORK  [conn7] end connection 127.0.0.1:37860 (1 connection now open)
2021-01-11T18:14:35.4631564Z 2021-01-11T18:13:57.905+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37870 #8 (2 connections now open)
2021-01-11T18:14:35.4633463Z 2021-01-11T18:13:57.906+0000 I  NETWORK  [conn8] received client metadata from 127.0.0.1:37870 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4635352Z 2021-01-11T18:13:57.911+0000 I  NETWORK  [conn8] end connection 127.0.0.1:37870 (1 connection now open)
2021-01-11T18:14:35.4636674Z 2021-01-11T18:14:03.194+0000 I  NETWORK  [conn1] end connection 10.1.72.198:36492 (0 connections now open)
2021-01-11T18:14:35.4637645Z 2021-01-11T18:14:06.906+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37930 #9 (1 connection now open)
2021-01-11T18:14:35.4644560Z 2021-01-11T18:14:06.907+0000 I  NETWORK  [conn9] received client metadata from 127.0.0.1:37930 conn9: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4645964Z 2021-01-11T18:14:07.005+0000 I  NETWORK  [conn9] end connection 127.0.0.1:37930 (0 connections now open)
2021-01-11T18:14:35.4646986Z 2021-01-11T18:14:07.811+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37934 #10 (1 connection now open)
2021-01-11T18:14:35.4648330Z 2021-01-11T18:14:07.811+0000 I  NETWORK  [conn10] received client metadata from 127.0.0.1:37934 conn10: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4685134Z 2021-01-11T18:14:07.908+0000 I  NETWORK  [conn10] end connection 127.0.0.1:37934 (0 connections now open)
2021-01-11T18:14:35.4686743Z 2021-01-11T18:14:16.907+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37996 #11 (1 connection now open)
2021-01-11T18:14:35.4688190Z 2021-01-11T18:14:16.908+0000 I  NETWORK  [conn11] received client metadata from 127.0.0.1:37996 conn11: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4689933Z 2021-01-11T18:14:17.005+0000 I  NETWORK  [conn11] end connection 127.0.0.1:37996 (0 connections now open)
2021-01-11T18:14:35.4690726Z 2021-01-11T18:14:17.907+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38002 #12 (1 connection now open)
2021-01-11T18:14:35.4692298Z 2021-01-11T18:14:17.908+0000 I  NETWORK  [conn12] received client metadata from 127.0.0.1:38002 conn12: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4693447Z 2021-01-11T18:14:18.008+0000 I  NETWORK  [conn12] end connection 127.0.0.1:38002 (0 connections now open)
2021-01-11T18:14:35.4694207Z 2021-01-11T18:14:26.903+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38070 #13 (1 connection now open)
2021-01-11T18:14:35.4695492Z 2021-01-11T18:14:26.903+0000 I  NETWORK  [conn13] received client metadata from 127.0.0.1:38070 conn13: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4696638Z 2021-01-11T18:14:26.924+0000 I  NETWORK  [conn13] end connection 127.0.0.1:38070 (0 connections now open)
2021-01-11T18:14:35.4697413Z 2021-01-11T18:14:27.905+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38084 #14 (1 connection now open)
2021-01-11T18:14:35.4698686Z 2021-01-11T18:14:27.906+0000 I  NETWORK  [conn14] received client metadata from 127.0.0.1:38084 conn14: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.4699806Z 2021-01-11T18:14:28.004+0000 I  NETWORK  [conn14] end connection 127.0.0.1:38084 (0 connections now open)
2021-01-11T18:14:35.4700356Z logs of pod/local-mongo-2
2021-01-11T18:14:35.6170900Z Advertised Hostname: 20.58.9.52
2021-01-11T18:14:35.6174252Z Pod name doesn't match initial primary pod name, configuring node as a secondary
2021-01-11T18:14:35.6175429Z mongodb 18:13:24.00 
2021-01-11T18:14:35.6176304Z mongodb 18:13:24.00 Welcome to the Bitnami mongodb container
2021-01-11T18:14:35.6177393Z mongodb 18:13:24.01 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-11T18:14:35.6178710Z mongodb 18:13:24.01 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-11T18:14:35.6179638Z mongodb 18:13:24.10 
2021-01-11T18:14:35.6180485Z mongodb 18:13:24.10 INFO  ==> ** Starting MongoDB setup **
2021-01-11T18:14:35.6183694Z mongodb 18:13:24.21 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-11T18:14:35.6184664Z mongodb 18:13:24.22 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-11T18:14:35.6185751Z mongodb 18:13:24.40 INFO  ==> Initializing MongoDB...
2021-01-11T18:14:35.6186429Z mongodb 18:13:24.61 INFO  ==> Deploying MongoDB with persisted data...
2021-01-11T18:14:35.6186757Z 
2021-01-11T18:14:35.6187216Z mongodb 18:13:24.81 INFO  ==> ** MongoDB setup finished! **
2021-01-11T18:14:35.6188240Z mongodb 18:13:25.00 INFO  ==> ** Starting MongoDB **
2021-01-11T18:14:35.6189151Z 2021-01-11T18:13:25.211+0000 I  CONTROL  [main] ***** SERVER RESTARTED *****
2021-01-11T18:14:35.6190546Z 2021-01-11T18:13:25.212+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2021-01-11T18:14:35.6191640Z 2021-01-11T18:13:25.303+0000 W  ASIO     [main] No TransportLayer configured during NetworkInterface startup
2021-01-11T18:14:35.6192591Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/bitnami/mongodb/data/db 64-bit host=local-mongo-2
2021-01-11T18:14:35.6193454Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] db version v4.2.11
2021-01-11T18:14:35.6194298Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] git version: ea38428f0c6742c7c2c7f677e73d79e17a2aab96
2021-01-11T18:14:35.6195179Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019
2021-01-11T18:14:35.6196069Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] allocator: tcmalloc
2021-01-11T18:14:35.6197033Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] modules: none
2021-01-11T18:14:35.6197983Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] build environment:
2021-01-11T18:14:35.6198827Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten]     distmod: debian10
2021-01-11T18:14:35.6200022Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten]     distarch: x86_64
2021-01-11T18:14:35.6201002Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten]     target_arch: x86_64
2021-01-11T18:14:35.6202087Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] 400 MB of memory available to the process out of 6954 MB total system memory
2021-01-11T18:14:35.6204499Z 2021-01-11T18:13:25.303+0000 I  CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIp: "*", ipv6: false, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, replication: { enableMajorityReadConcern: true, replSetName: "rs1" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/bitnami/mongodb/data/db", directoryPerDB: false, journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: "/opt/bitnami/mongodb/logs/mongodb.log", quiet: false, verbosity: 0 } }
2021-01-11T18:14:35.6206849Z 2021-01-11T18:13:25.304+0000 I  STORAGE  [initandlisten] Detected data files in /bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2021-01-11T18:14:35.6207988Z 2021-01-11T18:13:25.304+0000 I  STORAGE  [initandlisten] 
2021-01-11T18:14:35.6209123Z 2021-01-11T18:13:25.304+0000 I  STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2021-01-11T18:14:35.6210387Z 2021-01-11T18:13:25.304+0000 I  STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2021-01-11T18:14:35.6216662Z 2021-01-11T18:13:25.304+0000 I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2021-01-11T18:14:35.6219306Z 2021-01-11T18:13:32.708+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388812:708366][1:0x7f76ee680d40], txn-recover: Recovering log 2 through 3
2021-01-11T18:14:35.6226740Z 2021-01-11T18:13:33.306+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388813:306547][1:0x7f76ee680d40], txn-recover: Recovering log 3 through 3
2021-01-11T18:14:35.6239845Z 2021-01-11T18:13:34.307+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388814:307063][1:0x7f76ee680d40], txn-recover: Main recovery loop: starting at 2/25344 to 3/256
2021-01-11T18:14:35.6242071Z 2021-01-11T18:13:35.407+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388815:407202][1:0x7f76ee680d40], txn-recover: Recovering log 2 through 3
2021-01-11T18:14:35.6258386Z 2021-01-11T18:13:36.204+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388816:204038][1:0x7f76ee680d40], txn-recover: Recovering log 3 through 3
2021-01-11T18:14:35.6271523Z 2021-01-11T18:13:36.709+0000 I  STORAGE  [initandlisten] WiredTiger message [1610388816:709308][1:0x7f76ee680d40], txn-recover: Set global recovery timestamp: (0, 0)
2021-01-11T18:14:35.6288270Z 2021-01-11T18:13:37.108+0000 I  RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2021-01-11T18:14:35.6302813Z 2021-01-11T18:13:37.111+0000 I  STORAGE  [initandlisten] No table logging settings modifications are required for existing WiredTiger tables. Logging enabled? 0
2021-01-11T18:14:35.6318469Z 2021-01-11T18:13:37.203+0000 I  STORAGE  [initandlisten] Timestamp monitor starting
2021-01-11T18:14:35.6332358Z 2021-01-11T18:13:37.204+0000 I  CONTROL  [initandlisten] 
2021-01-11T18:14:35.6340854Z 2021-01-11T18:13:37.204+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-01-11T18:14:35.6398142Z 2021-01-11T18:13:37.204+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-01-11T18:14:35.6400638Z 2021-01-11T18:13:37.204+0000 I  CONTROL  [initandlisten] 
2021-01-11T18:14:35.6402111Z 2021-01-11T18:13:37.204+0000 I  CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 27673 processes, 65536 files. Number of processes should be at least 32768 : 0.5 times number of files.
2021-01-11T18:14:35.6403378Z 2021-01-11T18:13:37.204+0000 I  CONTROL  [initandlisten] 
2021-01-11T18:14:35.6404602Z 2021-01-11T18:13:37.206+0000 I  SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2021-01-11T18:14:35.6405591Z 2021-01-11T18:13:37.207+0000 I  STORAGE  [initandlisten] Flow Control is enabled on this deployment.
2021-01-11T18:14:35.6406426Z 2021-01-11T18:13:37.207+0000 I  SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2021-01-11T18:14:35.6407476Z 2021-01-11T18:13:37.207+0000 I  SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2021-01-11T18:14:35.6408396Z 2021-01-11T18:13:37.209+0000 I  SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2021-01-11T18:14:35.6409833Z 2021-01-11T18:13:37.209+0000 I  FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/bitnami/mongodb/data/db/diagnostic.data'
2021-01-11T18:14:35.6411089Z 2021-01-11T18:13:37.211+0000 I  SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2021-01-11T18:14:35.6412728Z 2021-01-11T18:13:37.211+0000 I  SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2021-01-11T18:14:35.6414143Z 2021-01-11T18:13:37.211+0000 I  REPL     [initandlisten] Did not find local initialized voted for document at startup.
2021-01-11T18:14:35.6415027Z 2021-01-11T18:13:37.213+0000 I  REPL     [initandlisten] Rollback ID is 1
2021-01-11T18:14:35.6416195Z 2021-01-11T18:13:37.213+0000 I  REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2021-01-11T18:14:35.6417742Z 2021-01-11T18:13:37.304+0000 I  CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-01-11T18:14:35.6419226Z 2021-01-11T18:13:37.304+0000 I  SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2021-01-11T18:14:35.6420697Z 2021-01-11T18:13:37.305+0000 I  CONTROL  [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2021-01-11T18:14:35.6422108Z 2021-01-11T18:13:37.305+0000 I  NETWORK  [listener] Listening on /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2021-01-11T18:14:35.6422999Z 2021-01-11T18:13:37.305+0000 I  NETWORK  [listener] Listening on 0.0.0.0
2021-01-11T18:14:35.6423683Z 2021-01-11T18:13:37.305+0000 I  NETWORK  [listener] waiting for connections on port 27017
2021-01-11T18:14:35.6424470Z 2021-01-11T18:13:37.712+0000 I  NETWORK  [listener] connection accepted from 10.1.72.198:51012 #1 (1 connection now open)
2021-01-11T18:14:35.6425967Z 2021-01-11T18:13:37.712+0000 I  NETWORK  [conn1] received client metadata from 10.1.72.198:51012 conn1: { driver: { name: "mongo-csharp-driver", version: "2.10.4.0" }, os: { type: "Linux", name: "Linux 5.4.0-1032-azure #33~18.04.1-Ubuntu SMP Tue Nov 17 11:40:52 UTC 2020", architecture: "x86_64", version: "5.4.0-1032-azure" }, platform: ".NET Core 3.1.10" }
2021-01-11T18:14:35.6427300Z 2021-01-11T18:13:38.002+0000 I  SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
2021-01-11T18:14:35.6428190Z 2021-01-11T18:13:38.605+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37744 #2 (2 connections now open)
2021-01-11T18:14:35.6429620Z 2021-01-11T18:13:38.617+0000 I  NETWORK  [conn2] received client metadata from 127.0.0.1:37744 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6430877Z 2021-01-11T18:13:38.703+0000 I  NETWORK  [conn2] end connection 127.0.0.1:37744 (1 connection now open)
2021-01-11T18:14:35.6431712Z 2021-01-11T18:13:48.604+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37806 #3 (2 connections now open)
2021-01-11T18:14:35.6433134Z 2021-01-11T18:13:48.616+0000 I  NETWORK  [conn3] received client metadata from 127.0.0.1:37806 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6434381Z 2021-01-11T18:13:48.621+0000 I  NETWORK  [conn3] end connection 127.0.0.1:37806 (1 connection now open)
2021-01-11T18:14:35.6435236Z 2021-01-11T18:13:58.610+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37874 #4 (2 connections now open)
2021-01-11T18:14:35.6436654Z 2021-01-11T18:13:58.612+0000 I  NETWORK  [conn4] received client metadata from 127.0.0.1:37874 conn4: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6437901Z 2021-01-11T18:13:58.707+0000 I  NETWORK  [conn4] end connection 127.0.0.1:37874 (1 connection now open)
2021-01-11T18:14:35.6438749Z 2021-01-11T18:14:01.706+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37886 #5 (2 connections now open)
2021-01-11T18:14:35.6440466Z 2021-01-11T18:14:01.707+0000 I  NETWORK  [conn5] received client metadata from 127.0.0.1:37886 conn5: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6441876Z 2021-01-11T18:14:01.806+0000 I  NETWORK  [conn5] end connection 127.0.0.1:37886 (1 connection now open)
2021-01-11T18:14:35.6442638Z 2021-01-11T18:14:03.194+0000 I  NETWORK  [conn1] end connection 10.1.72.198:51012 (0 connections now open)
2021-01-11T18:14:35.6443417Z 2021-01-11T18:14:08.511+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37944 #6 (1 connection now open)
2021-01-11T18:14:35.6444765Z 2021-01-11T18:14:08.511+0000 I  NETWORK  [conn6] received client metadata from 127.0.0.1:37944 conn6: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6447016Z 2021-01-11T18:14:08.611+0000 I  NETWORK  [conn6] end connection 127.0.0.1:37944 (0 connections now open)
2021-01-11T18:14:35.6448304Z 2021-01-11T18:14:11.704+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:37964 #7 (1 connection now open)
2021-01-11T18:14:35.6450879Z 2021-01-11T18:14:11.704+0000 I  NETWORK  [conn7] received client metadata from 127.0.0.1:37964 conn7: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6452148Z 2021-01-11T18:14:11.709+0000 I  NETWORK  [conn7] end connection 127.0.0.1:37964 (0 connections now open)
2021-01-11T18:14:35.6452986Z 2021-01-11T18:14:18.508+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38008 #8 (1 connection now open)
2021-01-11T18:14:35.6454364Z 2021-01-11T18:14:18.509+0000 I  NETWORK  [conn8] received client metadata from 127.0.0.1:38008 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6455570Z 2021-01-11T18:14:18.609+0000 I  NETWORK  [conn8] end connection 127.0.0.1:38008 (0 connections now open)
2021-01-11T18:14:35.6456394Z 2021-01-11T18:14:21.611+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38034 #9 (1 connection now open)
2021-01-11T18:14:35.6457937Z 2021-01-11T18:14:21.611+0000 I  NETWORK  [conn9] received client metadata from 127.0.0.1:38034 conn9: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6459435Z 2021-01-11T18:14:21.705+0000 I  NETWORK  [conn9] end connection 127.0.0.1:38034 (0 connections now open)
2021-01-11T18:14:35.6460281Z 2021-01-11T18:14:28.615+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38090 #10 (1 connection now open)
2021-01-11T18:14:35.6462091Z 2021-01-11T18:14:28.615+0000 I  NETWORK  [conn10] received client metadata from 127.0.0.1:38090 conn10: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6463338Z 2021-01-11T18:14:28.708+0000 I  NETWORK  [conn10] end connection 127.0.0.1:38090 (0 connections now open)
2021-01-11T18:14:35.6464194Z 2021-01-11T18:14:31.606+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38108 #11 (1 connection now open)
2021-01-11T18:14:35.6465629Z 2021-01-11T18:14:31.606+0000 I  NETWORK  [conn11] received client metadata from 127.0.0.1:38108 conn11: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-11T18:14:35.6467030Z 2021-01-11T18:14:31.705+0000 I  NETWORK  [conn11] end connection 127.0.0.1:38108 (0 connections now open)
2021-01-11T18:14:35.6482942Z logs of pod/run-tests-c7b2r
2021-01-11T18:14:35.7979462Z Error from server (BadRequest): container "export-files" in pod "run-tests-c7b2r" is waiting to start: PodInitializing
2021-01-11T18:14:35.9455540Z error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
2021-01-11T18:14:35.9457092Z    template was:
2021-01-11T18:14:35.9457482Z        {.items[0].metadata.name}
2021-01-11T18:14:35.9457766Z    object given to jsonpath engine was:
2021-01-11T18:14:35.9458251Z        map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
2021-01-11T18:14:35.9458859Z 
2021-01-11T18:14:35.9458974Z 
2021-01-11T18:14:36.0837837Z error: arguments in resource/name form must have a single resource and name
2021-01-11T18:14:36.2135739Z error: arguments in resource/name form must have a single resource and name
2021-01-11T18:14:36.2172749Z Failed to get test results from container

The primary there looks to be in an odd state? Judging by the logs of pod/local-mongo-0, it's stuck at Configuring MongoDB primary node. The secondaries have started mongod but seem to believe there's been a restart - which implies the data was not deleted? However, the primary seems to think the db directory is empty (Deploying MongoDB from scratch...)?

Sorry for the long logs, hope they help :D I'll try to get the logs from the init containers, but I think that line Configuring MongoDB primary node means the db directory - at least for the primary - is empty.
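In case it's useful, these are the plain kubectl commands I'm planning to use to grab those (pod names are the ones from this run; the init-container name is just a placeholder since I haven't checked whether this chart actually defines one):

# previous container logs after a restart
kubectl logs pod/local-mongo-1 --previous
# list any init containers defined on the pod
kubectl get pod local-mongo-1 -o jsonpath='{.spec.initContainers[*].name}'
# logs of a specific init container, if one exists
kubectl logs pod/local-mongo-1 -c <init-container-name>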

Edit: That said - the secondaries have both had one restart (presumably because they could not find the primary and are now stuck in election hell?), so their data dir might also have been empty on the first run.

carrodher commented 3 years ago

Thanks for the detailed information. I am trying to reproduce the issue, but without luck so far; it seems to be something specific to a particular scenario.

You can get more information and debug traces in the container logs by setting the following parameter in the values:

image:
  ## Set to true if you would like to see extra information on logs
  debug: true
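For example, since the chart is being used as a subchart keyed under mongodb in your values, it can also be enabled at install/upgrade time with something like this (my-release and the chart path are placeholders):

helm upgrade --install my-release . --set mongodb.image.debug=true

This simply sets BITNAMI_DEBUG=true in the MongoDB containers, so the setup scripts print their debug traces.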

Apart from the container logs, can you see any issue or weird message when describing the pods or other resources? For example, issues mounting volumes or something like that?
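For example (resource names taken from your values and the StatefulSet naming convention):

kubectl describe pod local-mongo-0
kubectl describe pvc datadir-local-mongo-0
kubectl get events --sort-by=.metadata.creationTimestamp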

dave-yotta commented 3 years ago

Ok, here are the debug and previous logs. The volume seems mounted ok - it looks like both the primary and the secondaries are undergoing setup. The secondaries look to have completed their scripts and restarted, but the master is stuck in the config step?
While stuck in this state, I am able to contact the primary and run rs.status(), which returns "no replset config has been received".

Is there any way to disable the replica set initialisation in the bitnami chart so that I can do it myself using the method above? That is, waiting for all mongod pods to come up and then passing the full config to the master, as opposed to bringing up the master, initialising it as primary, and having the secondaries call rs.add as they come up?
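To be concrete, the kind of manual initialisation I mean is roughly this (just a sketch against the headless-service DNS names from this deployment, run once all three mongod pods are up; not something the chart provides):

mongo --host local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017 --eval '
  rs.initiate({
    _id: "rs1",
    members: [
      { _id: 0, host: "local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017" },
      { _id: 1, host: "local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017" },
      { _id: 2, host: "local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017" }
    ]
  })'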

It might not result in anything different, but I'm not seeing what the problem could be here...

Also, what is the value of MONGODB_MAX_TIMEOUT? I'm waiting 5 min + setup time here, which is around 7 min according to the logs - I could wait longer... But replica set initialisation is usually immediate, so it does seem like there's a problem here. I'm not seeing the output of mongodb_is_primary_node_initiated, which is presumably the loop the master is stuck in?
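(For context, the shape I imagine that wait taking is roughly the sketch below - just my mental model of a generic retry loop, not the actual libmongodb.sh code, and I don't know what the real timeout value is.)

# hypothetical sketch of a primary wait loop, not the Bitnami implementation
wait_for_primary() {
  local host="$1" elapsed=0
  until mongo --host "$host" --quiet --eval 'db.isMaster().ismaster' | grep -q true; do
    sleep 5
    elapsed=$((elapsed + 5))
    [ "$elapsed" -ge "${MONGODB_MAX_TIMEOUT}" ] && return 1
  done
}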

As for repro - I'm only getting this using microk8s on Azure DevOps CI; I've not seen it happen on my Win10 Docker Kubernetes cluster.

2021-01-12T13:29:05.1748766Z NAME                                          READY   STATUS       RESTARTS   AGE
2021-01-12T13:29:05.1756278Z pod/local-redis-deployment-7dc456d96d-mjcvt   1/1     Running      0          7m8s
2021-01-12T13:29:05.1757503Z pod/local-mongo-0                             1/1     Running      0          7m8s
2021-01-12T13:29:05.1758746Z pod/check-replica-set-zthgq                   1/1     Running      0          7m8s
2021-01-12T13:29:05.1760051Z pod/local-mongo-1                             1/1     Running      1          6m
2021-01-12T13:29:05.1761285Z pod/run-tests-4xb5h                           0/1     Init:Error   0          7m8s
2021-01-12T13:29:05.1770092Z pod/local-mongo-2                             1/1     Running      1          4m56s
2021-01-12T13:29:05.1770859Z 
2021-01-12T13:29:05.1771772Z NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
2021-01-12T13:29:05.1772549Z service/kubernetes               ClusterIP   10.152.183.1     <none>        443/TCP           16m
2021-01-12T13:29:05.1773590Z service/local-mongo-headless     ClusterIP   None             <none>        27017/TCP         7m9s
2021-01-12T13:29:05.1774681Z service/local-mongo-2-external   NodePort    10.152.183.30    <none>        27017:30003/TCP   7m9s
2021-01-12T13:29:05.1775794Z service/local-redis-service      NodePort    10.152.183.5     <none>        30004:30004/TCP   7m8s
2021-01-12T13:29:05.1776874Z service/local-mongo-0-external   NodePort    10.152.183.37    <none>        27017:30001/TCP   7m8s
2021-01-12T13:29:05.1777990Z service/local-mongo-1-external   NodePort    10.152.183.241   <none>        27017:30002/TCP   7m8s
2021-01-12T13:29:05.1778633Z 
2021-01-12T13:29:05.1779391Z NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
2021-01-12T13:29:05.1780763Z deployment.apps/local-redis-deployment   1/1     1            1           7m8s
2021-01-12T13:29:05.1781398Z 
2021-01-12T13:29:05.1781943Z NAME                                                DESIRED   CURRENT   READY   AGE
2021-01-12T13:29:05.1782852Z replicaset.apps/local-redis-deployment-7dc456d96d   1         1         1       7m8s
2021-01-12T13:29:05.1783423Z 
2021-01-12T13:29:05.1783905Z NAME                           READY   AGE
2021-01-12T13:29:05.1785052Z statefulset.apps/local-mongo   3/3     7m8s
2021-01-12T13:29:05.1786216Z 
2021-01-12T13:29:05.1786901Z NAME                          COMPLETIONS   DURATION   AGE
2021-01-12T13:29:05.1788289Z job.batch/check-replica-set   0/1           7m8s       7m8s
2021-01-12T13:29:05.1791381Z job.batch/run-tests           0/1           7m8s       7m8s

2021-01-12T13:29:05.5970854Z describing pod/local-mongo-0
2021-01-12T13:29:05.7891966Z Name:         local-mongo-0
2021-01-12T13:29:05.7892960Z Namespace:    default
2021-01-12T13:29:05.7893416Z Priority:     0
2021-01-12T13:29:05.7894338Z Node:         fv-az50-322/10.1.0.4
2021-01-12T13:29:05.7894904Z Start Time:   Tue, 12 Jan 2021 13:21:57 +0000
2021-01-12T13:29:05.7895399Z Labels:       app.kubernetes.io/component=mongodb
2021-01-12T13:29:05.7896205Z               app.kubernetes.io/instance=alloy-test
2021-01-12T13:29:05.7897134Z               app.kubernetes.io/managed-by=Helm
2021-01-12T13:29:05.7897680Z               app.kubernetes.io/name=mongodb
2021-01-12T13:29:05.7898421Z               controller-revision-hash=local-mongo-675554d77f
2021-01-12T13:29:05.7899265Z               helm.sh/chart=mongodb-10.3.1
2021-01-12T13:29:05.7900189Z               statefulset.kubernetes.io/pod-name=local-mongo-0
2021-01-12T13:29:05.7900752Z Annotations:  cni.projectcalico.org/podIP: 10.1.72.198/32
2021-01-12T13:29:05.7901275Z               cni.projectcalico.org/podIPs: 10.1.72.198/32
2021-01-12T13:29:05.7901721Z Status:       Running
2021-01-12T13:29:05.7902117Z IP:           10.1.72.198
2021-01-12T13:29:05.7902496Z IPs:
2021-01-12T13:29:05.7902905Z   IP:           10.1.72.198
2021-01-12T13:29:05.7903518Z Controlled By:  StatefulSet/local-mongo
2021-01-12T13:29:05.7904016Z Containers:
2021-01-12T13:29:05.7904280Z   mongodb:
2021-01-12T13:29:05.7905875Z     Container ID:  containerd://689618e824c44ac82b9f008280df41d3697eadb297f64f4aa544625f64631716
2021-01-12T13:29:05.7906526Z     Image:         docker.io/bitnami/mongodb:4.2
2021-01-12T13:29:05.7907101Z     Image ID:      docker.io/bitnami/mongodb@sha256:d18fa4c7f5ab80cbd288e46c44b89682cc7139ca1d2a3f35cafcc4c9346a87e5
2021-01-12T13:29:05.7907646Z     Port:          27017/TCP
2021-01-12T13:29:05.7908049Z     Host Port:     0/TCP
2021-01-12T13:29:05.7908442Z     Command:
2021-01-12T13:29:05.7908761Z       /scripts/setup.sh
2021-01-12T13:29:05.7909250Z     State:          Running
2021-01-12T13:29:05.7909660Z       Started:      Tue, 12 Jan 2021 13:22:53 +0000
2021-01-12T13:29:05.7910081Z     Ready:          True
2021-01-12T13:29:05.7910446Z     Restart Count:  0
2021-01-12T13:29:05.7910753Z     Limits:
2021-01-12T13:29:05.7911092Z       cpu:     100m
2021-01-12T13:29:05.7911371Z       memory:  400Mi
2021-01-12T13:29:05.7911976Z     Requests:
2021-01-12T13:29:05.7912337Z       cpu:      100m
2021-01-12T13:29:05.7912703Z       memory:   400Mi
2021-01-12T13:29:05.7913640Z     Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
2021-01-12T13:29:05.7914637Z     Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
2021-01-12T13:29:05.7915230Z     Environment:
2021-01-12T13:29:05.7915627Z       BITNAMI_DEBUG:                    true
2021-01-12T13:29:05.7916339Z       MY_POD_NAME:                      local-mongo-0 (v1:metadata.name)
2021-01-12T13:29:05.7916952Z       MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
2021-01-12T13:29:05.7917670Z       K8S_SERVICE_NAME:                 local-mongo-headless
2021-01-12T13:29:05.7918536Z       MONGODB_INITIAL_PRIMARY_HOST:     local-mongo-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
2021-01-12T13:29:05.7919155Z       MONGODB_REPLICA_SET_NAME:         rs1
2021-01-12T13:29:05.7919583Z       ALLOW_EMPTY_PASSWORD:             yes
2021-01-12T13:29:05.7920015Z       MONGODB_SYSTEM_LOG_VERBOSITY:     0
2021-01-12T13:29:05.7920448Z       MONGODB_DISABLE_SYSTEM_LOG:       no
2021-01-12T13:29:05.7920896Z       MONGODB_ENABLE_IPV6:              no
2021-01-12T13:29:05.7921344Z       MONGODB_ENABLE_DIRECTORY_PER_DB:  no
2021-01-12T13:29:05.7921743Z     Mounts:
2021-01-12T13:29:05.7922061Z       /bitnami/mongodb from datadir (rw)
2021-01-12T13:29:05.7922481Z       /scripts/setup.sh from scripts (rw,path="setup.sh")
2021-01-12T13:29:05.7923224Z       /var/run/secrets/kubernetes.io/serviceaccount from local-mongo-token-slsg7 (ro)
2021-01-12T13:29:05.7923759Z Conditions:
2021-01-12T13:29:05.7924083Z   Type              Status
2021-01-12T13:29:05.7924450Z   Initialized       True 
2021-01-12T13:29:05.7924800Z   Ready             True 
2021-01-12T13:29:05.7925160Z   ContainersReady   True 
2021-01-12T13:29:05.7925494Z   PodScheduled      True 
2021-01-12T13:29:05.7925826Z Volumes:
2021-01-12T13:29:05.7926075Z   datadir:
2021-01-12T13:29:05.7926515Z     Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
2021-01-12T13:29:05.7927212Z     ClaimName:  datadir-local-mongo-0
2021-01-12T13:29:05.7927709Z     ReadOnly:   false
2021-01-12T13:29:05.7928035Z   scripts:
2021-01-12T13:29:05.7928416Z     Type:      ConfigMap (a volume populated by a ConfigMap)
2021-01-12T13:29:05.7929042Z     Name:      local-mongo-scripts
2021-01-12T13:29:05.7929485Z     Optional:  false
2021-01-12T13:29:05.7930043Z   local-mongo-token-slsg7:
2021-01-12T13:29:05.7930551Z     Type:        Secret (a volume populated by a Secret)
2021-01-12T13:29:05.7931176Z     SecretName:  local-mongo-token-slsg7
2021-01-12T13:29:05.7931634Z     Optional:    false
2021-01-12T13:29:05.7932004Z QoS Class:       Guaranteed
2021-01-12T13:29:05.7932473Z Node-Selectors:  <none>
2021-01-12T13:29:05.7933179Z Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
2021-01-12T13:29:05.7933787Z                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
2021-01-12T13:29:05.7934202Z Events:
2021-01-12T13:29:05.7934516Z   Type     Reason            Age    From               Message
2021-01-12T13:29:05.7935185Z   ----     ------            ----   ----               -------
2021-01-12T13:29:05.7936066Z   Warning  FailedScheduling  7m8s   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-01-12T13:29:05.7937028Z   Normal   Scheduled         7m8s   default-scheduler  Successfully assigned default/local-mongo-0 to fv-az50-322
2021-01-12T13:29:05.7937692Z   Normal   Pulling           7m6s   kubelet            Pulling image "docker.io/bitnami/mongodb:4.2"
2021-01-12T13:29:05.7938318Z   Normal   Pulled            6m19s  kubelet            Successfully pulled image "docker.io/bitnami/mongodb:4.2" in 47.048588772s
2021-01-12T13:29:05.7938895Z   Normal   Created           6m12s  kubelet            Created container mongodb
2021-01-12T13:29:05.7939542Z   Normal   Started           6m12s  kubelet            Started container mongodb
2021-01-12T13:29:05.7940230Z   Warning  Unhealthy         5m50s  kubelet            Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:05.7940876Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:05.7942094Z 2021-01-12T13:23:15.033+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:05.7942934Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:05.7943335Z @(connect):2:6
2021-01-12T13:29:05.7944000Z 2021-01-12T13:23:15.036+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:05.7944828Z 2021-01-12T13:23:15.036+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:05.7945625Z   Warning  Unhealthy  5m41s  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:05.7946525Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:05.7947836Z 2021-01-12T13:23:24.440+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:05.7948740Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:05.7949168Z @(connect):2:6
2021-01-12T13:29:05.7950208Z 2021-01-12T13:23:24.532+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:05.7978811Z 2021-01-12T13:23:24.532+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:05.7979549Z   Warning  Unhealthy  5m39s  kubelet  Liveness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:05.7980214Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:05.7981424Z 2021-01-12T13:23:26.234+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:05.7982245Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:05.7982642Z @(connect):2:6
2021-01-12T13:29:05.7983323Z 2021-01-12T13:23:26.236+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:05.7984329Z 2021-01-12T13:23:26.236+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:05.7988446Z previous logs of pod/local-mongo-0
2021-01-12T13:29:05.9695863Z Error from server (BadRequest): previous terminated container "mongodb" in pod "local-mongo-0" not found
2021-01-12T13:29:05.9729004Z logs of pod/local-mongo-0
2021-01-12T13:29:06.1242338Z Advertised Hostname: 51.132.1.251
2021-01-12T13:29:06.1247088Z Pod name matches initial primary pod name, configuring node as a primary
2021-01-12T13:29:06.1249208Z mongodb 13:22:53.73 
2021-01-12T13:29:06.1250070Z mongodb 13:22:53.74 Welcome to the Bitnami mongodb container
2021-01-12T13:29:06.1250989Z mongodb 13:22:53.75 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-12T13:29:06.1252187Z mongodb 13:22:53.75 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-12T13:29:06.1256882Z mongodb 13:22:53.83 
2021-01-12T13:29:06.1258778Z mongodb 13:22:53.86 INFO  ==> ** Starting MongoDB setup **
2021-01-12T13:29:06.1260116Z mongodb 13:22:53.95 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-12T13:29:06.1261365Z mongodb 13:22:53.97 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-12T13:29:06.1262466Z mongodb 13:22:54.15 INFO  ==> Initializing MongoDB...
2021-01-12T13:29:06.1263709Z mongodb 13:22:54.34 INFO  ==> Deploying MongoDB from scratch...
2021-01-12T13:29:06.1264847Z mongodb 13:22:54.43 DEBUG ==> Starting MongoDB in background...
2021-01-12T13:29:06.1265773Z about to fork child process, waiting until server is ready for connections.
2021-01-12T13:29:06.1266306Z forked process: 43
2021-01-12T13:29:06.1266713Z child process started successfully, parent exiting
2021-01-12T13:29:06.1267515Z mongodb 13:23:08.13 INFO  ==> Creating users...
2021-01-12T13:29:06.1268690Z mongodb 13:23:08.13 INFO  ==> Users created
2021-01-12T13:29:06.1270412Z mongodb 13:23:08.43 INFO  ==> Configuring MongoDB replica set...
2021-01-12T13:29:06.1271430Z mongodb 13:23:08.44 INFO  ==> Stopping MongoDB...
2021-01-12T13:29:06.1272380Z mongodb 13:23:11.45 DEBUG ==> Starting MongoDB in background...
2021-01-12T13:29:06.1273044Z about to fork child process, waiting until server is ready for connections.
2021-01-12T13:29:06.1273521Z forked process: 127
2021-01-12T13:29:06.1273928Z child process started successfully, parent exiting
2021-01-12T13:29:06.1274722Z mongodb 13:23:34.13 INFO  ==> Configuring MongoDB primary node
2021-01-12T13:29:06.1290506Z describing pod/check-replica-set-zthgq
2021-01-12T13:29:06.3091866Z Name:         check-replica-set-zthgq
2021-01-12T13:29:06.3094040Z Namespace:    default
2021-01-12T13:29:06.3094916Z Priority:     0
2021-01-12T13:29:06.3095854Z Node:         fv-az50-322/10.1.0.4
2021-01-12T13:29:06.3096299Z Start Time:   Tue, 12 Jan 2021 13:21:57 +0000
2021-01-12T13:29:06.3096898Z Labels:       app=check-replica-set
2021-01-12T13:29:06.3097615Z               controller-uid=fdb0b620-bd09-448e-9158-ae20a8474be4
2021-01-12T13:29:06.3098316Z               job-name=check-replica-set
2021-01-12T13:29:06.3098781Z Annotations:  cni.projectcalico.org/podIP: 10.1.72.200/32
2021-01-12T13:29:06.3099233Z               cni.projectcalico.org/podIPs: 10.1.72.200/32
2021-01-12T13:29:06.3099604Z Status:       Running
2021-01-12T13:29:06.3099903Z IP:           10.1.72.200
2021-01-12T13:29:06.3100188Z IPs:
2021-01-12T13:29:06.3100475Z   IP:           10.1.72.200
2021-01-12T13:29:06.3101016Z Controlled By:  Job/check-replica-set
2021-01-12T13:29:06.3101527Z Containers:
2021-01-12T13:29:06.3102122Z   check-replica-set:
2021-01-12T13:29:06.3102600Z     Container ID:  containerd://0957b777d0001ee0c6fe46c8a62eb7cc5023b25267333fa18ea76a84679a0462
2021-01-12T13:29:06.3103048Z     Image:         mongo:4.2
2021-01-12T13:29:06.3103490Z     Image ID:      docker.io/library/mongo@sha256:8e57b654461d342730fc716ed7a2b864dd2eb1557650ca4b5054f57a808ad857
2021-01-12T13:29:06.3103941Z     Port:          <none>
2021-01-12T13:29:06.3104224Z     Host Port:     <none>
2021-01-12T13:29:06.3104760Z     Command:
2021-01-12T13:29:06.3105000Z       /bin/bash
2021-01-12T13:29:06.3105661Z       -c
2021-01-12T13:29:06.3105960Z     Args:
2021-01-12T13:29:06.3106982Z        for host in 0 1 2; do stdbuf -o0 printf "checking for member $host"; while ! mongo local-mongo-$host-external:27017 --eval "db.version()" > /dev/null 2>&1; do sleep 1; stdbuf -o0 printf .; done; echo ok; done;
2021-01-12T13:29:06.3108729Z       mongo local-mongo-0-external:27017 --eval "rs.status()"; mongo local-mongo-1-external:27017 --eval "rs.status()"; mongo local-mongo-2-external:27017 --eval "rs.status()";
2021-01-12T13:29:06.3110068Z       stdbuf -o0 printf "checking for replication OK"; while ! mongo local-mongo-0-external:27017 --eval "'SETOKVALUE='+ rs.status().ok" | grep "SETOKVALUE=1" > /dev/null 2>&1; do sleep 1; stdbuf -o0 printf .; done; echo ok;
2021-01-12T13:29:06.3111308Z       mongo local-mongo-0-external:27017 --eval "rs.status()"; mongo local-mongo-1-external:27017 --eval "rs.status()"; mongo local-mongo-2-external:27017 --eval "rs.status()";
2021-01-12T13:29:06.3112908Z       mongo local-mongo-0-external:27017 --eval "rs.serverStatus()"; mongo local-mongo-1-external:27017 --eval "rs.serverStatus()"; mongo local-mongo-2-external:27017 --eval "rs.serverStatus()"; 
2021-01-12T13:29:06.3113759Z     State:          Running
2021-01-12T13:29:06.3114125Z       Started:      Tue, 12 Jan 2021 13:23:28 +0000
2021-01-12T13:29:06.3114614Z     Ready:          True
2021-01-12T13:29:06.3114910Z     Restart Count:  0
2021-01-12T13:29:06.3115227Z     Environment:    <none>
2021-01-12T13:29:06.3115521Z     Mounts:
2021-01-12T13:29:06.3116146Z       /var/run/secrets/kubernetes.io/serviceaccount from default-token-nqvl8 (ro)
2021-01-12T13:29:06.3116596Z Conditions:
2021-01-12T13:29:06.3116872Z   Type              Status
2021-01-12T13:29:06.3117178Z   Initialized       True 
2021-01-12T13:29:06.3117607Z   Ready             True 
2021-01-12T13:29:06.3117892Z   ContainersReady   True 
2021-01-12T13:29:06.3118182Z   PodScheduled      True 
2021-01-12T13:29:06.3118441Z Volumes:
2021-01-12T13:29:06.3118904Z   default-token-nqvl8:
2021-01-12T13:29:06.3119300Z     Type:        Secret (a volume populated by a Secret)
2021-01-12T13:29:06.3119865Z     SecretName:  default-token-nqvl8
2021-01-12T13:29:06.3120233Z     Optional:    false
2021-01-12T13:29:06.3120644Z QoS Class:       BestEffort
2021-01-12T13:29:06.3121142Z Node-Selectors:  <none>
2021-01-12T13:29:06.3121789Z Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
2021-01-12T13:29:06.3122448Z                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
2021-01-12T13:29:06.3122823Z Events:
2021-01-12T13:29:06.3123139Z   Type    Reason     Age    From               Message
2021-01-12T13:29:06.3123762Z   ----    ------     ----   ----               -------
2021-01-12T13:29:06.3124491Z   Normal  Scheduled  7m9s   default-scheduler  Successfully assigned default/check-replica-set-zthgq to fv-az50-322
2021-01-12T13:29:06.3125048Z   Normal  Pulling    7m7s   kubelet            Pulling image "mongo:4.2"
2021-01-12T13:29:06.3125590Z   Normal  Pulled     5m56s  kubelet            Successfully pulled image "mongo:4.2" in 1m11.472113529s
2021-01-12T13:29:06.3126319Z   Normal  Created    5m38s  kubelet            Created container check-replica-set
2021-01-12T13:29:06.3128001Z   Normal  Started    5m38s  kubelet            Started container check-replica-set
2021-01-12T13:29:06.3139634Z previous logs of pod/check-replica-set-zthgq
2021-01-12T13:29:06.4744439Z Error from server (BadRequest): previous terminated container "check-replica-set" in pod "check-replica-set-zthgq" not found
2021-01-12T13:29:06.4793798Z logs of pod/check-replica-set-zthgq
2021-01-12T13:29:06.6629614Z checking for member 0ok
2021-01-12T13:29:06.6632791Z checking for member 1.........................ok
2021-01-12T13:29:06.6633451Z checking for member 2...........................ok
2021-01-12T13:29:06.6633958Z MongoDB shell version v4.2.11
2021-01-12T13:29:06.6643900Z connecting to: mongodb://local-mongo-0-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.6645106Z Implicit session: session { "id" : UUID("4e4dd59e-ac01-400b-961a-b857d064b13d") }
2021-01-12T13:29:06.6645630Z MongoDB server version: 4.2.11
2021-01-12T13:29:06.6645860Z {
2021-01-12T13:29:06.6646091Z    "operationTime" : Timestamp(0, 0),
2021-01-12T13:29:06.6646349Z    "ok" : 0,
2021-01-12T13:29:06.6646616Z    "errmsg" : "no replset config has been received",
2021-01-12T13:29:06.6646873Z    "code" : 94,
2021-01-12T13:29:06.6647120Z    "codeName" : "NotYetInitialized",
2021-01-12T13:29:06.6647377Z    "$clusterTime" : {
2021-01-12T13:29:06.6647625Z        "clusterTime" : Timestamp(0, 0),
2021-01-12T13:29:06.6647990Z        "signature" : {
2021-01-12T13:29:06.6648248Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-12T13:29:06.6648525Z            "keyId" : NumberLong(0)
2021-01-12T13:29:06.6648728Z        }
2021-01-12T13:29:06.6648874Z    }
2021-01-12T13:29:06.6649033Z }
2021-01-12T13:29:06.6649214Z MongoDB shell version v4.2.11
2021-01-12T13:29:06.6650155Z connecting to: mongodb://local-mongo-1-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.6651043Z Implicit session: session { "id" : UUID("d0bbde6f-f81a-43ad-868c-5888adeabe9c") }
2021-01-12T13:29:06.6651401Z MongoDB server version: 4.2.11
2021-01-12T13:29:06.6651615Z {
2021-01-12T13:29:06.6651813Z    "operationTime" : Timestamp(0, 0),
2021-01-12T13:29:06.6652045Z    "ok" : 0,
2021-01-12T13:29:06.6652294Z    "errmsg" : "no replset config has been received",
2021-01-12T13:29:06.6652533Z    "code" : 94,
2021-01-12T13:29:06.6652764Z    "codeName" : "NotYetInitialized",
2021-01-12T13:29:06.6652992Z    "$clusterTime" : {
2021-01-12T13:29:06.6653241Z        "clusterTime" : Timestamp(0, 0),
2021-01-12T13:29:06.6653466Z        "signature" : {
2021-01-12T13:29:06.6653734Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-12T13:29:06.6654012Z            "keyId" : NumberLong(0)
2021-01-12T13:29:06.6654202Z        }
2021-01-12T13:29:06.6654364Z    }
2021-01-12T13:29:06.6654514Z }
2021-01-12T13:29:06.6654709Z MongoDB shell version v4.2.11
2021-01-12T13:29:06.6655333Z connecting to: mongodb://local-mongo-2-external:27017/test?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.6656048Z Implicit session: session { "id" : UUID("661af8e3-131f-46bb-bcf2-cfa9564d3901") }
2021-01-12T13:29:06.6656520Z MongoDB server version: 4.2.11
2021-01-12T13:29:06.6656734Z {
2021-01-12T13:29:06.6656964Z    "operationTime" : Timestamp(0, 0),
2021-01-12T13:29:06.6657207Z    "ok" : 0,
2021-01-12T13:29:06.6657457Z    "errmsg" : "no replset config has been received",
2021-01-12T13:29:06.6657726Z    "code" : 94,
2021-01-12T13:29:06.6657955Z    "codeName" : "NotYetInitialized",
2021-01-12T13:29:06.6658214Z    "$clusterTime" : {
2021-01-12T13:29:06.6658473Z        "clusterTime" : Timestamp(0, 0),
2021-01-12T13:29:06.6658711Z        "signature" : {
2021-01-12T13:29:06.6658997Z            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
2021-01-12T13:29:06.6659277Z            "keyId" : NumberLong(0)
2021-01-12T13:29:06.6659499Z        }
2021-01-12T13:29:06.6659670Z    }
2021-01-12T13:29:06.6659826Z }
2021-01-12T13:29:06.6690418Z describing pod/local-mongo-1
2021-01-12T13:29:06.8523474Z Name:         local-mongo-1
2021-01-12T13:29:06.8524332Z Namespace:    default
2021-01-12T13:29:06.8524675Z Priority:     0
2021-01-12T13:29:06.8525238Z Node:         fv-az50-322/10.1.0.4
2021-01-12T13:29:06.8525901Z Start Time:   Tue, 12 Jan 2021 13:23:23 +0000
2021-01-12T13:29:06.8526779Z Labels:       app.kubernetes.io/component=mongodb
2021-01-12T13:29:06.8527492Z               app.kubernetes.io/instance=alloy-test
2021-01-12T13:29:06.8528257Z               app.kubernetes.io/managed-by=Helm
2021-01-12T13:29:06.8528661Z               app.kubernetes.io/name=mongodb
2021-01-12T13:29:06.8529866Z               controller-revision-hash=local-mongo-675554d77f
2021-01-12T13:29:06.8530810Z               helm.sh/chart=mongodb-10.3.1
2021-01-12T13:29:06.8531707Z               statefulset.kubernetes.io/pod-name=local-mongo-1
2021-01-12T13:29:06.8532555Z Annotations:  cni.projectcalico.org/podIP: 10.1.72.201/32
2021-01-12T13:29:06.8533299Z               cni.projectcalico.org/podIPs: 10.1.72.201/32
2021-01-12T13:29:06.8533666Z Status:       Running
2021-01-12T13:29:06.8533966Z IP:           10.1.72.201
2021-01-12T13:29:06.8534220Z IPs:
2021-01-12T13:29:06.8534477Z   IP:           10.1.72.201
2021-01-12T13:29:06.8535170Z Controlled By:  StatefulSet/local-mongo
2021-01-12T13:29:06.8535559Z Containers:
2021-01-12T13:29:06.8535804Z   mongodb:
2021-01-12T13:29:06.8536226Z     Container ID:  containerd://c5000e4203f916b022704bb8cf66a676b89b95b4c0c89a6660cd0421dd1f9fb3
2021-01-12T13:29:06.8536729Z     Image:         docker.io/bitnami/mongodb:4.2
2021-01-12T13:29:06.8537232Z     Image ID:      docker.io/bitnami/mongodb@sha256:d18fa4c7f5ab80cbd288e46c44b89682cc7139ca1d2a3f35cafcc4c9346a87e5
2021-01-12T13:29:06.8537732Z     Port:          27017/TCP
2021-01-12T13:29:06.8538059Z     Host Port:     0/TCP
2021-01-12T13:29:06.8538345Z     Command:
2021-01-12T13:29:06.8538631Z       /scripts/setup.sh
2021-01-12T13:29:06.8539177Z     State:          Running
2021-01-12T13:29:06.8539571Z       Started:      Tue, 12 Jan 2021 13:27:33 +0000
2021-01-12T13:29:06.8540096Z     Last State:     Terminated
2021-01-12T13:29:06.8540412Z       Reason:       Error
2021-01-12T13:29:06.8540713Z       Exit Code:    1
2021-01-12T13:29:06.8541073Z       Started:      Tue, 12 Jan 2021 13:23:28 +0000
2021-01-12T13:29:06.8541489Z       Finished:     Tue, 12 Jan 2021 13:27:33 +0000
2021-01-12T13:29:06.8541983Z     Ready:          True
2021-01-12T13:29:06.8542280Z     Restart Count:  1
2021-01-12T13:29:06.8542568Z     Limits:
2021-01-12T13:29:06.8542825Z       cpu:     100m
2021-01-12T13:29:06.8543229Z       memory:  400Mi
2021-01-12T13:29:06.8543652Z     Requests:
2021-01-12T13:29:06.8543919Z       cpu:      100m
2021-01-12T13:29:06.8544211Z       memory:   400Mi
2021-01-12T13:29:06.8544944Z     Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
2021-01-12T13:29:06.8547010Z     Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
2021-01-12T13:29:06.8547495Z     Environment:
2021-01-12T13:29:06.8547792Z       BITNAMI_DEBUG:                    true
2021-01-12T13:29:06.8548406Z       MY_POD_NAME:                      local-mongo-1 (v1:metadata.name)
2021-01-12T13:29:06.8548863Z       MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
2021-01-12T13:29:06.8549435Z       K8S_SERVICE_NAME:                 local-mongo-headless
2021-01-12T13:29:06.8550138Z       MONGODB_INITIAL_PRIMARY_HOST:     local-mongo-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
2021-01-12T13:29:06.8550564Z       MONGODB_REPLICA_SET_NAME:         rs1
2021-01-12T13:29:06.8551101Z       ALLOW_EMPTY_PASSWORD:             yes
2021-01-12T13:29:06.8551404Z       MONGODB_SYSTEM_LOG_VERBOSITY:     0
2021-01-12T13:29:06.8551891Z       MONGODB_DISABLE_SYSTEM_LOG:       no
2021-01-12T13:29:06.8552478Z       MONGODB_ENABLE_IPV6:              no
2021-01-12T13:29:06.8552925Z       MONGODB_ENABLE_DIRECTORY_PER_DB:  no
2021-01-12T13:29:06.8553554Z     Mounts:
2021-01-12T13:29:06.8553854Z       /bitnami/mongodb from datadir (rw)
2021-01-12T13:29:06.8554163Z       /scripts/setup.sh from scripts (rw,path="setup.sh")
2021-01-12T13:29:06.8554859Z       /var/run/secrets/kubernetes.io/serviceaccount from local-mongo-token-slsg7 (ro)
2021-01-12T13:29:06.8555223Z Conditions:
2021-01-12T13:29:06.8556008Z   Type              Status
2021-01-12T13:29:06.8556270Z   Initialized       True 
2021-01-12T13:29:06.8556587Z   Ready             True 
2021-01-12T13:29:06.8557156Z   ContainersReady   True 
2021-01-12T13:29:06.8557564Z   PodScheduled      True 
2021-01-12T13:29:06.8557932Z Volumes:
2021-01-12T13:29:06.8558132Z   datadir:
2021-01-12T13:29:06.8558492Z     Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
2021-01-12T13:29:06.8559188Z     ClaimName:  datadir-local-mongo-1
2021-01-12T13:29:06.8559572Z     ReadOnly:   false
2021-01-12T13:29:06.8559866Z   scripts:
2021-01-12T13:29:06.8560143Z     Type:      ConfigMap (a volume populated by a ConfigMap)
2021-01-12T13:29:06.8561056Z     Name:      local-mongo-scripts
2021-01-12T13:29:06.8561501Z     Optional:  false
2021-01-12T13:29:06.8562035Z   local-mongo-token-slsg7:
2021-01-12T13:29:06.8562471Z     Type:        Secret (a volume populated by a Secret)
2021-01-12T13:29:06.8563073Z     SecretName:  local-mongo-token-slsg7
2021-01-12T13:29:06.8563590Z     Optional:    false
2021-01-12T13:29:06.8563875Z QoS Class:       Guaranteed
2021-01-12T13:29:06.8564545Z Node-Selectors:  <none>
2021-01-12T13:29:06.8565162Z Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
2021-01-12T13:29:06.8565701Z                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
2021-01-12T13:29:06.8566097Z Events:
2021-01-12T13:29:06.8566442Z   Type     Reason            Age    From               Message
2021-01-12T13:29:06.8567087Z   ----     ------            ----   ----               -------
2021-01-12T13:29:06.8568182Z   Warning  FailedScheduling  6m1s   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-01-12T13:29:06.8569335Z   Warning  FailedScheduling  5m56s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-01-12T13:29:06.8570728Z   Warning  FailedScheduling  5m52s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-01-12T13:29:06.8571727Z   Normal   Scheduled         5m43s  default-scheduler  Successfully assigned default/local-mongo-1 to fv-az50-322
2021-01-12T13:29:06.8572418Z   Warning  Unhealthy         5m27s  kubelet            Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:06.8573064Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.8574321Z 2021-01-12T13:23:39.632+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:06.8575156Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:06.8575484Z @(connect):2:6
2021-01-12T13:29:06.8576153Z 2021-01-12T13:23:39.633+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:06.8576945Z 2021-01-12T13:23:39.633+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:06.8577507Z   Warning  Unhealthy  5m17s  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:06.8578123Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.8579373Z 2021-01-12T13:23:49.837+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:06.8580167Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:06.8580515Z @(connect):2:6
2021-01-12T13:29:06.8581172Z 2021-01-12T13:23:49.931+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:06.8581953Z 2021-01-12T13:23:49.931+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:06.8582771Z   Warning  Unhealthy  5m7s  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:06.8583335Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.8584489Z 2021-01-12T13:23:59.331+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:06.8585383Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:06.8585817Z @(connect):2:6
2021-01-12T13:29:06.8586411Z 2021-01-12T13:23:59.431+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:06.8587109Z 2021-01-12T13:23:59.431+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:06.8587635Z   Normal   Pulled     93s (x2 over 5m38s)  kubelet  Container image "docker.io/bitnami/mongodb:4.2" already present on machine
2021-01-12T13:29:06.8588181Z   Normal   Created    93s (x2 over 5m38s)  kubelet  Created container mongodb
2021-01-12T13:29:06.8588619Z   Normal   Started    93s (x2 over 5m38s)  kubelet  Started container mongodb
2021-01-12T13:29:06.8589097Z   Warning  Unhealthy  87s                  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:06.8589660Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.8590763Z 2021-01-12T13:27:39.634+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:06.8591584Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:06.8591909Z @(connect):2:6
2021-01-12T13:29:06.8592521Z 2021-01-12T13:27:39.731+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:06.8593674Z 2021-01-12T13:27:39.731+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:06.8594380Z   Warning  Unhealthy  77s  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:06.8594968Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:06.8596315Z 2021-01-12T13:27:49.338+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:06.8597099Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:06.8600707Z @(connect):2:6
2021-01-12T13:29:06.8601411Z 2021-01-12T13:27:49.433+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:06.8602201Z 2021-01-12T13:27:49.433+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:06.8637713Z previous logs of pod/local-mongo-1
2021-01-12T13:29:07.0317397Z Advertised Hostname: 51.132.1.251
2021-01-12T13:29:07.0319224Z Pod name doesn't match initial primary pod name, configuring node as a secondary
2021-01-12T13:29:07.0320383Z mongodb 13:23:30.93 
2021-01-12T13:29:07.0324083Z mongodb 13:23:30.94 Welcome to the Bitnami mongodb container
2021-01-12T13:29:07.0325186Z mongodb 13:23:30.94 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-12T13:29:07.0326979Z mongodb 13:23:30.94 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-12T13:29:07.0328253Z mongodb 13:23:30.94 
2021-01-12T13:29:07.0329090Z mongodb 13:23:30.95 INFO  ==> ** Starting MongoDB setup **
2021-01-12T13:29:07.0330148Z mongodb 13:23:31.13 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-12T13:29:07.0331344Z mongodb 13:23:31.15 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-12T13:29:07.0332541Z mongodb 13:23:31.25 INFO  ==> Initializing MongoDB...
2021-01-12T13:29:07.0333422Z mongodb 13:23:31.43 INFO  ==> Deploying MongoDB from scratch...
2021-01-12T13:29:07.0334319Z mongodb 13:23:31.53 DEBUG ==> Starting MongoDB in background...
2021-01-12T13:29:07.0334894Z about to fork child process, waiting until server is ready for connections.
2021-01-12T13:29:07.0335267Z forked process: 43
2021-01-12T13:29:07.0335620Z child process started successfully, parent exiting
2021-01-12T13:29:07.0336337Z mongodb 13:23:45.55 INFO  ==> Creating users...
2021-01-12T13:29:07.0337152Z mongodb 13:23:45.55 INFO  ==> Users created
2021-01-12T13:29:07.0338133Z mongodb 13:23:45.73 INFO  ==> Configuring MongoDB replica set...
2021-01-12T13:29:07.0338972Z mongodb 13:23:45.74 INFO  ==> Stopping MongoDB...
2021-01-12T13:29:07.0340094Z mongodb 13:23:46.84 DEBUG ==> Starting MongoDB in background...
2021-01-12T13:29:07.0340699Z about to fork child process, waiting until server is ready for connections.
2021-01-12T13:29:07.0341092Z forked process: 127
2021-01-12T13:29:07.0341458Z child process started successfully, parent exiting
2021-01-12T13:29:07.0342339Z mongodb 13:24:05.64 DEBUG ==> Waiting for primary node...
2021-01-12T13:29:07.0343298Z mongodb 13:24:05.64 DEBUG ==> Waiting for primary node...
2021-01-12T13:29:07.0344291Z mongodb 13:24:05.64 INFO  ==> Trying to connect to MongoDB server local-mongo-0.local-mongo-headless.default.svc.cluster.local...
2021-01-12T13:29:07.0346863Z mongodb 13:24:05.73 INFO  ==> Found MongoDB server listening at local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017 !
2021-01-12T13:29:07.0348748Z mongodb 13:27:32.23 ERROR ==> Node local-mongo-0.local-mongo-headless.default.svc.cluster.local did not become available
2021-01-12T13:29:07.0349823Z mongodb 13:27:32.24 INFO  ==> Stopping MongoDB...
2021-01-12T13:29:07.0361517Z logs of pod/local-mongo-1
2021-01-12T13:29:07.2006543Z Advertised Hostname: 51.132.1.251
2021-01-12T13:29:07.2007881Z Pod name doesn't match initial primary pod name, configuring node as a secondary
2021-01-12T13:29:07.2009182Z mongodb 13:27:36.03 
2021-01-12T13:29:07.2010915Z mongodb 13:27:36.03 Welcome to the Bitnami mongodb container
2021-01-12T13:29:07.2012275Z mongodb 13:27:36.03 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-12T13:29:07.2013640Z mongodb 13:27:36.04 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-12T13:29:07.2014688Z mongodb 13:27:36.13 
2021-01-12T13:29:07.2015653Z mongodb 13:27:36.13 INFO  ==> ** Starting MongoDB setup **
2021-01-12T13:29:07.2016775Z mongodb 13:27:36.23 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-12T13:29:07.2018119Z mongodb 13:27:36.23 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-12T13:29:07.2019340Z mongodb 13:27:36.43 INFO  ==> Initializing MongoDB...
2021-01-12T13:29:07.2021089Z mongodb 13:27:36.54 INFO  ==> Deploying MongoDB with persisted data...
2021-01-12T13:29:07.2022940Z mongodb 13:27:36.73 DEBUG ==> Skipping loading custom scripts on non-primary nodes...
2021-01-12T13:29:07.2024203Z mongodb 13:27:36.83 INFO  ==> ** MongoDB setup finished! **
2021-01-12T13:29:07.2029178Z 
2021-01-12T13:29:07.2033156Z mongodb 13:27:36.94 INFO  ==> ** Starting MongoDB **
2021-01-12T13:29:07.2034028Z 2021-01-12T13:27:37.232+0000 I  CONTROL  [main] ***** SERVER RESTARTED *****
2021-01-12T13:29:07.2034926Z 2021-01-12T13:27:37.236+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2021-01-12T13:29:07.2035826Z 2021-01-12T13:27:37.237+0000 W  ASIO     [main] No TransportLayer configured during NetworkInterface startup
2021-01-12T13:29:07.2036768Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/bitnami/mongodb/data/db 64-bit host=local-mongo-1
2021-01-12T13:29:07.2037621Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] db version v4.2.11
2021-01-12T13:29:07.2038667Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] git version: ea38428f0c6742c7c2c7f677e73d79e17a2aab96
2021-01-12T13:29:07.2039630Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019
2021-01-12T13:29:07.2040539Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] allocator: tcmalloc
2021-01-12T13:29:07.2041381Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] modules: none
2021-01-12T13:29:07.2042300Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] build environment:
2021-01-12T13:29:07.2043136Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten]     distmod: debian10
2021-01-12T13:29:07.2043957Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten]     distarch: x86_64
2021-01-12T13:29:07.2044923Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten]     target_arch: x86_64
2021-01-12T13:29:07.2046177Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] 400 MB of memory available to the process out of 6954 MB total system memory
2021-01-12T13:29:07.2049592Z 2021-01-12T13:27:37.238+0000 I  CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIp: "*", ipv6: false, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, replication: { enableMajorityReadConcern: true, replSetName: "rs1" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/bitnami/mongodb/data/db", directoryPerDB: false, journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: "/opt/bitnami/mongodb/logs/mongodb.log", quiet: false, verbosity: 0 } }
2021-01-12T13:29:07.2052511Z 2021-01-12T13:27:37.239+0000 I  STORAGE  [initandlisten] Detected data files in /bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2021-01-12T13:29:07.2053642Z 2021-01-12T13:27:37.239+0000 I  STORAGE  [initandlisten] 
2021-01-12T13:29:07.2054643Z 2021-01-12T13:27:37.239+0000 I  STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2021-01-12T13:29:07.2056277Z 2021-01-12T13:27:37.239+0000 I  STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2021-01-12T13:29:07.2058227Z 2021-01-12T13:27:37.239+0000 I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2021-01-12T13:29:07.2059951Z 2021-01-12T13:27:45.041+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458065:41279][1:0x7fd5be2ead40], txn-recover: Recovering log 2 through 3
2021-01-12T13:29:07.2061168Z 2021-01-12T13:27:45.544+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458065:544291][1:0x7fd5be2ead40], txn-recover: Recovering log 3 through 3
2021-01-12T13:29:07.2062403Z 2021-01-12T13:27:46.438+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458066:438369][1:0x7fd5be2ead40], txn-recover: Main recovery loop: starting at 2/25472 to 3/256
2021-01-12T13:29:07.2063602Z 2021-01-12T13:27:47.531+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458067:531769][1:0x7fd5be2ead40], txn-recover: Recovering log 2 through 3
2021-01-12T13:29:07.2064730Z 2021-01-12T13:27:48.040+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458068:40418][1:0x7fd5be2ead40], txn-recover: Recovering log 3 through 3
2021-01-12T13:29:07.2066235Z 2021-01-12T13:27:48.838+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458068:838948][1:0x7fd5be2ead40], txn-recover: Set global recovery timestamp: (0, 0)
2021-01-12T13:29:07.2067382Z 2021-01-12T13:27:49.039+0000 I  RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2021-01-12T13:29:07.2068470Z 2021-01-12T13:27:49.136+0000 I  STORAGE  [initandlisten] No table logging settings modifications are required for existing WiredTiger tables. Logging enabled? 0
2021-01-12T13:29:07.2069510Z 2021-01-12T13:27:49.139+0000 I  STORAGE  [initandlisten] Timestamp monitor starting
2021-01-12T13:29:07.2070236Z 2021-01-12T13:27:49.140+0000 I  CONTROL  [initandlisten] 
2021-01-12T13:29:07.2071105Z 2021-01-12T13:27:49.140+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-01-12T13:29:07.2071994Z 2021-01-12T13:27:49.141+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-01-12T13:29:07.2073069Z 2021-01-12T13:27:49.141+0000 I  CONTROL  [initandlisten] 
2021-01-12T13:29:07.2074249Z 2021-01-12T13:27:49.141+0000 I  CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 27673 processes, 65536 files. Number of processes should be at least 32768 : 0.5 times number of files.
2021-01-12T13:29:07.2075453Z 2021-01-12T13:27:49.141+0000 I  CONTROL  [initandlisten] 
2021-01-12T13:29:07.2076382Z 2021-01-12T13:27:49.335+0000 I  SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2021-01-12T13:29:07.2077367Z 2021-01-12T13:27:49.439+0000 I  STORAGE  [initandlisten] Flow Control is enabled on this deployment.
2021-01-12T13:29:07.2078365Z 2021-01-12T13:27:49.439+0000 I  SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2021-01-12T13:29:07.2079343Z 2021-01-12T13:27:49.439+0000 I  SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2021-01-12T13:29:07.2080334Z 2021-01-12T13:27:49.443+0000 I  SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2021-01-12T13:29:07.2081366Z 2021-01-12T13:27:49.443+0000 I  FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/bitnami/mongodb/data/db/diagnostic.data'
2021-01-12T13:29:07.2082382Z 2021-01-12T13:27:49.445+0000 I  SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2021-01-12T13:29:07.2083376Z 2021-01-12T13:27:49.445+0000 I  SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2021-01-12T13:29:07.2084297Z 2021-01-12T13:27:49.532+0000 I  REPL     [initandlisten] Did not find local initialized voted for document at startup.
2021-01-12T13:29:07.2085144Z 2021-01-12T13:27:49.534+0000 I  REPL     [initandlisten] Rollback ID is 1
2021-01-12T13:29:07.2086219Z 2021-01-12T13:27:49.534+0000 I  REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2021-01-12T13:29:07.2087624Z 2021-01-12T13:27:49.535+0000 I  CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-01-12T13:29:07.2088875Z 2021-01-12T13:27:49.536+0000 I  SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2021-01-12T13:29:07.2090031Z 2021-01-12T13:27:49.536+0000 I  CONTROL  [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2021-01-12T13:29:07.2091058Z 2021-01-12T13:27:49.536+0000 I  NETWORK  [listener] Listening on /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2021-01-12T13:29:07.2091892Z 2021-01-12T13:27:49.536+0000 I  NETWORK  [listener] Listening on 0.0.0.0
2021-01-12T13:29:07.2092693Z 2021-01-12T13:27:49.536+0000 I  NETWORK  [listener] waiting for connections on port 27017
2021-01-12T13:29:07.2093591Z 2021-01-12T13:27:49.752+0000 I  NETWORK  [listener] connection accepted from 10.1.72.199:42880 #1 (1 connection now open)
2021-01-12T13:29:07.2095338Z 2021-01-12T13:27:49.752+0000 I  NETWORK  [conn1] received client metadata from 10.1.72.199:42880 conn1: { driver: { name: "mongo-csharp-driver", version: "2.10.4.0" }, os: { type: "Linux", name: "Linux 5.4.0-1032-azure #33~18.04.1-Ubuntu SMP Tue Nov 17 11:40:52 UTC 2020", architecture: "x86_64", version: "5.4.0-1032-azure" }, platform: ".NET Core 3.1.10" }
2021-01-12T13:29:07.2096852Z 2021-01-12T13:27:50.000+0000 I  SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
2021-01-12T13:29:07.2098131Z 2021-01-12T13:27:58.641+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:38952 #2 (2 connections now open)
2021-01-12T13:29:07.2099822Z 2021-01-12T13:27:58.641+0000 I  NETWORK  [conn2] received client metadata from 127.0.0.1:38952 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2101540Z 2021-01-12T13:27:58.740+0000 I  NETWORK  [conn2] end connection 127.0.0.1:38952 (1 connection now open)
2021-01-12T13:29:07.2102565Z 2021-01-12T13:28:08.643+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39030 #3 (2 connections now open)
2021-01-12T13:29:07.2104238Z 2021-01-12T13:28:08.643+0000 I  NETWORK  [conn3] received client metadata from 127.0.0.1:39030 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2138004Z 2021-01-12T13:28:08.737+0000 I  NETWORK  [conn3] end connection 127.0.0.1:39030 (1 connection now open)
2021-01-12T13:29:07.2139130Z 2021-01-12T13:28:11.443+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39060 #4 (2 connections now open)
2021-01-12T13:29:07.2140887Z 2021-01-12T13:28:11.531+0000 I  NETWORK  [conn4] received client metadata from 127.0.0.1:39060 conn4: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2142461Z 2021-01-12T13:28:11.537+0000 I  NETWORK  [conn4] end connection 127.0.0.1:39060 (1 connection now open)
2021-01-12T13:29:07.2143459Z 2021-01-12T13:28:18.638+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39122 #5 (2 connections now open)
2021-01-12T13:29:07.2145494Z 2021-01-12T13:28:18.638+0000 I  NETWORK  [conn5] received client metadata from 127.0.0.1:39122 conn5: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2147032Z 2021-01-12T13:28:18.744+0000 I  NETWORK  [conn5] end connection 127.0.0.1:39122 (1 connection now open)
2021-01-12T13:29:07.2148143Z 2021-01-12T13:28:21.438+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39168 #6 (2 connections now open)
2021-01-12T13:29:07.2150057Z 2021-01-12T13:28:21.438+0000 I  NETWORK  [conn6] received client metadata from 127.0.0.1:39168 conn6: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2151539Z 2021-01-12T13:28:21.538+0000 I  NETWORK  [conn6] end connection 127.0.0.1:39168 (1 connection now open)
2021-01-12T13:29:07.2152554Z 2021-01-12T13:28:28.640+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39284 #7 (2 connections now open)
2021-01-12T13:29:07.2154334Z 2021-01-12T13:28:28.640+0000 I  NETWORK  [conn7] received client metadata from 127.0.0.1:39284 conn7: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2156055Z 2021-01-12T13:28:28.734+0000 I  NETWORK  [conn7] end connection 127.0.0.1:39284 (1 connection now open)
2021-01-12T13:29:07.2157186Z 2021-01-12T13:28:31.532+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39334 #8 (2 connections now open)
2021-01-12T13:29:07.2158919Z 2021-01-12T13:28:31.533+0000 I  NETWORK  [conn8] received client metadata from 127.0.0.1:39334 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2160534Z 2021-01-12T13:28:31.550+0000 I  NETWORK  [conn8] end connection 127.0.0.1:39334 (1 connection now open)
2021-01-12T13:29:07.2161614Z 2021-01-12T13:28:33.260+0000 I  NETWORK  [conn1] end connection 10.1.72.199:42880 (0 connections now open)
2021-01-12T13:29:07.2162720Z 2021-01-12T13:28:38.639+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39416 #9 (1 connection now open)
2021-01-12T13:29:07.2164294Z 2021-01-12T13:28:38.639+0000 I  NETWORK  [conn9] received client metadata from 127.0.0.1:39416 conn9: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2165955Z 2021-01-12T13:28:38.736+0000 I  NETWORK  [conn9] end connection 127.0.0.1:39416 (0 connections now open)
2021-01-12T13:29:07.2166951Z 2021-01-12T13:28:41.440+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39442 #10 (1 connection now open)
2021-01-12T13:29:07.2168848Z 2021-01-12T13:28:41.440+0000 I  NETWORK  [conn10] received client metadata from 127.0.0.1:39442 conn10: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2170200Z 2021-01-12T13:28:41.537+0000 I  NETWORK  [conn10] end connection 127.0.0.1:39442 (0 connections now open)
2021-01-12T13:29:07.2171141Z 2021-01-12T13:28:48.732+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39504 #11 (1 connection now open)
2021-01-12T13:29:07.2172714Z 2021-01-12T13:28:48.732+0000 I  NETWORK  [conn11] received client metadata from 127.0.0.1:39504 conn11: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2174258Z 2021-01-12T13:28:48.737+0000 I  NETWORK  [conn11] end connection 127.0.0.1:39504 (0 connections now open)
2021-01-12T13:29:07.2175372Z 2021-01-12T13:28:51.533+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39524 #12 (1 connection now open)
2021-01-12T13:29:07.2177037Z 2021-01-12T13:28:51.536+0000 I  NETWORK  [conn12] received client metadata from 127.0.0.1:39524 conn12: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2178589Z 2021-01-12T13:28:51.545+0000 I  NETWORK  [conn12] end connection 127.0.0.1:39524 (0 connections now open)
2021-01-12T13:29:07.2179554Z 2021-01-12T13:28:58.636+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39586 #13 (1 connection now open)
2021-01-12T13:29:07.2181309Z 2021-01-12T13:28:58.636+0000 I  NETWORK  [conn13] received client metadata from 127.0.0.1:39586 conn13: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2182617Z 2021-01-12T13:28:58.733+0000 I  NETWORK  [conn13] end connection 127.0.0.1:39586 (0 connections now open)
2021-01-12T13:29:07.2183474Z 2021-01-12T13:29:01.535+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39614 #14 (1 connection now open)
2021-01-12T13:29:07.2185249Z 2021-01-12T13:29:01.536+0000 I  NETWORK  [conn14] received client metadata from 127.0.0.1:39614 conn14: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:07.2186868Z 2021-01-12T13:29:01.631+0000 I  NETWORK  [conn14] end connection 127.0.0.1:39614 (0 connections now open)
2021-01-12T13:29:07.7466232Z describing pod/local-mongo-2
2021-01-12T13:29:07.9281787Z Name:         local-mongo-2
2021-01-12T13:29:07.9282792Z Namespace:    default
2021-01-12T13:29:07.9283260Z Priority:     0
2021-01-12T13:29:07.9283986Z Node:         fv-az50-322/10.1.0.4
2021-01-12T13:29:07.9284606Z Start Time:   Tue, 12 Jan 2021 13:24:11 +0000
2021-01-12T13:29:07.9285134Z Labels:       app.kubernetes.io/component=mongodb
2021-01-12T13:29:07.9285889Z               app.kubernetes.io/instance=alloy-test
2021-01-12T13:29:07.9286747Z               app.kubernetes.io/managed-by=Helm
2021-01-12T13:29:07.9287326Z               app.kubernetes.io/name=mongodb
2021-01-12T13:29:07.9288096Z               controller-revision-hash=local-mongo-675554d77f
2021-01-12T13:29:07.9288980Z               helm.sh/chart=mongodb-10.3.1
2021-01-12T13:29:07.9289855Z               statefulset.kubernetes.io/pod-name=local-mongo-2
2021-01-12T13:29:07.9290508Z Annotations:  cni.projectcalico.org/podIP: 10.1.72.202/32
2021-01-12T13:29:07.9291057Z               cni.projectcalico.org/podIPs: 10.1.72.202/32
2021-01-12T13:29:07.9291527Z Status:       Running
2021-01-12T13:29:07.9291941Z IP:           10.1.72.202
2021-01-12T13:29:07.9292342Z IPs:
2021-01-12T13:29:07.9292643Z   IP:           10.1.72.202
2021-01-12T13:29:07.9293324Z Controlled By:  StatefulSet/local-mongo
2021-01-12T13:29:07.9294770Z Containers:
2021-01-12T13:29:07.9295284Z   mongodb:
2021-01-12T13:29:07.9295852Z     Container ID:  containerd://778ecac9860d9a72d8d77c3d508c25aea14944e35aac8c9e312ca2018ae63e8c
2021-01-12T13:29:07.9296503Z     Image:         docker.io/bitnami/mongodb:4.2
2021-01-12T13:29:07.9297162Z     Image ID:      docker.io/bitnami/mongodb@sha256:d18fa4c7f5ab80cbd288e46c44b89682cc7139ca1d2a3f35cafcc4c9346a87e5
2021-01-12T13:29:07.9298166Z     Port:          27017/TCP
2021-01-12T13:29:07.9298648Z     Host Port:     0/TCP
2021-01-12T13:29:07.9299257Z     Command:
2021-01-12T13:29:07.9299672Z       /scripts/setup.sh
2021-01-12T13:29:07.9300419Z     State:          Running
2021-01-12T13:29:07.9300905Z       Started:      Tue, 12 Jan 2021 13:28:20 +0000
2021-01-12T13:29:07.9301414Z     Last State:     Terminated
2021-01-12T13:29:07.9301898Z       Reason:       Error
2021-01-12T13:29:07.9302327Z       Exit Code:    1
2021-01-12T13:29:07.9302816Z       Started:      Tue, 12 Jan 2021 13:24:13 +0000
2021-01-12T13:29:07.9303399Z       Finished:     Tue, 12 Jan 2021 13:28:20 +0000
2021-01-12T13:29:07.9303866Z     Ready:          True
2021-01-12T13:29:07.9304301Z     Restart Count:  1
2021-01-12T13:29:07.9304674Z     Limits:
2021-01-12T13:29:07.9306056Z       cpu:     100m
2021-01-12T13:29:07.9306577Z       memory:  400Mi
2021-01-12T13:29:07.9307006Z     Requests:
2021-01-12T13:29:07.9307404Z       cpu:      100m
2021-01-12T13:29:07.9307842Z       memory:   400Mi
2021-01-12T13:29:07.9308965Z     Liveness:   exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
2021-01-12T13:29:07.9310142Z     Readiness:  exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
2021-01-12T13:29:07.9310849Z     Environment:
2021-01-12T13:29:07.9311471Z       BITNAMI_DEBUG:                    true
2021-01-12T13:29:07.9312289Z       MY_POD_NAME:                      local-mongo-2 (v1:metadata.name)
2021-01-12T13:29:07.9313023Z       MY_POD_NAMESPACE:                 default (v1:metadata.namespace)
2021-01-12T13:29:07.9313883Z       K8S_SERVICE_NAME:                 local-mongo-headless
2021-01-12T13:29:07.9314914Z       MONGODB_INITIAL_PRIMARY_HOST:     local-mongo-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
2021-01-12T13:29:07.9315641Z       MONGODB_REPLICA_SET_NAME:         rs1
2021-01-12T13:29:07.9316153Z       ALLOW_EMPTY_PASSWORD:             yes
2021-01-12T13:29:07.9316815Z       MONGODB_SYSTEM_LOG_VERBOSITY:     0
2021-01-12T13:29:07.9317331Z       MONGODB_DISABLE_SYSTEM_LOG:       no
2021-01-12T13:29:07.9318154Z       MONGODB_ENABLE_IPV6:              no
2021-01-12T13:29:07.9318677Z       MONGODB_ENABLE_DIRECTORY_PER_DB:  no
2021-01-12T13:29:07.9319156Z     Mounts:
2021-01-12T13:29:07.9319610Z       /bitnami/mongodb from datadir (rw)
2021-01-12T13:29:07.9320107Z       /scripts/setup.sh from scripts (rw,path="setup.sh")
2021-01-12T13:29:07.9320919Z       /var/run/secrets/kubernetes.io/serviceaccount from local-mongo-token-slsg7 (ro)
2021-01-12T13:29:07.9321704Z Conditions:
2021-01-12T13:29:07.9322118Z   Type              Status
2021-01-12T13:29:07.9322557Z   Initialized       True 
2021-01-12T13:29:07.9323139Z   Ready             True 
2021-01-12T13:29:07.9323572Z   ContainersReady   True 
2021-01-12T13:29:07.9324197Z   PodScheduled      True 
2021-01-12T13:29:07.9325031Z Volumes:
2021-01-12T13:29:07.9325356Z   datadir:
2021-01-12T13:29:07.9326590Z     Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
2021-01-12T13:29:07.9327501Z     ClaimName:  datadir-local-mongo-2
2021-01-12T13:29:07.9327980Z     ReadOnly:   false
2021-01-12T13:29:07.9328433Z   scripts:
2021-01-12T13:29:07.9328795Z     Type:      ConfigMap (a volume populated by a ConfigMap)
2021-01-12T13:29:07.9329551Z     Name:      local-mongo-scripts
2021-01-12T13:29:07.9330123Z     Optional:  false
2021-01-12T13:29:07.9330815Z   local-mongo-token-slsg7:
2021-01-12T13:29:07.9331439Z     Type:        Secret (a volume populated by a Secret)
2021-01-12T13:29:07.9332226Z     SecretName:  local-mongo-token-slsg7
2021-01-12T13:29:07.9332797Z     Optional:    false
2021-01-12T13:29:07.9333259Z QoS Class:       Guaranteed
2021-01-12T13:29:07.9333855Z Node-Selectors:  <none>
2021-01-12T13:29:07.9334749Z Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
2021-01-12T13:29:07.9335481Z                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
2021-01-12T13:29:07.9336204Z Events:
2021-01-12T13:29:07.9336813Z   Type     Reason            Age    From               Message
2021-01-12T13:29:07.9337819Z   ----     ------            ----   ----               -------
2021-01-12T13:29:07.9338940Z   Warning  FailedScheduling  4m59s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-01-12T13:29:07.9340152Z   Warning  FailedScheduling  4m59s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-01-12T13:29:07.9341480Z   Normal   Scheduled         4m56s  default-scheduler  Successfully assigned default/local-mongo-2 to fv-az50-322
2021-01-12T13:29:07.9342358Z   Warning  Unhealthy         4m45s  kubelet            Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:07.9358713Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:07.9360239Z 2021-01-12T13:24:22.532+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:07.9361288Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:07.9361778Z @(connect):2:6
2021-01-12T13:29:07.9362641Z 2021-01-12T13:24:22.631+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:07.9363660Z 2021-01-12T13:24:22.631+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:07.9364427Z   Warning  Unhealthy  4m36s  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:07.9365171Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:07.9366558Z 2021-01-12T13:24:31.538+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:07.9367568Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:07.9368051Z @(connect):2:6
2021-01-12T13:29:07.9368935Z 2021-01-12T13:24:31.542+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:07.9369938Z 2021-01-12T13:24:31.542+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:07.9386391Z   Warning  Unhealthy  4m25s  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:07.9387120Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:07.9388630Z 2021-01-12T13:24:42.236+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:07.9389484Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:07.9389848Z @(connect):2:6
2021-01-12T13:29:07.9390544Z 2021-01-12T13:24:42.333+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:07.9391395Z 2021-01-12T13:24:42.333+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:07.9391992Z   Warning  Unhealthy  48s  kubelet  Liveness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:07.9392625Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:07.9393729Z 2021-01-12T13:28:19.633+0000 I  NETWORK  [js] DBClientConnection failed to receive message from 127.0.0.1:27017 - HostUnreachable: Connection reset by peer
2021-01-12T13:29:07.9395119Z 2021-01-12T13:28:19.633+0000 E  QUERY    [js] Error: network error while attempting to run command 'isMaster' on host '127.0.0.1:27017'  :
2021-01-12T13:29:07.9395804Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:07.9396173Z @(connect):2:6
2021-01-12T13:29:07.9396874Z 2021-01-12T13:28:19.635+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:07.9397866Z 2021-01-12T13:28:19.635+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:07.9398514Z   Normal   Pulled     47s (x2 over 4m54s)  kubelet  Container image "docker.io/bitnami/mongodb:4.2" already present on machine
2021-01-12T13:29:07.9399246Z   Normal   Created    47s (x2 over 4m54s)  kubelet  Created container mongodb
2021-01-12T13:29:07.9399853Z   Normal   Started    47s (x2 over 4m54s)  kubelet  Started container mongodb
2021-01-12T13:29:07.9400421Z   Warning  Unhealthy  35s                  kubelet  Readiness probe failed: MongoDB shell version v4.2.11
2021-01-12T13:29:07.9401052Z connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2021-01-12T13:29:07.9402497Z 2021-01-12T13:28:32.237+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
2021-01-12T13:29:07.9403361Z connect@src/mongo/shell/mongo.js:353:17
2021-01-12T13:29:07.9403711Z @(connect):2:6
2021-01-12T13:29:07.9404440Z 2021-01-12T13:28:32.331+0000 F  -        [main] exception: connect failed
2021-01-12T13:29:07.9405791Z 2021-01-12T13:28:32.332+0000 E  -        [main] exiting with code 1
2021-01-12T13:29:07.9410495Z previous logs of pod/local-mongo-2
2021-01-12T13:29:08.1282778Z Advertised Hostname: 51.132.1.251
2021-01-12T13:29:08.1284419Z Pod name doesn't match initial primary pod name, configuring node as a secondary
2021-01-12T13:29:08.1285343Z mongodb 13:24:15.83 
2021-01-12T13:29:08.1286245Z mongodb 13:24:15.83 Welcome to the Bitnami mongodb container
2021-01-12T13:29:08.1287328Z mongodb 13:24:15.84 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-12T13:29:08.1288471Z mongodb 13:24:15.84 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-12T13:29:08.1289396Z mongodb 13:24:15.84 
2021-01-12T13:29:08.1290278Z mongodb 13:24:15.93 INFO  ==> ** Starting MongoDB setup **
2021-01-12T13:29:08.1291325Z mongodb 13:24:16.03 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-12T13:29:08.1292525Z mongodb 13:24:16.03 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-12T13:29:08.1293563Z mongodb 13:24:16.23 INFO  ==> Initializing MongoDB...
2021-01-12T13:29:08.1294533Z mongodb 13:24:16.34 INFO  ==> Deploying MongoDB from scratch...
2021-01-12T13:29:08.1295501Z mongodb 13:24:16.43 DEBUG ==> Starting MongoDB in background...
2021-01-12T13:29:08.1296182Z about to fork child process, waiting until server is ready for connections.
2021-01-12T13:29:08.1296639Z forked process: 43
2021-01-12T13:29:08.1297042Z child process started successfully, parent exiting
2021-01-12T13:29:08.1297747Z mongodb 13:24:30.44 INFO  ==> Creating users...
2021-01-12T13:29:08.1298651Z mongodb 13:24:30.45 INFO  ==> Users created
2021-01-12T13:29:08.1299600Z mongodb 13:24:30.63 INFO  ==> Configuring MongoDB replica set...
2021-01-12T13:29:08.1300525Z mongodb 13:24:30.73 INFO  ==> Stopping MongoDB...
2021-01-12T13:29:08.1301439Z mongodb 13:24:31.74 DEBUG ==> Starting MongoDB in background...
2021-01-12T13:29:08.1302073Z about to fork child process, waiting until server is ready for connections.
2021-01-12T13:29:08.1302507Z forked process: 133
2021-01-12T13:29:08.1302843Z child process started successfully, parent exiting
2021-01-12T13:29:08.1303667Z mongodb 13:24:50.14 DEBUG ==> Waiting for primary node...
2021-01-12T13:29:08.1306175Z mongodb 13:24:50.14 DEBUG ==> Waiting for primary node...
2021-01-12T13:29:08.1307797Z mongodb 13:24:50.14 INFO  ==> Trying to connect to MongoDB server local-mongo-0.local-mongo-headless.default.svc.cluster.local...
2021-01-12T13:29:08.1309300Z mongodb 13:24:50.23 INFO  ==> Found MongoDB server listening at local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017 !
2021-01-12T13:29:08.1310545Z mongodb 13:28:18.93 ERROR ==> Node local-mongo-0.local-mongo-headless.default.svc.cluster.local did not become available
2021-01-12T13:29:08.1311570Z mongodb 13:28:19.13 INFO  ==> Stopping MongoDB...
2021-01-12T13:29:08.1386961Z logs of pod/local-mongo-2
2021-01-12T13:29:08.3075344Z Advertised Hostname: 51.132.1.251
2021-01-12T13:29:08.3077533Z Pod name doesn't match initial primary pod name, configuring node as a secondary
2021-01-12T13:29:08.3080815Z mongodb 13:28:21.03 
2021-01-12T13:29:08.3083464Z mongodb 13:28:21.04 Welcome to the Bitnami mongodb container
2021-01-12T13:29:08.3084939Z mongodb 13:28:21.04 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
2021-01-12T13:29:08.3086236Z mongodb 13:28:21.13 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
2021-01-12T13:29:08.3087165Z mongodb 13:28:21.14 
2021-01-12T13:29:08.3087984Z mongodb 13:28:21.15 INFO  ==> ** Starting MongoDB setup **
2021-01-12T13:29:08.3088776Z mongodb 13:28:21.33 INFO  ==> Validating settings in MONGODB_* env vars...
2021-01-12T13:29:08.3089826Z mongodb 13:28:21.33 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-01-12T13:29:08.3090769Z mongodb 13:28:21.44 INFO  ==> Initializing MongoDB...
2021-01-12T13:29:08.3091841Z mongodb 13:28:21.63 INFO  ==> Deploying MongoDB with persisted data...
2021-01-12T13:29:08.3097397Z mongodb 13:28:21.83 DEBUG ==> Skipping loading custom scripts on non-primary nodes...
2021-01-12T13:29:08.3098369Z mongodb 13:28:21.83 INFO  ==> ** MongoDB setup finished! **
2021-01-12T13:29:08.3098790Z 
2021-01-12T13:29:08.3099453Z mongodb 13:28:22.03 INFO  ==> ** Starting MongoDB **
2021-01-12T13:29:08.3100263Z 2021-01-12T13:28:22.238+0000 I  CONTROL  [main] ***** SERVER RESTARTED *****
2021-01-12T13:29:08.3101213Z 2021-01-12T13:28:22.239+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2021-01-12T13:29:08.3102699Z 2021-01-12T13:28:22.332+0000 W  ASIO     [main] No TransportLayer configured during NetworkInterface startup
2021-01-12T13:29:08.3103844Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/bitnami/mongodb/data/db 64-bit host=local-mongo-2
2021-01-12T13:29:08.3104814Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] db version v4.2.11
2021-01-12T13:29:08.3106343Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] git version: ea38428f0c6742c7c2c7f677e73d79e17a2aab96
2021-01-12T13:29:08.3107888Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019
2021-01-12T13:29:08.3108814Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] allocator: tcmalloc
2021-01-12T13:29:08.3109650Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] modules: none
2021-01-12T13:29:08.3110465Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] build environment:
2021-01-12T13:29:08.3111314Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten]     distmod: debian10
2021-01-12T13:29:08.3112410Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten]     distarch: x86_64
2021-01-12T13:29:08.3113399Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten]     target_arch: x86_64
2021-01-12T13:29:08.3114374Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] 400 MB of memory available to the process out of 6954 MB total system memory
2021-01-12T13:29:08.3118846Z 2021-01-12T13:28:22.333+0000 I  CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIp: "*", ipv6: false, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, replication: { enableMajorityReadConcern: true, replSetName: "rs1" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/bitnami/mongodb/data/db", directoryPerDB: false, journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: "/opt/bitnami/mongodb/logs/mongodb.log", quiet: false, verbosity: 0 } }
2021-01-12T13:29:08.3124028Z 2021-01-12T13:28:22.334+0000 I  STORAGE  [initandlisten] Detected data files in /bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2021-01-12T13:29:08.3125135Z 2021-01-12T13:28:22.334+0000 I  STORAGE  [initandlisten] 
2021-01-12T13:29:08.3126143Z 2021-01-12T13:28:22.334+0000 I  STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2021-01-12T13:29:08.3127256Z 2021-01-12T13:28:22.334+0000 I  STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2021-01-12T13:29:08.3129110Z 2021-01-12T13:28:22.334+0000 I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2021-01-12T13:29:08.3133259Z 2021-01-12T13:28:29.436+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458109:436484][1:0x7f118a4a4d40], txn-recover: Recovering log 2 through 3
2021-01-12T13:29:08.3134538Z 2021-01-12T13:28:29.937+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458109:937773][1:0x7f118a4a4d40], txn-recover: Recovering log 3 through 3
2021-01-12T13:29:08.3135856Z 2021-01-12T13:28:30.835+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458110:835500][1:0x7f118a4a4d40], txn-recover: Main recovery loop: starting at 2/25344 to 3/256
2021-01-12T13:29:08.3137129Z 2021-01-12T13:28:32.541+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458112:541026][1:0x7f118a4a4d40], txn-recover: Recovering log 2 through 3
2021-01-12T13:29:08.3138382Z 2021-01-12T13:28:33.136+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458113:136546][1:0x7f118a4a4d40], txn-recover: Recovering log 3 through 3
2021-01-12T13:29:08.3139667Z 2021-01-12T13:28:33.635+0000 I  STORAGE  [initandlisten] WiredTiger message [1610458113:634995][1:0x7f118a4a4d40], txn-recover: Set global recovery timestamp: (0, 0)
2021-01-12T13:29:08.3143017Z 2021-01-12T13:28:33.932+0000 I  RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2021-01-12T13:29:08.3176827Z 2021-01-12T13:28:33.935+0000 I  STORAGE  [initandlisten] No table logging settings modifications are required for existing WiredTiger tables. Logging enabled? 0
2021-01-12T13:29:08.3178020Z 2021-01-12T13:28:33.936+0000 I  STORAGE  [initandlisten] Timestamp monitor starting
2021-01-12T13:29:08.3178811Z 2021-01-12T13:28:33.940+0000 I  CONTROL  [initandlisten] 
2021-01-12T13:29:08.3179738Z 2021-01-12T13:28:33.940+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-01-12T13:29:08.3180884Z 2021-01-12T13:28:33.940+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-01-12T13:29:08.3181801Z 2021-01-12T13:28:33.940+0000 I  CONTROL  [initandlisten] 
2021-01-12T13:29:08.3182982Z 2021-01-12T13:28:33.941+0000 I  CONTROL  [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 27673 processes, 65536 files. Number of processes should be at least 32768 : 0.5 times number of files.
2021-01-12T13:29:08.3184056Z 2021-01-12T13:28:33.941+0000 I  CONTROL  [initandlisten] 
2021-01-12T13:29:08.3184997Z 2021-01-12T13:28:33.944+0000 I  SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2021-01-12T13:29:08.3186230Z 2021-01-12T13:28:34.031+0000 I  STORAGE  [initandlisten] Flow Control is enabled on this deployment.
2021-01-12T13:29:08.3187258Z 2021-01-12T13:28:34.032+0000 I  SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2021-01-12T13:29:08.3188365Z 2021-01-12T13:28:34.032+0000 I  SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2021-01-12T13:29:08.3189445Z 2021-01-12T13:28:34.034+0000 I  SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2021-01-12T13:29:08.3190563Z 2021-01-12T13:28:34.034+0000 I  FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/bitnami/mongodb/data/db/diagnostic.data'
2021-01-12T13:29:08.3191676Z 2021-01-12T13:28:34.036+0000 I  SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2021-01-12T13:29:08.3193015Z 2021-01-12T13:28:34.036+0000 I  SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2021-01-12T13:29:08.3194057Z 2021-01-12T13:28:34.037+0000 I  REPL     [initandlisten] Did not find local initialized voted for document at startup.
2021-01-12T13:29:08.3194916Z 2021-01-12T13:28:34.041+0000 I  REPL     [initandlisten] Rollback ID is 1
2021-01-12T13:29:08.3196062Z 2021-01-12T13:28:34.042+0000 I  REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2021-01-12T13:29:08.3197481Z 2021-01-12T13:28:34.043+0000 I  CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2021-01-12T13:29:08.3198745Z 2021-01-12T13:28:34.043+0000 I  SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2021-01-12T13:29:08.3199948Z 2021-01-12T13:28:34.044+0000 I  CONTROL  [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2021-01-12T13:29:08.3202512Z 2021-01-12T13:28:34.044+0000 I  NETWORK  [listener] Listening on /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2021-01-12T13:29:08.3203306Z 2021-01-12T13:28:34.131+0000 I  NETWORK  [listener] Listening on 0.0.0.0
2021-01-12T13:29:08.3204031Z 2021-01-12T13:28:34.131+0000 I  NETWORK  [listener] waiting for connections on port 27017
2021-01-12T13:29:08.3206450Z 2021-01-12T13:28:35.001+0000 I  SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
2021-01-12T13:29:08.3207632Z 2021-01-12T13:28:41.440+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39444 #1 (1 connection now open)
2021-01-12T13:29:08.3209832Z 2021-01-12T13:28:41.440+0000 I  NETWORK  [conn1] received client metadata from 127.0.0.1:39444 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:08.3211382Z 2021-01-12T13:28:41.535+0000 I  NETWORK  [conn1] end connection 127.0.0.1:39444 (0 connections now open)
2021-01-12T13:29:08.3212553Z 2021-01-12T13:28:51.535+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39526 #2 (1 connection now open)
2021-01-12T13:29:08.3214471Z 2021-01-12T13:28:51.536+0000 I  NETWORK  [conn2] received client metadata from 127.0.0.1:39526 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:08.3216090Z 2021-01-12T13:28:51.548+0000 I  NETWORK  [conn2] end connection 127.0.0.1:39526 (0 connections now open)
2021-01-12T13:29:08.3217104Z 2021-01-12T13:28:59.337+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39596 #3 (1 connection now open)
2021-01-12T13:29:08.3218802Z 2021-01-12T13:28:59.337+0000 I  NETWORK  [conn3] received client metadata from 127.0.0.1:39596 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:08.3220289Z 2021-01-12T13:28:59.434+0000 I  NETWORK  [conn3] end connection 127.0.0.1:39596 (0 connections now open)
2021-01-12T13:29:08.3221421Z 2021-01-12T13:29:01.535+0000 I  NETWORK  [listener] connection accepted from 127.0.0.1:39612 #4 (1 connection now open)
2021-01-12T13:29:08.3223211Z 2021-01-12T13:29:01.535+0000 I  NETWORK  [conn4] received client metadata from 127.0.0.1:39612 conn4: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.11" }, os: { type: "Linux", name: "PRETTY_NAME="Debian GNU/Linux 10 (buster)"", architecture: "x86_64", version: "Kernel 5.4.0-1032-azure" } }
2021-01-12T13:29:08.3224690Z 2021-01-12T13:29:01.543+0000 I  NETWORK  [conn4] end connection 127.0.0.1:39612 (0 connections now open)
dave-yotta commented 3 years ago

Also, just double-checking - you said you're installing with helm install my-mongo --set mongodb.architecture="replicaset" bitnami/mongodb? When I run that, it doesn't actually install a replica set:

> helm get values my-mongo
USER-SUPPLIED VALUES:
mongodb:
  architecture: replicaset
> kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-mongo-mongodb-85684588c4-rm5wr   1/1     Running   0          4m3s

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP     24d
service/my-mongo-mongodb   ClusterIP   10.111.88.177   <none>        27017/TCP   4m3s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-mongo-mongodb   1/1     1            1           4m3s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/my-mongo-mongodb-85684588c4   1         1         1       4m3s

I posted my config above, and I don't see a typo in your values either - maybe something is missing?

dave-yotta commented 3 years ago

I've tried this with kind as well - the primary gets stuck at mongodb 09:54:04.53 INFO ==> Configuring MongoDB primary node for 5 minutes or so. I'm trying to add more logging to the image to capture the error from the mongo initiate call, if there is one - there's nothing that would prevent echo from working in these shell scripts, is there?

dave-yotta commented 3 years ago

OK - after adding some logging to mongodb_is_primary_node_initiated I can see it's stuck on this error:

{
    "operationTime" : Timestamp(0, 0),
    "ok" : 0,
    "errmsg" : "No host described in new configuration with {version: 1, term: 0} for replica set rs1 maps to this node",
    "code" : 93,
    "codeName" : "InvalidReplicaSetConfig",
    "$clusterTime" : {
        "clusterTime" : Timestamp(0, 0),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
dave-yotta commented 3 years ago

More logs - it's initiating with mongodb 12:05:51.57 WARN ==> initating with: rs.initiate({"_id":"rs1", "members":[{"_id":0,"host":"51.132.57.214:27017","priority":5}]}) instead of local-mongo-0-external:27017 - any ideas? I'll keep digging :/ It looks like MONGODB_ADVERTISED_HOSTNAME isn't set, so it falls back to get_machine_ip - strange that it can't resolve itself?
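
For reference, this is roughly how I'm checking what the container is told to advertise (just a sketch - the pod name comes from my deployment above, and the second command only makes sense once mongod is listening):

# What the chart handed this pod to advertise, if anything:
kubectl exec local-mongo-0 -- env | grep MONGODB_ADVERTISED_HOSTNAME
# And what the node itself reports once mongod is up:
kubectl exec local-mongo-0 -- mongo --quiet --eval 'printjson(db.isMaster())'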

dave-yotta commented 3 years ago

OK, so this isn't set because I've got externalAccess enabled. Can I ask how this is meant to work? I've set it up as a NodePort; however, because MongoDB advertises the replica set configuration, when you connect over one DNS route (e.g. my-external-access:32001) mongod replies with the replica set config and forces you to reconnect to the hosts listed there. So you end up needing split-horizon DNS in the cluster (not 100% sure how this works) to make external access usable. I don't see how using the cluster IP helps in this scenario?

Could you explain how externalAccess is supposed to work? I was never actually able to connect over the NodePort because of the MongoDB replica set behaviour I've mentioned above.
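
To illustrate the redirect I mean, this is the kind of check I've been doing (a sketch - the node IP placeholder and the 30001 NodePort are from my values above):

# Connecting to a single member without replicaSet= stays on that member,
# but the config it hands back is what any driver using replicaSet=rs1 will reconnect to:
mongo --quiet --host <node-ip> --port 30001 --eval 'printjson(rs.conf().members)'

The host fields in that output are the only addresses a replica-set-aware client will use, regardless of which seed list you passed in.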

dave-yotta commented 3 years ago

Yet without externalAccess I can't form a replica-set connection string like mongodb://mongo-0:27017,mongo-1:27017,mongo-2:27017 to connect to the replica set even within the cluster? There is service/local-mongo-headless, but doesn't that only point to one node?

dave-yotta commented 3 years ago

OK, I've changed the title and updated the description - if you manage to resolve these seemingly basic problems even without external access, please let me know; otherwise I'm just rolling my own chart using the official mongo image, which at this point will be far less convoluted. There are a lot of gotchas in here, and I wouldn't advise anyone to use Bitnami apps at the moment because a lot of these problems could be missed until it's too late - e.g. around transactions. Sorry to be on a downer :(

dave-yotta commented 3 years ago

Reading the StatefulSet docs on Kubernetes, each pod gets a stable network ID - the replica set seems to get configured with local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017 - a bit of a mouthful. Maybe you could consider adding this to the README under "how to connect"?

mongo --verbose mongodb://local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017/?replicaSet=rs1
MongoDB shell version v4.2.11
connecting to: mongodb://local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs1
2021-01-14T13:56:16.751+0000 D1 NETWORK  [js] Starting up task executor for monitoring replica sets in response to request to monitor set: rs1/local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017
2021-01-14T13:56:16.751+0000 I  NETWORK  [js] Starting new replica set monitor for rs1/local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017
2021-01-14T13:56:16.754+0000 I  CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017
2021-01-14T13:56:16.754+0000 I  CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017
2021-01-14T13:56:16.754+0000 I  CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017
2021-01-14T13:56:16.757+0000 I  NETWORK  [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for rs1 is rs1/local-mongo-0.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-1.local-mongo-headless.default.svc.cluster.local:27017,local-mongo-2.local-mongo-headless.default.svc.cluster.local:27017
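
For completeness, those per-pod names come from the headless service - a quick way to see that it publishes one DNS record per pod rather than a single virtual IP (sketch, using my release's names):

kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup local-mongo-headless.default.svc.cluster.local
kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup local-mongo-0.local-mongo-headless.default.svc.cluster.local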

I don't know what to say about externalAccess - choosing the public IP of the node is never going to work unless the cluster is somehow on the host network - but you do document this, and Mongo made replica set networking (I think unnecessarily) difficult to configure in the first place.

franklin432 commented 3 years ago

So I'm a bitnami/mongodb user and I noticed your question. There are a few things wrong here that need to be addressed to resolve it.

  1. This chart has two values files: values.yaml and values-production.yaml. By default, if you do a helm install without specifying -f, the chart uses values.yaml, which defaults to a standalone. Since I use bitnami/mongodb for production-type workloads, I typically never use values.yaml and instead install with -f values-production.yaml. I suggest you use that, since it enables other production-related parameters that are not in values.yaml; values.yaml is more for dev or testing in my case.
  2. @carrodher your command is wrong, and that's why @dave-yotta was only able to produce a standalone and not a replicaset. You were using:
    helm install my-mongo --set mongodb.architecture="replicaset" bitnami/mongodb

    which is incorrect; it therefore installed the default parameters from values.yaml, which is a standalone.

It should instead be:

helm install my-mongo --set architecture="replicaset" bitnami/mongodb

This will produce a replicaset, basing its values on values.yaml. If you open that file you will see that when architecture is equal to standalone, the chart ignores replicaCount and gives you just a standalone. mongodb.architecture was the wrong key for a direct install, so it was giving @carrodher a standalone every time.
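
For anyone else hitting this, the difference is just where the value lives (a quick sketch - my-app/my-chart below are placeholders):

# Installing the chart directly: values are top level, no "mongodb." prefix
helm install my-mongo bitnami/mongodb --set architecture=replicaset

# Only when bitnami/mongodb is a dependency (subchart) of your own chart do you nest
# its values under the subchart name, as @dave-yotta does in his parent values.yaml:
helm install my-app ./my-chart --set mongodb.architecture=replicaset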

  3. @carrodher I noticed in your values.yaml file you have auth.enabled set to false. Why is that? In my opinion that should always be enabled, at a minimum.
  4. The Bitnami chart automatically initializes the replica set with the primary and secondaries, so why are you using a job to create that? I have used it plenty of times and can confirm it does this.
  5. I also use NodePorts in my environment rather than LoadBalancers, and I am able to connect to the cluster using an external mongo shell with no problems. I specify my username, password, primary hostname, primary node port, and authenticationDatabase, or I put all 3 hostnames and node ports into the string and it connects to the primary.
dave-yotta commented 3 years ago

@franklin432 thanks - yes, that's it.

1,2) It's because I've got the bitnami/mongodb chart as a dependency of my own chart, which involves other services (and in this case tests too). I posted the config fragment from my own values.yaml above - it overrides the values coming from the mongodb subchart. To be clear, here is my Chart.yaml file:

apiVersion: v2
name: my-app
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0

dependencies:
  - name: mongodb
    version: 10.3.7
    repository: https://charts.bitnami.com/bitnami

3) This is specifically for local debugging and running tests - so auth is not a consideration here, but good point.

4) I had externalAccess: true - this was failing the init of the primary because the public internet IP was not reachable by the replicas in the cluster. I've turned that off and now it does work - and now that I understand https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id I can connect within the cluster.

5) I doubt this, because you must configure the replica set with an IP/hostname that is reachable by all replica members and all external clients - by default Bitnami uses the public IP that the pod sees: https://github.com/bitnami/charts/tree/master/bitnami/mongodb#using-nodeport-services - though I'd like to hear more. Basically, if I connect with "mongodb://localhost:30001,localhost:30002,localhost:30003", it will redirect to whatever hostnames the replicas were initiated with - so those have to be reachable both externally and internally.

franklin432 commented 3 years ago

@dave-yotta cool, glad you got it figured out. Regarding not being able to connect externally, I'm not sure why it does not work for you - it does for me. I get the hostname of each member by using its NodeName or NodeIP. For instance, if your primary member is called test-mongodb-0 and has a node port of 30000, I simply do a "kubectl describe pod/test-mongodb-0 | grep Node:". This gives me the hostname/IP of that primary member's node. I then use that in my connection string: ./mongo -u name -p pass --host hostnamefromNode:30000

And I'm in. To connect to the replica set, I do the same thing to find the node hostname for the 2 secondaries and add them to the connection string.
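
Roughly, the steps look like this (a sketch - the release name, node ports and credentials are placeholders from my example above):

# Find the node each member landed on
NODE0=$(kubectl get pod test-mongodb-0 -o jsonpath='{.spec.nodeName}')
NODE1=$(kubectl get pod test-mongodb-1 -o jsonpath='{.spec.nodeName}')
NODE2=$(kubectl get pod test-mongodb-2 -o jsonpath='{.spec.nodeName}')

# Then put all three node-host:nodePort pairs into the connection string
# (replicaSet should match your replicaSetName value)
./mongo "mongodb://name:pass@$NODE0:30000,$NODE1:30001,$NODE2:30002/admin?replicaSet=rs0"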

dave-yotta commented 3 years ago

I imagine that will work for a managed Kubernetes service (GKE, AKS, EKS, etc.) with internet-facing public node IPs (or within a VPC you're connected to, if the hostname is configured correctly, I guess) - but have you tried it on a local machine with a local Kubernetes cluster, e.g. Kubernetes for Docker Desktop on Win10, microk8s on Ubuntu, or kind on either?

franklin432 commented 3 years ago

@dave-yotta yup, I have. I'm running it from a closed-off network (company-internal service) with no public internet access. This is on our local machine on our local K8s cluster.

dave-yotta commented 3 years ago

I mean on a local K8s cluster - not one on your company network, but one on your actual dev machine? I guess you must be passing the hostname, because otherwise it will curl for the public internet IP; from the readme: The pod will try to get the external ip of the node using curl -s https://ipinfo.io/ip unless externalAccess.service.domain is provided. Or perhaps your company hosts a DNS for ipinfo.io and the service, maybe? :D
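
If that readme line is right, then on a local cluster something like this should at least skip the public-IP lookup (untested sketch - the IP is whatever address both the cluster and the host can reach):

helm upgrade --install my-mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=NodePort \
  --set 'externalAccess.service.nodePorts={30001,30002,30003}' \
  --set externalAccess.service.domain=192.168.65.4

Whether the advertised port then lines up with the NodePort is a separate question.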

franklin432 commented 3 years ago

Ahh, I see what you are saying. Aside from setting externalAccess.enabled to true, are you also setting externalAccess.autoDiscovery.enabled to true?

carrodher commented 3 years ago

Regarding the above-mentioned topic about values-production.yaml: we are going to stop supporting this file. As mentioned in the README (https://github.com/bitnami/charts/tree/master/bitnami/mongodb#production-configuration-and-horizontal-scaling), there are not many differences between the two values files: more replicas, the replicaset architecture, Pod Disruption Budget, and metrics enabled.

Apart from that, those "production" settings can't be useful for all users or use cases. We set 4 replicas as the default value in the production file, but are 4 replicas a production configuration for all users? The answer is clearly no. So we decided to remove this file to avoid confusion about what we consider production or not; this way users can set their preferred values.

In the end, the options are present in both files, just some default values are different between them:

59c59
- architecture: replicaset
+ architecture: standalone
241c241
- replicaCount: 4
+ replicaCount: 2
421c421
-   create: true
+   create: false
901c901
-   enabled: true
+   enabled: false

Apart from that, values-production.yaml is not tested on our internal CI/CD system, and it is also difficult to keep in sync when there are different sources of changes (internal tasks, users' contributions, PRs, etc.).

dave-yotta commented 3 years ago

I've not tried autoDiscovery - it looks like it's for LoadBalancer services. I'm not sure, but I think Docker Desktop for Win10 at least does provide a load balancer - that sounds like it has a chance of working; I'll try it.
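
Something like this is what I plan to try (untested sketch; as I read the chart README, autoDiscovery needs RBAC so the init container can look up the LoadBalancer address - correct me if that's wrong):

helm upgrade --install my-mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer \
  --set externalAccess.autoDiscovery.enabled=true \
  --set rbac.create=true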

franklin432 commented 3 years ago

@dave-yotta did you get it to work?

dave-yotta commented 3 years ago

Not tried it yet :D It's on my list 📜

franklin432 commented 3 years ago

Read my post; it may help with what you are attempting to resolve: https://github.com/bitnami/charts/issues/5157

wingtch commented 3 years ago

I'm facing the same issue with nodePorts and domain set. The mongodb_is_primary_node_initiated script tries to initiate the replica set primary node with the external IP (which is set by 'domain') and the container port (27017), not the first nodePort in the list. That address is not reachable. In my values.yaml they are different:

  externalAccess:
    enabled: true
    service:
      type: NodePort
      nodePorts:
      - 32607
      - 32608
      - 32609
      domain: "10.100.4.133"
carrodher commented 3 years ago

I'm copying here the same message I post in the new issue:

Thanks for the detailed information. As there are a couple of issues from different users with different configurations and use cases, I think this is something we need to evaluate properly. I'm going to create an internal task to dedicate some effort (according to the rest of the team's priorities) to implementing improvements in this chart. Apart from that, any PR to the chart itself or the documentation that helps improve the use of this chart is more than welcome.

stale[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.