uzh-bf / klicker-uzh

KlickerUZH Audience Interaction Platform
https://www.klicker.uzh.ch
GNU Affero General Public License v3.0
35 stars, 14 forks

Registration failed (NetworkError when attempting to fetch resource.) #2548

Closed: georgschilling closed this issue 2 years ago

georgschilling commented 2 years ago

Describe the bug: Registration is impossible.

To Reproduce: Steps to reproduce the behavior:

  1. On your locally installed system, go to 'https://full.qualifyed.domainname.de/user/registration'
  2. Fill in the mandatory fields
  3. Click on 'Absenden' (Submit)
  4. See the error

Expected behavior: Registration should succeed, an email should be sent, and the user should be created.


georgschilling commented 2 years ago

[Screenshot: Bildschirmfoto 2021-09-29 um 13 25 53]

rschlaefli commented 2 years ago

Hi @georgschilling

Thanks for your interest in our project, and sorry for the frustrations.

From my experience, your issue is most often related to CORS settings. We expose environment variables for the backend that need to be set correctly to the domain that the frontend is hosted on. Otherwise, the browser will not allow a connection from frontend to backend, and this error will occur.
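To illustrate the mechanism (this is a generic sketch of how CORS allow-listing works, not KlickerUZH's actual implementation): the backend compares the request's `Origin` header against its configured allow-list and only emits the `Access-Control-Allow-Origin` header on a match; if the configured origin does not exactly match the domain the frontend is served from, the browser blocks the response and reports a generic network error.

```python
# Sketch of the server-side CORS decision described above. The domain
# names are placeholders; the real allow-list comes from the backend's
# environment variables.
ALLOWED_ORIGINS = ["https://www.klicker.example.com"]  # must match the frontend's domain exactly

def cors_headers(request_origin: str) -> dict:
    """Return the CORS response headers for a given Origin header value."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            # Required because the frontend sends cookies (credentials) along.
            "Access-Control-Allow-Credentials": "true",
        }
    # No CORS headers: the browser will block the cross-origin response,
    # which surfaces in the UI as "NetworkError when attempting to fetch resource."
    return {}
```

Note that scheme and host must both match: `https://www.klicker.example.com` and `http://www.klicker.example.com` are different origins.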

I have added an example for a docker-compose deployment to the repository at https://github.com/uzh-bf/klicker-uzh/blob/dev/deploy/compose/docker-compose.yml. Please have a look there, as it should include all the necessary environment variables, using the domain names of our public instance as an example. We will extend this to a more in-depth example including a proxy with Let's Encrypt, but for now it covers mainly the core services with the necessary environment variables. You will still need to set up a proxy for SSL termination and for routing the domains to the correct Docker services.
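The shape of such a compose file is roughly the following. The environment variable names here are assumptions inferred from the config keys the services log on startup; verify them against the linked docker-compose.yml before use:

```yaml
# Hypothetical fragment; check the example compose file in the repository
# for the exact variable names and the full set of required settings.
services:
  backend:
    environment:
      APP_DOMAIN: api.klicker.example.com              # domain the API is served on
      APP_HTTPS: "true"
      SECURITY_CORS_ORIGIN: https://www.klicker.example.com  # frontend origin
  frontend:
    environment:
      API_ENDPOINT: https://api.klicker.example.com/graphql
      APP_BASE_URL: https://www.klicker.example.com
```

The essential invariant is that the backend's CORS origin points at the exact domain (including scheme) that the frontend is served from, and the frontend's API endpoint points at the domain the proxy routes to the backend container.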

Also, note that our documentation is not up to date regarding self-hosted deployments (there has not been much interest in self-hosting so far). For production deployments, we have just released a new Helm chart for Kubernetes that we recommend over Docker Compose, as Kubernetes allows for improved orchestration and failover. We also recommend using a cloud service for S3 storage and hosting the MongoDB instance outside of Docker, both for improved performance and reliability.
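As a rough sketch of the Kubernetes route (the chart path, release name, and values file here are all assumptions for illustration; check the repository's deploy directory for the actual chart and its documented values):

```shell
# Hypothetical: install the Helm chart from a local checkout of the repository
git clone https://github.com/uzh-bf/klicker-uzh.git
helm install klicker ./klicker-uzh/deploy/charts/klicker \
  --namespace klicker --create-namespace \
  --values my-values.yaml   # domains, secrets, external MongoDB/S3 endpoints
```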

Our documentation is in the process of a full rework right now, so all of this will be documented soon.

Best, Roland

georgschilling commented 2 years ago

Hey @rschlaefli, I have now taken your Compose file, adjusted it, and can apparently start all services. All three services are set up in the nginx config, each with its own host name and certificate.

Unfortunately, not all services become healthy:

root@jscklick:~/neu# docker-compose ps
      Name                     Command                       State                                 Ports
-------------------------------------------------------------------------------------------------------------------------------
neu_backend_1       /sbin/tini -- node src/ser ...   Up (health: starting)   4000/tcp
neu_frontend_1      /sbin/tini -- node dist/se ...   Up (health: starting)   3000/tcp
neu_minio_1         /usr/bin/docker-entrypoint ...   Up                      9000/tcp, 0.0.0.0:9001->9001/tcp,:::9001->9001/tcp
neu_mongodb_1       docker-entrypoint.sh mongod      Up                      27017/tcp
neu_redis_cache_1   docker-entrypoint.sh redis ...   Up                      6379/tcp
neu_redis_exec_1    docker-entrypoint.sh redis ...   Up                      6379/tcp
root@jscklick:~/neu#

and also

root@jscklick:~/neu# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 0.0.0.0:9001            0.0.0.0:*               LISTEN      0          53844      8655/docker-proxy
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          25098      891/nginx: master p
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      101        20957      662/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      0          25073      849/sshd: /usr/sbin
tcp        0      0 127.0.0.1:38071         0.0.0.0:*               LISTEN      0          21473      714/containerd
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      0          25100      891/nginx: master p
tcp6       0      0 :::9001                 :::*                    LISTEN      0          53848      8662/docker-proxy
tcp6       0      0 :::80                   :::*                    LISTEN      0          25099      891/nginx: master p
tcp6       0      0 :::22                   :::*                    LISTEN      0          25075      849/sshd: /usr/sbin
tcp6       0      0 :::6556                 :::*                    LISTEN      0          21447      904/xinetd
udp        0      0 127.0.0.53:53           0.0.0.0:*                           101        20956      662/systemd-resolve
udp        0      0 134.94.168.241:68       0.0.0.0:*                           100        22689      660/systemd-network
root@jscklick:~/neu#

Do you have an idea for me?

Otherwise, your file has already helped, and I will be happy if the project receives more attention. I'm sure that with a good installation guide and a little "marketing" it will generate a lot of interest. Here on campus, at least, everyone is really keen on it.

Greetings from Juelich, George

rschlaefli commented 2 years ago

Hi @georgschilling,

That looks much better. Can you provide the log output (e.g., `docker-compose logs`) so I can check the setup and any error messages? Credentials should not be printed in the logs, but it is best to verify that first.
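To keep the output manageable, logs can be pulled per service; for example (these are standard docker-compose/docker invocations, with service and container names taken from the `docker-compose ps` output above):

```shell
# Show the most recent log lines for the backend only
docker-compose logs --tail=100 backend

# Follow all services live while reproducing the registration error
docker-compose logs -f

# Inspect why a container is stuck in "health: starting"
docker inspect --format '{{json .State.Health}}' neu_backend_1
```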

I certainly hope that interest in the open-source project will grow if we improve the docs and promote it a little better. Most people tend to use our free public instance, which is totally fine, but it would be great to work together with other institutions or contributors.

Best, Roland

georgschilling commented 2 years ago

Hey @rschlaefli, logs as follows:

root@jscklick:~/neu# docker-compose logs
Attaching to neu_frontend_1, neu_minio_1, neu_backend_1, neu_redis_exec_1, neu_mongodb_1, neu_redis_cache_1
backend_1      | [klicker-api] Successfully loaded configuration
backend_1      | {
backend_1      |   "app": {
backend_1      |     "baseUrl": "jscklick.zam.kfa-juelich.de",
backend_1      |     "cookieDomain": "zam.kfa-juelich.de",
backend_1      |     "domain": "appklick.zam.kfa-juelich.de",
backend_1      |     "gzip": true,
backend_1      |     "https": true,
backend_1      |     "port": 4000,
backend_1      |     "secret": "[Sensitive]",
backend_1      |     "secure": true,
backend_1      |     "trustProxy": true
backend_1      |   },
backend_1      |   "cache": {
backend_1      |     "redis": {
backend_1      |       "host": "redis_cache",
backend_1      |       "password": "[Sensitive]",
backend_1      |       "port": 6379,
backend_1      |       "tls": false
backend_1      |     },
backend_1      |     "exec": {
backend_1      |       "host": "redis_exec",
backend_1      |       "password": "[Sensitive]",
backend_1      |       "port": 6379,
backend_1      |       "tls": false
backend_1      |     }
backend_1      |   },
backend_1      |   "email": {
backend_1      |     "from": "g.schilling@fz-juelich.de",
backend_1      |     "host": "mail.fz-juelich.de",
backend_1      |     "port": 25,
backend_1      |     "user": "[Sensitive]",
backend_1      |     "password": "[Sensitive]",
backend_1      |     "secure": false
backend_1      |   },
backend_1      |   "env": "production",
backend_1      |   "mongo": {
backend_1      |     "database": "klicker",
backend_1      |     "debug": false,
backend_1      |     "url": "[Sensitive]",
backend_1      |     "user": "klicker",
backend_1      |     "password": "[Sensitive]"
backend_1      |   },
backend_1      |   "s3": {
backend_1      |     "accessKey": "[Sensitive]",
backend_1      |     "bucket": "images",
backend_1      |     "enabled": true,
backend_1      |     "endpoint": "https://s3klick.zam.kfa-juelich.de",
backend_1      |     "region": "eu-central-1",
backend_1      |     "secretKey": "[Sensitive]"
backend_1      |   },
backend_1      |   "security": {
backend_1      |     "cors": {
backend_1      |       "credentials": true,
backend_1      |       "origin": [
backend_1      |         "https://jscklick.zam.kfa-juelich.de"
backend_1      |       ]
backend_1      |     },
backend_1      |     "expectCt": {
backend_1      |       "enabled": false,
backend_1      |       "enforce": false,
backend_1      |       "maxAge": 0
backend_1      |     },
backend_1      |     "filtering": {
backend_1      |       "byIP": {
backend_1      |         "enabled": true,
backend_1      |         "strict": false
backend_1      |       },
backend_1      |       "byFP": {
backend_1      |         "enabled": true,
backend_1      |         "strict": false
backend_1      |       }
backend_1      |     },
backend_1      |     "frameguard": {
backend_1      |       "ancestors": [
backend_1      |         "'none'"
backend_1      |       ],
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "hsts": {
backend_1      |       "enabled": false,
backend_1      |       "includeSubdomains": false,
backend_1      |       "maxAge": 0
backend_1      |     },
backend_1      |     "rateLimit": {
backend_1      |       "enabled": true,
backend_1      |       "max": 2500,
backend_1      |       "windowMs": 300000
backend_1      |     }
backend_1      |   },
backend_1      |   "services": {
backend_1      |     "apm": {
backend_1      |       "enabled": false,
backend_1      |       "monitorDev": false,
backend_1      |       "secretToken": "[Sensitive]",
backend_1      |       "serviceName": "klicker-api"
backend_1      |     },
backend_1      |     "apolloEngine": {
backend_1      |       "apiKey": "[Sensitive]",
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "sentry": {
backend_1      |       "enabled": false,
backend_1      |       "dsn": "[Sensitive]"
backend_1      |     },
backend_1      |     "slack": {
backend_1      |       "enabled": false,
backend_1      |       "webhook": "[Sensitive]"
backend_1      |     }
backend_1      |   }
backend_1      | }
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:8) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
backend_1      | [klicker-api] Shutting down server
backend_1      | [klicker-api] Successfully loaded configuration
backend_1      | {
backend_1      |   "app": {
backend_1      |     "baseUrl": "jscklick.zam.kfa-juelich.de",
backend_1      |     "cookieDomain": "zam.kfa-juelich.de",
backend_1      |     "domain": "appklick.zam.kfa-juelich.de",
backend_1      |     "gzip": true,
backend_1      |     "https": true,
backend_1      |     "port": 4000,
backend_1      |     "secret": "[Sensitive]",
backend_1      |     "secure": true,
backend_1      |     "trustProxy": true
backend_1      |   },
backend_1      |   "cache": {
backend_1      |     "redis": {
backend_1      |       "host": "redis_cache",
backend_1      |       "password": "[Sensitive]",
backend_1      |       "port": 6379,
backend_1      |       "tls": false
backend_1      |     },
backend_1      |     "exec": {
backend_1      |       "host": "redis_exec",
backend_1      |       "password": "[Sensitive]",
backend_1      |       "port": 6379,
backend_1      |       "tls": false
backend_1      |     }
backend_1      |   },
backend_1      |   "email": {
backend_1      |     "from": "g.schilling@fz-juelich.de",
backend_1      |     "host": "mail.fz-juelich.de",
backend_1      |     "port": 25,
backend_1      |     "user": "[Sensitive]",
backend_1      |     "password": "[Sensitive]",
backend_1      |     "secure": false
backend_1      |   },
backend_1      |   "env": "production",
backend_1      |   "mongo": {
backend_1      |     "database": "klicker",
backend_1      |     "debug": false,
backend_1      |     "url": "[Sensitive]",
backend_1      |     "user": "klicker",
backend_1      |     "password": "[Sensitive]"
backend_1      |   },
backend_1      |   "s3": {
backend_1      |     "accessKey": "[Sensitive]",
backend_1      |     "bucket": "images",
backend_1      |     "enabled": true,
backend_1      |     "endpoint": "https://s3klick.zam.kfa-juelich.de",
backend_1      |     "region": "eu-central-1",
backend_1      |     "secretKey": "[Sensitive]"
backend_1      |   },
backend_1      |   "security": {
backend_1      |     "cors": {
backend_1      |       "credentials": true,
backend_1      |       "origin": [
backend_1      |         "https://jscklick.zam.kfa-juelich.de"
backend_1      |       ]
backend_1      |     },
backend_1      |     "expectCt": {
backend_1      |       "enabled": false,
backend_1      |       "enforce": false,
backend_1      |       "maxAge": 0
backend_1      |     },
backend_1      |     "filtering": {
backend_1      |       "byIP": {
backend_1      |         "enabled": true,
frontend_1     | [klicker-react] Successfully loaded configuration
frontend_1     | {
frontend_1     |   "api": {
frontend_1     |     "endpoint": "https://jscklick.zam.kfa-juelich.de/graphql",
frontend_1     |     "endpointWS": "wss://jscklick.zam.kfa-juelich.de/graphql"
frontend_1     |   },
frontend_1     |   "app": {
frontend_1     |     "baseUrl": "https://jscklick.zam.kfa-juelich.de",
frontend_1     |     "gzip": true,
frontend_1     |     "joinUrl": "jscklick.zam.kfa-juelich.de/join",
frontend_1     |     "persistQueries": false,
frontend_1     |     "port": 3000,
frontend_1     |     "trustProxy": true,
frontend_1     |     "withAai": false
frontend_1     |   },
frontend_1     |   "cache": {
frontend_1     |     "pages": {
frontend_1     |       "join": 10,
frontend_1     |       "landing": 600,
frontend_1     |       "qr": 300
frontend_1     |     },
frontend_1     |     "redis": {
frontend_1     |       "enabled": true,
frontend_1     |       "host": "redis_cache",
frontend_1     |       "password": "[Sensitive]",
frontend_1     |       "port": 6379,
frontend_1     |       "tls": false
frontend_1     |     }
frontend_1     |   },
frontend_1     |   "env": "production",
frontend_1     |   "s3": {
frontend_1     |     "rootUrl": "https://s3klick.zam.kfa-juelich.de/images"
frontend_1     |   },
frontend_1     |   "security": {
frontend_1     |     "cors": {
frontend_1     |       "credentials": true
frontend_1     |     },
frontend_1     |     "csp": {
frontend_1     |       "connectSrc": [
frontend_1     |         "'self'",
frontend_1     |         "https://jscklick.zam.kfa-juelich.de/graphql",
frontend_1     |         "wss://jscklick.zam.kfa-juelich.de/graphql"
frontend_1     |       ],
frontend_1     |       "defaultSrc": [
frontend_1     |         "'self'"
frontend_1     |       ],
frontend_1     |       "enabled": false,
frontend_1     |       "enforce": false,
frontend_1     |       "fontSrc": [
frontend_1     |         "'self'",
frontend_1     |         "fonts.gstatic.com"
frontend_1     |       ],
frontend_1     |       "imgSrc": [
frontend_1     |         "'self'",
frontend_1     |         "www.switch.ch",
frontend_1     |         "www.gstatic.com",
frontend_1     |         "tc-klicker-prod.s3.amazonaws.co"
frontend_1     |       ],
frontend_1     |       "scriptSrc": [
frontend_1     |         "'self'",
frontend_1     |         "'unsafe-inline'"
frontend_1     |       ],
frontend_1     |       "styleSrc": [
frontend_1     |         "'self'",
frontend_1     |         "'unsafe-inline'",
frontend_1     |         "maxcdn.bootstrapcdn.com",
frontend_1     |         "fonts.googleapis.com",
frontend_1     |         "cdnjs.cloudflare.com"
frontend_1     |       ]
frontend_1     |     },
frontend_1     |     "expectCt": {
frontend_1     |       "enabled": true,
frontend_1     |       "enforce": false,
frontend_1     |       "maxAge": 0
frontend_1     |     },
frontend_1     |     "fingerprinting": true,
frontend_1     |     "frameguard": {
frontend_1     |       "action": "sameorigin",
frontend_1     |       "ancestors": [
frontend_1     |         "'none'"
frontend_1     |       ],
frontend_1     |       "enabled": false
frontend_1     |     },
frontend_1     |     "hsts": {
frontend_1     |       "enabled": false,
frontend_1     |       "includeSubDomains": false,
frontend_1     |       "maxAge": 0
frontend_1     |     }
frontend_1     |   },
frontend_1     |   "services": {
frontend_1     |     "googleAnalytics": {
frontend_1     |       "enabled": false,
frontend_1     |       "trackingId": "[Sensitive]"
frontend_1     |     },
frontend_1     |     "logrocket": {
frontend_1     |       "appId": "[Sensitive]",
frontend_1     |       "enabled": false
frontend_1     |     },
frontend_1     |     "sentry": {
frontend_1     |       "enabled": false,
frontend_1     |       "url": "https://sentry.io",
frontend_1     |       "tracesSampleRate": 1
frontend_1     |     }
frontend_1     |   }
frontend_1     | }
frontend_1     | [klicker-react] Starting up...
frontend_1     | [redis] Connected to redis (db 0) for SSR caching
frontend_1     | [klicker-react] Enabling trust proxy mode for IP pass-through
frontend_1     | [klicker-react] Ready on localhost:3000
backend_1      |         "strict": false
backend_1      |       },
backend_1      |       "byFP": {
backend_1      |         "enabled": true,
backend_1      |         "strict": false
backend_1      |       }
backend_1      |     },
backend_1      |     "frameguard": {
backend_1      |       "ancestors": [
backend_1      |         "'none'"
backend_1      |       ],
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "hsts": {
backend_1      |       "enabled": false,
backend_1      |       "includeSubdomains": false,
backend_1      |       "maxAge": 0
backend_1      |     },
backend_1      |     "rateLimit": {
backend_1      |       "enabled": true,
backend_1      |       "max": 2500,
backend_1      |       "windowMs": 300000
backend_1      |     }
backend_1      |   },
backend_1      |   "services": {
backend_1      |     "apm": {
backend_1      |       "enabled": false,
backend_1      |       "monitorDev": false,
backend_1      |       "secretToken": "[Sensitive]",
backend_1      |       "serviceName": "klicker-api"
backend_1      |     },
backend_1      |     "apolloEngine": {
backend_1      |       "apiKey": "[Sensitive]",
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "sentry": {
backend_1      |       "enabled": false,
backend_1      |       "dsn": "[Sensitive]"
backend_1      |     },
backend_1      |     "slack": {
backend_1      |       "enabled": false,
backend_1      |       "webhook": "[Sensitive]"
backend_1      |     }
backend_1      |   }
backend_1      | }
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:7) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
backend_1      | [klicker-api] Shutting down server
backend_1      | [klicker-api] Successfully loaded configuration
backend_1      | [... repeated configuration dump (identical to the first one above) omitted ...]
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:9) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
backend_1      | [klicker-api] Shutting down server
backend_1      | [klicker-api] Successfully loaded configuration
backend_1      | [... repeated configuration dump (identical to the first one above) omitted ...]
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.5:6379
backend_1      |     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)
backend_1      | [ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.5:6379
backend_1      |     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:7) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
backend_1      | [klicker-api] Shutting down server
backend_1      | [klicker-api] Successfully loaded configuration
backend_1      | [... repeated configuration dump (identical to the first one above) omitted ...]
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:8) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
backend_1      | [klicker-api] Shutting down server
backend_1      | [klicker-api] Successfully loaded configuration
backend_1      | [duplicate configuration dump after restart omitted — identical to the one above]
minio_1        |
minio_1        |  You are running an older version of MinIO released 1 week ago
minio_1        |  Update: Run `mc admin update`
minio_1        |
minio_1        |
minio_1        | API: http://172.22.0.2:9000  http://127.0.0.1:9000
minio_1        |
minio_1        | Console: http://172.22.0.2:9001 http://127.0.0.1:9001
minio_1        |
minio_1        | Documentation: https://docs.min.io
minio_1        | WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
minio_1        | Exiting on signal: TERMINATED
minio_1        |
minio_1        | [identical MinIO startup banner and default-credentials warning repeated on each of several further restarts — omitted]
mongodb_1      |
mongodb_1      | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
mongodb_1      |   see https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
mongodb_1      |   see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814
mongodb_1      |
mongodb_1      | about to fork child process, waiting until server is ready for connections.
mongodb_1      | forked process: 30
mongodb_1      | 2021-09-30T09:33:27.047+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
mongodb_1      | 2021-09-30T09:33:27.054+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] MongoDB starting : pid=30 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T09:33:27.061+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T09:33:27.062+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T09:33:27.062+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1", port: 27017, ssl: { mode: "disabled" } }, processManagement: { fork: true, pidFilePath: "/tmp/docker-entrypoint-temp-mongod.pid" }, systemLog: { destination: "file", logAppend: true, path: "/proc/1/fd/1" } }
mongodb_1      | 2021-09-30T09:33:27.062+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T09:33:27.062+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T09:33:27.062+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T09:33:27.062+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T09:33:28.271+0000 I STORAGE  [initandlisten] WiredTiger message [1632994408:271014][30:0x7fc640413a80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T09:33:28.598+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T09:33:28.845+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T09:33:28.970+0000 I CONTROL  [initandlisten]
mongodb_1      | 2021-09-30T09:33:28.970+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
mongodb_1      | 2021-09-30T09:33:28.970+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
mongodb_1      | 2021-09-30T09:33:28.970+0000 I CONTROL  [initandlisten]
mongodb_1      | 2021-09-30T09:33:28.989+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: 8de2b358-ecc2-4081-9397-2a768d9cd56a
mongodb_1      | 2021-09-30T09:33:29.248+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.0
mongodb_1      | 2021-09-30T09:33:29.249+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T09:33:29.249+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: f9112c5e-fba7-48fa-acef-79c36e77ded0
mongodb_1      | 2021-09-30T09:33:29.478+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T09:33:29.482+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | child process started successfully, parent exiting
mongodb_1      | 2021-09-30T09:33:29.513+0000 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
mongodb_1      | 2021-09-30T09:33:29.524+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: f9d11a42-32ee-41ef-a5bd-f5efe2bc4be7
mongodb_1      | 2021-09-30T09:33:30.360+0000 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
mongodb_1      | 2021-09-30T09:33:30.360+0000 I INDEX    [LogicalSessionCacheRefresh]    building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1      | 2021-09-30T09:33:30.362+0000 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:30.362+0000 I COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], $db: "config" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 849ms
mongodb_1      | 2021-09-30T09:33:30.649+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44254 #1 (1 connection now open)
mongodb_1      | 2021-09-30T09:33:30.690+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:44254 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.27" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongodb_1      | 2021-09-30T09:33:30.693+0000 I NETWORK  [conn1] end connection 127.0.0.1:44254 (0 connections now open)
mongodb_1      | 2021-09-30T09:33:30.795+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:44258 #2 (1 connection now open)
mongodb_1      | 2021-09-30T09:33:30.796+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:44258 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.27" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongodb_1      | 2021-09-30T09:33:30.908+0000 I STORAGE  [conn2] createCollection: admin.system.users with generated UUID: 4d76e43b-ca60-4b55-b768-cded51560712
mongodb_1      | 2021-09-30T09:33:31.261+0000 I COMMAND  [conn2] command admin.system.users appName: "MongoDB Shell" command: insert { insert: "system.users", bypassDocumentValidation: false, ordered: true, $db: "admin" } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { W: 3 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 353ms
mongodb_1      | 2021-09-30T09:33:31.261+0000 I COMMAND  [conn2] command admin.$cmd appName: "MongoDB Shell" command: createUser { createUser: "klicker", pwd: "xxx", roles: [ { role: "root", db: "admin" } ], digestPassword: true, writeConcern: { w: "majority", wtimeout: 600000.0 }, lsid: { id: UUID("851740e3-5482-42f9-89dc-c2f7a307396c") }, $db: "admin" } numYields:0 reslen:38 locks:{ Global: { acquireCount: { r: 6, w: 4 } }, Database: { acquireCount: { W: 4 } }, Collection: { acquireCount: { w: 3 } } } storage:{} protocol:op_msg 461ms
mongodb_1      | Successfully added user: {
mongodb_1      |    "user" : "klicker",
mongodb_1      |    "roles" : [
mongodb_1      |        {
mongodb_1      |            "role" : "root",
mongodb_1      |            "db" : "admin"
mongodb_1      |        }
mongodb_1      |    ]
mongodb_1      | }
mongodb_1      | 2021-09-30T09:33:31.262+0000 E -        [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: No such file or directory
mongodb_1      | 2021-09-30T09:33:31.267+0000 I NETWORK  [conn2] end connection 127.0.0.1:44258 (0 connections now open)
mongodb_1      |
mongodb_1      | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
mongodb_1      |
mongodb_1      | 2021-09-30T09:33:31.295+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
mongodb_1      | 2021-09-30T09:33:31.297+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | killing process with pid: 30
mongodb_1      | 2021-09-30T09:33:31.300+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T09:33:31.300+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T09:33:31.300+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T09:33:31.301+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T09:33:31.301+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T09:33:31.301+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T09:33:31.301+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T09:33:31.301+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T09:33:31.301+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T09:33:31.301+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T09:33:31.301+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T09:33:31.301+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T09:33:31.301+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T09:33:31.301+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T09:33:31.301+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T09:33:31.302+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T09:33:31.302+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T09:33:31.304+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T09:33:31.304+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T09:33:31.304+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T09:33:31.338+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T09:33:31.338+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T09:33:32.309+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T09:33:32.309+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T09:33:32.309+0000 I CONTROL  [signalProcessingThread] now exiting
mongodb_1      | 2021-09-30T09:33:32.309+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | MongoDB init process complete; ready for start up.
mongodb_1      |
mongodb_1      | 2021-09-30T09:33:33.325+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T09:33:33.329+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T09:33:33.329+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T09:33:33.329+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T09:33:33.329+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T09:33:33.329+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T09:33:33.329+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T09:33:34.247+0000 I STORAGE  [initandlisten] WiredTiger message [1632994414:247075][1:0x7f971c06da80], txn-recover: Main recovery loop: starting at 1/27392 to 2/256
mongodb_1      | 2021-09-30T09:33:34.351+0000 I STORAGE  [initandlisten] WiredTiger message [1632994414:351591][1:0x7f971c06da80], txn-recover: Recovering log 1 through 2
mongodb_1      | 2021-09-30T09:33:34.441+0000 I STORAGE  [initandlisten] WiredTiger message [1632994414:441423][1:0x7f971c06da80], txn-recover: Recovering log 2 through 2
mongodb_1      | 2021-09-30T09:33:34.494+0000 I STORAGE  [initandlisten] WiredTiger message [1632994414:494113][1:0x7f971c06da80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T09:33:34.952+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T09:33:34.967+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T09:33:35.024+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T09:33:35.028+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T09:33:35.031+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T09:33:35.236+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38404 #1 (1 connection now open)
mongodb_1      | 2021-09-30T09:33:35.241+0000 I NETWORK  [conn1] received client metadata from 172.20.0.2:38404 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.246+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38406 #2 (2 connections now open)
mongodb_1      | 2021-09-30T09:33:35.248+0000 I NETWORK  [conn2] received client metadata from 172.20.0.2:38406 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.366+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38406
mongodb_1      | 2021-09-30T09:33:35.381+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38408 #3 (3 connections now open)
mongodb_1      | 2021-09-30T09:33:35.381+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38410 #4 (4 connections now open)
mongodb_1      | 2021-09-30T09:33:35.381+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38412 #5 (5 connections now open)
mongodb_1      | 2021-09-30T09:33:35.381+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38414 #6 (6 connections now open)
mongodb_1      | 2021-09-30T09:33:35.381+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38416 #7 (7 connections now open)
mongodb_1      | 2021-09-30T09:33:35.382+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38418 #8 (8 connections now open)
mongodb_1      | 2021-09-30T09:33:35.384+0000 I NETWORK  [conn4] received client metadata from 172.20.0.2:38410 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.384+0000 I NETWORK  [conn3] received client metadata from 172.20.0.2:38408 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.384+0000 I NETWORK  [conn6] received client metadata from 172.20.0.2:38414 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.385+0000 I NETWORK  [conn7] received client metadata from 172.20.0.2:38416 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.388+0000 I NETWORK  [conn5] received client metadata from 172.20.0.2:38412 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.388+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38420 #9 (9 connections now open)
mongodb_1      | 2021-09-30T09:33:35.392+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38422 #10 (10 connections now open)
mongodb_1      | 2021-09-30T09:33:35.392+0000 I NETWORK  [conn8] received client metadata from 172.20.0.2:38418 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.396+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:38424 #11 (11 connections now open)
mongodb_1      | 2021-09-30T09:33:35.396+0000 I NETWORK  [conn9] received client metadata from 172.20.0.2:38420 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.397+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38408
mongodb_1      | 2021-09-30T09:33:35.398+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38410
mongodb_1      | 2021-09-30T09:33:35.398+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38414
mongodb_1      | 2021-09-30T09:33:35.400+0000 I NETWORK  [conn10] received client metadata from 172.20.0.2:38422 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.402+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38412
mongodb_1      | 2021-09-30T09:33:35.402+0000 I STORAGE  [conn3] createCollection: klicker.questions with generated UUID: 16c81744-bd93-4ede-b43a-788942b1e683
mongodb_1      | 2021-09-30T09:33:35.402+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38416
mongodb_1      | 2021-09-30T09:33:35.404+0000 I NETWORK  [conn11] received client metadata from 172.20.0.2:38424 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T09:33:35.407+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38420
mongodb_1      | 2021-09-30T09:33:35.408+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38422
mongodb_1      | 2021-09-30T09:33:35.409+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38418
mongodb_1      | 2021-09-30T09:33:35.410+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.2:38424
mongodb_1      | 2021-09-30T09:33:35.731+0000 I INDEX    [conn3] build index on: klicker.questions properties: { v: 2, key: { type: 1 }, name: "type_1", ns: "klicker.questions", background: true }
mongodb_1      | 2021-09-30T09:33:35.732+0000 I STORAGE  [conn4] createCollection: klicker.files with generated UUID: 37867978-4837-45c4-a274-68c0d94a918f
mongodb_1      | 2021-09-30T09:33:36.032+0000 I INDEX    [conn4] build index on: klicker.files properties: { v: 2, key: { type: 1 }, name: "type_1", ns: "klicker.files", background: true }
mongodb_1      | 2021-09-30T09:33:36.033+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:36.033+0000 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:36.033+0000 I STORAGE  [conn7] createCollection: klicker.users with generated UUID: 5a7027aa-06ff-4678-8b4f-88d3a358f786
mongodb_1      | 2021-09-30T09:33:36.265+0000 I INDEX    [conn7] build index on: klicker.users properties: { v: 2, unique: true, key: { email: 1 }, name: "email_1", ns: "klicker.users", background: true }
mongodb_1      | 2021-09-30T09:33:36.266+0000 I STORAGE  [conn2] createCollection: klicker.sessions with generated UUID: 7107a3f6-e2aa-4db3-a968-e3c41be8c832
backend_1      |         "strict": false
mongodb_1      | 2021-09-30T09:33:36.607+0000 I INDEX    [conn2] build index on: klicker.sessions properties: { v: 2, key: { name: 1 }, name: "name_1", ns: "klicker.sessions", background: true }
redis_cache_1  | 1:C 30 Sep 2021 09:33:26.243 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
backend_1      |       },
mongodb_1      | 2021-09-30T09:33:36.607+0000 I STORAGE  [conn6] createCollection: klicker.questioninstances with generated UUID: 55173924-f073-4d61-9f8c-086c867303c3
redis_cache_1  | 1:C 30 Sep 2021 09:33:26.243 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_exec_1   | 1:C 30 Sep 2021 09:33:25.461 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
backend_1      |       "byFP": {
redis_cache_1  | 1:C 30 Sep 2021 09:33:26.244 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
mongodb_1      | 2021-09-30T09:33:36.920+0000 I INDEX    [conn6] build index on: klicker.questioninstances properties: { v: 2, key: { question: 1 }, name: "question_1", ns: "klicker.questioninstances", background: true }
redis_exec_1   | 1:C 30 Sep 2021 09:33:25.462 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
backend_1      |         "enabled": true,
redis_cache_1  | 1:M 30 Sep 2021 09:33:26.245 * Running mode=standalone, port=6379.
mongodb_1      | 2021-09-30T09:33:36.921+0000 I STORAGE  [conn5] createCollection: klicker.tags with generated UUID: 78343b58-3847-4f20-ade9-0b3be0f7c5e1
redis_exec_1   | 1:C 30 Sep 2021 09:33:25.462 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
backend_1      |         "strict": false
redis_cache_1  | 1:M 30 Sep 2021 09:33:26.245 # Server initialized
mongodb_1      | 2021-09-30T09:33:37.350+0000 I INDEX    [conn5] build index on: klicker.tags properties: { v: 2, key: { name: 1 }, name: "name_1", ns: "klicker.tags", background: true }
redis_exec_1   | 1:M 30 Sep 2021 09:33:25.467 * Running mode=standalone, port=6379.
backend_1      |       }
redis_cache_1  | 1:M 30 Sep 2021 09:33:26.245 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
mongodb_1      | 2021-09-30T09:33:37.350+0000 I COMMAND  [conn3] command klicker.$cmd command: createIndexes { createIndexes: "questions", indexes: [ { name: "type_1", key: { type: 1 }, background: true } ], lsid: { id: UUID("e1d3c22d-af71-432e-a27b-a13e4c6db15e") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 1, W: 2 }, timeAcquiringMicros: { w: 300859, W: 1316622 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 1948ms
redis_exec_1   | 1:M 30 Sep 2021 09:33:25.467 # Server initialized
backend_1      |     },
redis_cache_1  | 1:M 30 Sep 2021 09:33:26.245 * Ready to accept connections
mongodb_1      | 2021-09-30T09:33:37.350+0000 I COMMAND  [conn4] command klicker.$cmd command: createIndexes { createIndexes: "files", indexes: [ { name: "type_1", key: { type: 1 }, background: true } ], lsid: { id: UUID("71b87476-c469-4c1e-a0b0-9ea8d59448e3") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { W: 2 }, timeAcquiringMicros: { W: 1646455 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 1949ms
backend_1      |     "frameguard": {
redis_cache_1  | 1:signal-handler (1632994454) Received SIGTERM scheduling shutdown...
mongodb_1      | 2021-09-30T09:33:37.350+0000 I INDEX    [conn2] build index done.  scanned 0 total records. 0 secs
redis_exec_1   | 1:M 30 Sep 2021 09:33:25.467 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
backend_1      |       "ancestors": [
redis_cache_1  | 1:M 30 Sep 2021 09:34:14.566 # User requested shutdown...
mongodb_1      | 2021-09-30T09:33:37.350+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
redis_exec_1   | 1:M 30 Sep 2021 09:33:25.467 * Ready to accept connections
backend_1      |         "'none'"
redis_cache_1  | 1:M 30 Sep 2021 09:34:14.566 * Saving the final RDB snapshot before exiting.
redis_cache_1  | 1:M 30 Sep 2021 09:34:14.696 * DB saved on disk
backend_1      |       ],
redis_cache_1  | 1:M 30 Sep 2021 09:34:14.696 # Redis is now ready to exit, bye bye...
redis_cache_1  | 1:C 30 Sep 2021 10:22:47.349 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_exec_1   | 1:signal-handler (1632994454) Received SIGTERM scheduling shutdown...
backend_1      |       "enabled": false
redis_cache_1  | 1:C 30 Sep 2021 10:22:47.349 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cache_1  | 1:C 30 Sep 2021 10:22:47.349 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_exec_1   | 1:M 30 Sep 2021 09:34:14.500 # User requested shutdown...
backend_1      |     },
redis_cache_1  | 1:M 30 Sep 2021 10:22:47.351 * Running mode=standalone, port=6379.
redis_cache_1  | 1:M 30 Sep 2021 10:22:47.351 # Server initialized
backend_1      |     "hsts": {
redis_cache_1  | 1:M 30 Sep 2021 10:22:47.351 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cache_1  | 1:M 30 Sep 2021 10:22:47.375 * DB loaded from disk: 0.023 seconds
redis_exec_1   | 1:M 30 Sep 2021 09:34:14.500 * Saving the final RDB snapshot before exiting.
backend_1      |       "enabled": false,
redis_cache_1  | 1:M 30 Sep 2021 10:22:47.375 * Ready to accept connections
redis_cache_1  | 1:signal-handler (1632997477) Received SIGTERM scheduling shutdown...
backend_1      |       "includeSubdomains": false,
redis_cache_1  | 1:M 30 Sep 2021 10:24:37.541 # User requested shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:24:37.541 * Saving the final RDB snapshot before exiting.
redis_exec_1   | 1:M 30 Sep 2021 09:34:14.687 * DB saved on disk
backend_1      |       "maxAge": 0
redis_cache_1  | 1:M 30 Sep 2021 10:24:37.781 * DB saved on disk
redis_cache_1  | 1:M 30 Sep 2021 10:24:37.781 # Redis is now ready to exit, bye bye...
redis_exec_1   | 1:M 30 Sep 2021 09:34:14.687 # Redis is now ready to exit, bye bye...
backend_1      |     },
redis_cache_1  | 1:C 30 Sep 2021 10:26:09.413 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cache_1  | 1:C 30 Sep 2021 10:26:09.413 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
backend_1      |     "rateLimit": {
redis_cache_1  | 1:C 30 Sep 2021 10:26:09.413 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_cache_1  | 1:M 30 Sep 2021 10:26:09.425 * Running mode=standalone, port=6379.
redis_exec_1   | 1:C 30 Sep 2021 10:22:45.649 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
backend_1      |       "enabled": true,
redis_cache_1  | 1:M 30 Sep 2021 10:26:09.425 # Server initialized
redis_cache_1  | 1:M 30 Sep 2021 10:26:09.425 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:C 30 Sep 2021 10:22:45.649 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
backend_1      |       "max": 2500,
redis_cache_1  | 1:M 30 Sep 2021 10:26:09.425 * DB loaded from disk: 0.000 seconds
redis_cache_1  | 1:M 30 Sep 2021 10:26:09.425 * Ready to accept connections
redis_exec_1   | 1:C 30 Sep 2021 10:22:45.649 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
backend_1      |       "windowMs": 300000
redis_cache_1  | 1:signal-handler (1632997633) Received SIGTERM scheduling shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:27:13.094 # User requested shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:27:13.094 * Saving the final RDB snapshot before exiting.
backend_1      |     }
redis_cache_1  | 1:M 30 Sep 2021 10:27:13.231 * DB saved on disk
redis_cache_1  | 1:M 30 Sep 2021 10:27:13.231 # Redis is now ready to exit, bye bye...
redis_cache_1  | 1:C 30 Sep 2021 10:28:52.327 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
backend_1      |   },
backend_1      |   "services": {
mongodb_1      | 2021-09-30T09:33:37.350+0000 I INDEX    [conn7] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:37.350+0000 I INDEX    [conn6] build index done.  scanned 0 total records. 0 secs
redis_cache_1  | 1:C 30 Sep 2021 10:28:52.327 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cache_1  | 1:C 30 Sep 2021 10:28:52.327 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_cache_1  | 1:M 30 Sep 2021 10:28:52.331 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:22:45.651 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:22:45.651 # Server initialized
redis_cache_1  | 1:M 30 Sep 2021 10:28:52.331 # Server initialized
redis_cache_1  | 1:M 30 Sep 2021 10:28:52.331 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cache_1  | 1:M 30 Sep 2021 10:28:52.332 * DB loaded from disk: 0.000 seconds
backend_1      |     "apm": {
backend_1      |       "enabled": false,
backend_1      |       "monitorDev": false,
redis_cache_1  | 1:M 30 Sep 2021 10:28:52.332 * Ready to accept connections
redis_cache_1  | 1:signal-handler (1632997809) Received SIGTERM scheduling shutdown...
redis_exec_1   | 1:M 30 Sep 2021 10:22:45.651 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 10:22:45.651 * DB loaded from disk: 0.000 seconds
redis_cache_1  | 1:M 30 Sep 2021 10:30:09.923 # User requested shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:30:09.924 * Saving the final RDB snapshot before exiting.
redis_cache_1  | 1:M 30 Sep 2021 10:30:09.999 * DB saved on disk
mongodb_1      | 2021-09-30T09:33:37.350+0000 I COMMAND  [conn2] command klicker.$cmd command: createIndexes { createIndexes: "sessions", indexes: [ { name: "name_1", key: { name: 1 }, background: true } ], lsid: { id: UUID("77387157-33d4-4591-9106-4f96831bdf82") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 1371268, W: 233247 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 1946ms
mongodb_1      | 2021-09-30T09:33:37.351+0000 I COMMAND  [conn5] command klicker.$cmd command: createIndexes { createIndexes: "tags", indexes: [ { name: "name_1", key: { name: 1 }, background: true } ], lsid: { id: UUID("c498e374-9caf-4995-b273-e9316dedce23") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 1, W: 2 }, timeAcquiringMicros: { w: 627652, W: 888308 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 1945ms
mongodb_1      | 2021-09-30T09:33:37.351+0000 I COMMAND  [conn7] command klicker.$cmd command: createIndexes { createIndexes: "users", indexes: [ { name: "email_1", key: { email: 1 }, unique: true, background: true } ], lsid: { id: UUID("2c51b9c2-25e3-4834-bb39-43a29300e1b9") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 1712303, W: 1133 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 1946ms
redis_cache_1  | 1:M 30 Sep 2021 10:30:10.000 # Redis is now ready to exit, bye bye...
redis_cache_1  | 1:C 30 Sep 2021 10:41:27.378 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
mongodb_1      | 2021-09-30T09:33:37.351+0000 I COMMAND  [conn6] command klicker.$cmd command: createIndexes { createIndexes: "questioninstances", indexes: [ { name: "question_1", key: { question: 1 }, background: true } ], lsid: { id: UUID("8259d593-d4e9-4024-bb46-120d0af7d246") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 1 }, timeAcquiringMicros: { w: 1060118, W: 574685 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 1949ms
mongodb_1      | 2021-09-30T09:33:37.431+0000 I INDEX    [conn5] build index on: klicker.questions properties: { v: 2, key: { user: 1 }, name: "user_1", ns: "klicker.questions", background: true }
redis_cache_1  | 1:C 30 Sep 2021 10:41:27.378 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cache_1  | 1:C 30 Sep 2021 10:41:27.378 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_cache_1  | 1:M 30 Sep 2021 10:41:27.381 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:22:45.651 * Ready to accept connections
redis_exec_1   | 1:signal-handler (1632997477) Received SIGTERM scheduling shutdown...
redis_exec_1   | 1:M 30 Sep 2021 10:24:37.525 # User requested shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:41:27.381 # Server initialized
redis_cache_1  | 1:M 30 Sep 2021 10:41:27.381 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 10:24:37.525 * Saving the final RDB snapshot before exiting.
redis_exec_1   | 1:M 30 Sep 2021 10:24:37.742 * DB saved on disk
redis_cache_1  | 1:M 30 Sep 2021 10:41:27.401 * DB loaded from disk: 0.020 seconds
redis_cache_1  | 1:M 30 Sep 2021 10:41:27.401 * Ready to accept connections
mongodb_1      | 2021-09-30T09:33:37.574+0000 I INDEX    [conn2] build index on: klicker.questioninstances properties: { v: 2, key: { session: 1 }, name: "session_1", ns: "klicker.questioninstances", background: true }
redis_cache_1  | 1:signal-handler (1632999123) Received SIGTERM scheduling shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:52:03.947 # User requested shutdown...
redis_exec_1   | 1:M 30 Sep 2021 10:24:37.742 # Redis is now ready to exit, bye bye...
mongodb_1      | 2021-09-30T09:33:37.633+0000 I INDEX    [conn9] build index on: klicker.sessions properties: { v: 2, key: { status: 1 }, name: "status_1", ns: "klicker.sessions", background: true }
redis_cache_1  | 1:M 30 Sep 2021 10:52:03.947 * Saving the final RDB snapshot before exiting.
backend_1      |       "secretToken": "[Sensitive]",
backend_1      |       "serviceName": "klicker-api"
mongodb_1      | 2021-09-30T09:33:37.633+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
redis_cache_1  | 1:M 30 Sep 2021 10:52:04.062 * DB saved on disk
mongodb_1      | 2021-09-30T09:33:37.708+0000 I INDEX    [conn6] build index on: klicker.tags properties: { v: 2, key: { user: 1 }, name: "user_1", ns: "klicker.tags", background: true }
redis_cache_1  | 1:M 30 Sep 2021 10:52:04.062 # Redis is now ready to exit, bye bye...
redis_cache_1  | 1:C 30 Sep 2021 10:53:17.399 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cache_1  | 1:C 30 Sep 2021 10:53:17.399 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
mongodb_1      | 2021-09-30T09:33:37.816+0000 I INDEX    [conn4] build index on: klicker.users properties: { v: 2, unique: true, key: { shortname: 1 }, name: "shortname_1", ns: "klicker.users", background: true }
redis_exec_1   | 1:C 30 Sep 2021 10:26:10.082 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_exec_1   | 1:C 30 Sep 2021 10:26:10.082 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_exec_1   | 1:C 30 Sep 2021 10:26:10.082 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_cache_1  | 1:C 30 Sep 2021 10:53:17.399 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_cache_1  | 1:M 30 Sep 2021 10:53:17.400 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:26:10.084 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:26:10.084 # Server initialized
redis_exec_1   | 1:M 30 Sep 2021 10:26:10.084 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cache_1  | 1:M 30 Sep 2021 10:53:17.400 # Server initialized
redis_cache_1  | 1:M 30 Sep 2021 10:53:17.400 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 10:26:10.084 * DB loaded from disk: 0.000 seconds
redis_exec_1   | 1:M 30 Sep 2021 10:26:10.084 * Ready to accept connections
redis_exec_1   | 1:signal-handler (1632997633) Received SIGTERM scheduling shutdown...
redis_exec_1   | 1:M 30 Sep 2021 10:27:13.142 # User requested shutdown...
redis_cache_1  | 1:M 30 Sep 2021 10:53:17.400 * DB loaded from disk: 0.000 seconds
redis_cache_1  | 1:M 30 Sep 2021 10:53:17.400 * Ready to accept connections
redis_exec_1   | 1:M 30 Sep 2021 10:27:13.142 * Saving the final RDB snapshot before exiting.
redis_exec_1   | 1:M 30 Sep 2021 10:27:13.282 * DB saved on disk
redis_exec_1   | 1:M 30 Sep 2021 10:27:13.283 # Redis is now ready to exit, bye bye...
redis_exec_1   | 1:C 30 Sep 2021 10:28:51.236 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cache_1  | 1:signal-handler (1633000586) Received SIGTERM scheduling shutdown...
redis_cache_1  | 1:M 30 Sep 2021 11:16:26.239 # User requested shutdown...
redis_exec_1   | 1:C 30 Sep 2021 10:28:51.237 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_exec_1   | 1:C 30 Sep 2021 10:28:51.237 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_exec_1   | 1:M 30 Sep 2021 10:28:51.239 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:28:51.239 # Server initialized
redis_cache_1  | 1:M 30 Sep 2021 11:16:26.239 * Saving the final RDB snapshot before exiting.
redis_cache_1  | 1:M 30 Sep 2021 11:16:26.406 * DB saved on disk
redis_exec_1   | 1:M 30 Sep 2021 10:28:51.239 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 10:28:51.239 * DB loaded from disk: 0.000 seconds
redis_exec_1   | 1:M 30 Sep 2021 10:28:51.239 * Ready to accept connections
redis_exec_1   | 1:signal-handler (1632997809) Received SIGTERM scheduling shutdown...
redis_cache_1  | 1:M 30 Sep 2021 11:16:26.406 # Redis is now ready to exit, bye bye...
redis_cache_1  | 1:C 30 Sep 2021 11:17:50.351 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_exec_1   | 1:M 30 Sep 2021 10:30:09.932 # User requested shutdown...
backend_1      |     },
redis_exec_1   | 1:M 30 Sep 2021 10:30:09.932 * Saving the final RDB snapshot before exiting.
redis_exec_1   | 1:M 30 Sep 2021 10:30:09.999 * DB saved on disk
redis_exec_1   | 1:M 30 Sep 2021 10:30:09.999 # Redis is now ready to exit, bye bye...
mongodb_1      | 2021-09-30T09:33:37.884+0000 I INDEX    [conn7] build index on: klicker.files properties: { v: 2, key: { user: 1 }, name: "user_1", ns: "klicker.files", background: true }
redis_cache_1  | 1:C 30 Sep 2021 11:17:50.351 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cache_1  | 1:C 30 Sep 2021 11:17:50.351 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_exec_1   | 1:C 30 Sep 2021 10:41:27.378 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
mongodb_1      | 2021-09-30T09:33:37.884+0000 I INDEX    [conn9] build index done.  scanned 0 total records. 0 secs
redis_cache_1  | 1:M 30 Sep 2021 11:17:50.355 * Running mode=standalone, port=6379.
redis_cache_1  | 1:M 30 Sep 2021 11:17:50.355 # Server initialized
redis_exec_1   | 1:C 30 Sep 2021 10:41:27.378 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
mongodb_1      | 2021-09-30T09:33:37.884+0000 I INDEX    [conn2] build index done.  scanned 0 total records. 0 secs
redis_exec_1   | 1:C 30 Sep 2021 10:41:27.378 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_cache_1  | 1:M 30 Sep 2021 11:17:50.355 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cache_1  | 1:M 30 Sep 2021 11:17:50.356 * DB loaded from disk: 0.000 seconds
mongodb_1      | 2021-09-30T09:33:37.885+0000 I INDEX    [conn6] build index done.  scanned 0 total records. 0 secs
redis_exec_1   | 1:M 30 Sep 2021 10:41:27.382 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:41:27.382 # Server initialized
mongodb_1      | 2021-09-30T09:33:37.885+0000 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
backend_1      |     "apolloEngine": {
backend_1      |       "apiKey": "[Sensitive]",
mongodb_1      | 2021-09-30T09:33:37.885+0000 I INDEX    [conn7] build index done.  scanned 0 total records. 0 secs
redis_exec_1   | 1:M 30 Sep 2021 10:41:27.382 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 10:41:27.409 * DB loaded from disk: 0.027 seconds
redis_exec_1   | 1:M 30 Sep 2021 10:41:27.409 * Ready to accept connections
mongodb_1      | 2021-09-30T09:33:37.885+0000 I COMMAND  [conn5] command klicker.$cmd command: createIndexes { createIndexes: "questions", indexes: [ { name: "user_1", key: { user: 1 }, background: true } ], lsid: { id: UUID("e1d3c22d-af71-432e-a27b-a13e4c6db15e") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 201915, W: 251123 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 526ms
mongodb_1      | 2021-09-30T09:33:37.885+0000 I COMMAND  [conn9] command klicker.$cmd command: createIndexes { createIndexes: "sessions", indexes: [ { name: "status_1", key: { status: 1 }, background: true } ], lsid: { id: UUID("77387157-33d4-4591-9106-4f96831bdf82") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 321150, W: 143217 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 524ms
mongodb_1      | 2021-09-30T09:33:37.885+0000 I COMMAND  [conn2] command klicker.$cmd command: createIndexes { createIndexes: "questioninstances", indexes: [ { name: "session_1", key: { session: 1 }, background: true } ], lsid: { id: UUID("8259d593-d4e9-4024-bb46-120d0af7d246") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 382377, W: 860 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 526ms
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "sentry": {
mongodb_1      | 2021-09-30T09:33:37.886+0000 I COMMAND  [conn4] command klicker.$cmd command: createIndexes { createIndexes: "users", indexes: [ { name: "shortname_1", key: { shortname: 1 }, unique: true, background: true } ], lsid: { id: UUID("2c51b9c2-25e3-4834-bb39-43a29300e1b9") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 137658, W: 277159 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 523ms
redis_exec_1   | 1:signal-handler (1632999123) Received SIGTERM scheduling shutdown...
redis_exec_1   | 1:M 30 Sep 2021 10:52:03.941 # User requested shutdown...
mongodb_1      | 2021-09-30T09:33:37.886+0000 I COMMAND  [conn7] command klicker.$cmd command: createIndexes { createIndexes: "files", indexes: [ { name: "user_1", key: { user: 1 }, background: true } ], lsid: { id: UUID("71b87476-c469-4c1e-a0b0-9ea8d59448e3") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 71890, W: 385680 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 525ms
redis_cache_1  | 1:M 30 Sep 2021 11:17:50.356 * Ready to accept connections
redis_exec_1   | 1:M 30 Sep 2021 10:52:03.941 * Saving the final RDB snapshot before exiting.
redis_exec_1   | 1:M 30 Sep 2021 10:52:04.062 * DB saved on disk
redis_exec_1   | 1:M 30 Sep 2021 10:52:04.062 # Redis is now ready to exit, bye bye...
redis_exec_1   | 1:C 30 Sep 2021 10:53:17.138 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_exec_1   | 1:C 30 Sep 2021 10:53:17.139 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_exec_1   | 1:C 30 Sep 2021 10:53:17.139 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_exec_1   | 1:M 30 Sep 2021 10:53:17.145 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 10:53:17.145 # Server initialized
redis_exec_1   | 1:M 30 Sep 2021 10:53:17.145 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 10:53:17.145 * DB loaded from disk: 0.000 seconds
redis_exec_1   | 1:M 30 Sep 2021 10:53:17.145 * Ready to accept connections
redis_exec_1   | 1:signal-handler (1633000586) Received SIGTERM scheduling shutdown...
redis_exec_1   | 1:M 30 Sep 2021 11:16:26.240 # User requested shutdown...
redis_exec_1   | 1:M 30 Sep 2021 11:16:26.240 * Saving the final RDB snapshot before exiting.
redis_exec_1   | 1:M 30 Sep 2021 11:16:26.406 * DB saved on disk
redis_exec_1   | 1:M 30 Sep 2021 11:16:26.406 # Redis is now ready to exit, bye bye...
redis_exec_1   | 1:C 30 Sep 2021 11:17:50.729 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_exec_1   | 1:C 30 Sep 2021 11:17:50.729 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_exec_1   | 1:C 30 Sep 2021 11:17:50.729 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_exec_1   | 1:M 30 Sep 2021 11:17:50.734 * Running mode=standalone, port=6379.
redis_exec_1   | 1:M 30 Sep 2021 11:17:50.734 # Server initialized
redis_exec_1   | 1:M 30 Sep 2021 11:17:50.734 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_exec_1   | 1:M 30 Sep 2021 11:17:50.734 * DB loaded from disk: 0.000 seconds
redis_exec_1   | 1:M 30 Sep 2021 11:17:50.734 * Ready to accept connections
backend_1      |       "enabled": false,
backend_1      |       "dsn": "[Sensitive]"
backend_1      |     },
backend_1      |     "slack": {
backend_1      |       "enabled": false,
backend_1      |       "webhook": "[Sensitive]"
backend_1      |     }
backend_1      |   }
backend_1      | }
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:7) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
backend_1      | [klicker-api] Shutting down server
backend_1      | [klicker-api] Successfully loaded configuration
mongodb_1      | 2021-09-30T09:33:37.886+0000 I COMMAND  [conn6] command klicker.$cmd command: createIndexes { createIndexes: "tags", indexes: [ { name: "user_1", key: { user: 1 }, background: true } ], lsid: { id: UUID("c498e374-9caf-4995-b273-e9316dedce23") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 246825, W: 202702 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 525ms
mongodb_1      | 2021-09-30T09:33:37.958+0000 I INDEX    [conn5] build index on: klicker.questions properties: { v: 2, key: { isArchived: 1 }, name: "isArchived_1", ns: "klicker.questions", background: true }
mongodb_1      | 2021-09-30T09:33:38.053+0000 I INDEX    [conn4] build index on: klicker.sessions properties: { v: 2, key: { user: 1 }, name: "user_1", ns: "klicker.sessions", background: true }
mongodb_1      | 2021-09-30T09:33:38.053+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:38.242+0000 I INDEX    [conn2] build index on: klicker.questioninstances properties: { v: 2, key: { user: 1 }, name: "user_1", ns: "klicker.questioninstances", background: true }
mongodb_1      | 2021-09-30T09:33:38.242+0000 I INDEX    [conn4] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:38.242+0000 I COMMAND  [conn5] command klicker.$cmd command: createIndexes { createIndexes: "questions", indexes: [ { name: "isArchived_1", key: { isArchived: 1 }, background: true } ], lsid: { id: UUID("e1d3c22d-af71-432e-a27b-a13e4c6db15e") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 1, W: 1 }, timeAcquiringMicros: { w: 94645, W: 188739 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 348ms
mongodb_1      | 2021-09-30T09:33:38.242+0000 I INDEX    [conn2] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:33:38.242+0000 I COMMAND  [conn4] command klicker.$cmd command: createIndexes { createIndexes: "sessions", indexes: [ { name: "user_1", key: { user: 1 }, background: true } ], lsid: { id: UUID("77387157-33d4-4591-9106-4f96831bdf82") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 251785, W: 385 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 346ms
mongodb_1      | 2021-09-30T09:33:38.243+0000 I COMMAND  [conn2] command klicker.$cmd command: createIndexes { createIndexes: "questioninstances", indexes: [ { name: "user_1", key: { user: 1 }, background: true } ], lsid: { id: UUID("8259d593-d4e9-4024-bb46-120d0af7d246") }, $db: "klicker" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 3, W: 2 }, acquireWaitCount: { w: 2, W: 2 }, timeAcquiringMicros: { w: 64072, W: 95043 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 348ms
mongodb_1      | 2021-09-30T09:33:38.321+0000 I INDEX    [conn3] build index on: klicker.questions properties: { v: 2, key: { isDeleted: 1 }, name: "isDeleted_1", ns: "klicker.questions", background: true }
mongodb_1      | 2021-09-30T09:33:38.321+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1      | 2021-09-30T09:34:14.459+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T09:34:14.460+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T09:34:14.460+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T09:34:14.460+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T09:34:14.460+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T09:34:14.460+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T09:34:14.460+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T09:34:14.460+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T09:34:14.460+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T09:34:14.460+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T09:34:14.460+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T09:34:14.460+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T09:34:14.460+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T09:34:14.460+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T09:34:14.460+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T09:34:14.460+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T09:34:14.461+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T09:34:14.463+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T09:34:14.463+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T09:34:14.463+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T09:34:14.472+0000 I NETWORK  [conn1] end connection 172.20.0.2:38404 (10 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn11] end connection 172.20.0.2:38424 (9 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn10] end connection 172.20.0.2:38422 (8 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn6] end connection 172.20.0.2:38414 (7 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn8] end connection 172.20.0.2:38418 (5 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn2] end connection 172.20.0.2:38406 (4 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn9] end connection 172.20.0.2:38420 (3 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn5] end connection 172.20.0.2:38412 (6 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn7] end connection 172.20.0.2:38416 (2 connections now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn3] end connection 172.20.0.2:38408 (1 connection now open)
mongodb_1      | 2021-09-30T09:34:14.477+0000 I NETWORK  [conn4] end connection 172.20.0.2:38410 (0 connections now open)
mongodb_1      | 2021-09-30T09:34:14.485+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T09:34:14.485+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T09:34:16.039+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T09:34:16.040+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T09:34:16.040+0000 I CONTROL  [signalProcessingThread] now exiting
backend_1      | {
backend_1      |   "app": {
backend_1      |     "baseUrl": "jscklick.zam.kfa-juelich.de",
backend_1      |     "cookieDomain": "zam.kfa-juelich.de",
backend_1      |     "domain": "appklick.zam.kfa-juelich.de",
backend_1      |     "gzip": true,
backend_1      |     "https": true,
backend_1      |     "port": 4000,
backend_1      |     "secret": "[Sensitive]",
backend_1      |     "secure": true,
backend_1      |     "trustProxy": true
backend_1      |   },
backend_1      |   "cache": {
backend_1      |     "redis": {
backend_1      |       "host": "redis_cache",
backend_1      |       "password": "[Sensitive]",
backend_1      |       "port": 6379,
backend_1      |       "tls": false
backend_1      |     },
backend_1      |     "exec": {
backend_1      |       "host": "redis_exec",
backend_1      |       "password": "[Sensitive]",
backend_1      |       "port": 6379,
backend_1      |       "tls": false
backend_1      |     }
backend_1      |   },
backend_1      |   "email": {
backend_1      |     "from": "g.schilling@fz-juelich.de",
backend_1      |     "host": "mail.fz-juelich.de",
backend_1      |     "port": 25,
backend_1      |     "user": "[Sensitive]",
backend_1      |     "password": "[Sensitive]",
backend_1      |     "secure": false
backend_1      |   },
backend_1      |   "env": "production",
backend_1      |   "mongo": {
backend_1      |     "database": "klicker",
backend_1      |     "debug": false,
backend_1      |     "url": "[Sensitive]",
backend_1      |     "user": "klicker",
backend_1      |     "password": "[Sensitive]"
backend_1      |   },
backend_1      |   "s3": {
backend_1      |     "accessKey": "[Sensitive]",
backend_1      |     "bucket": "images",
backend_1      |     "enabled": true,
backend_1      |     "endpoint": "https://s3klick.zam.kfa-juelich.de",
backend_1      |     "region": "eu-central-1",
backend_1      |     "secretKey": "[Sensitive]"
backend_1      |   },
backend_1      |   "security": {
backend_1      |     "cors": {
backend_1      |       "credentials": true,
backend_1      |       "origin": [
backend_1      |         "https://jscklick.zam.kfa-juelich.de"
backend_1      |       ]
backend_1      |     },
backend_1      |     "expectCt": {
backend_1      |       "enabled": false,
backend_1      |       "enforce": false,
backend_1      |       "maxAge": 0
backend_1      |     },
backend_1      |     "filtering": {
backend_1      |       "byIP": {
backend_1      |         "enabled": true,
backend_1      |         "strict": false
backend_1      |       },
backend_1      |       "byFP": {
backend_1      |         "enabled": true,
backend_1      |         "strict": false
backend_1      |       }
backend_1      |     },
backend_1      |     "frameguard": {
backend_1      |       "ancestors": [
backend_1      |         "'none'"
backend_1      |       ],
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "hsts": {
backend_1      |       "enabled": false,
backend_1      |       "includeSubdomains": false,
backend_1      |       "maxAge": 0
backend_1      |     },
backend_1      |     "rateLimit": {
backend_1      |       "enabled": true,
backend_1      |       "max": 2500,
backend_1      |       "windowMs": 300000
backend_1      |     }
backend_1      |   },
backend_1      |   "services": {
backend_1      |     "apm": {
backend_1      |       "enabled": false,
backend_1      |       "monitorDev": false,
backend_1      |       "secretToken": "[Sensitive]",
backend_1      |       "serviceName": "klicker-api"
backend_1      |     },
backend_1      |     "apolloEngine": {
backend_1      |       "apiKey": "[Sensitive]",
backend_1      |       "enabled": false
backend_1      |     },
backend_1      |     "sentry": {
backend_1      |       "enabled": false,
backend_1      |       "dsn": "[Sensitive]"
backend_1      |     },
backend_1      |     "slack": {
backend_1      |       "enabled": false,
backend_1      |       "webhook": "[Sensitive]"
backend_1      |     }
backend_1      |   }
backend_1      | }
backend_1      | [s3] Registered S3 storage backend
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache redis
backend_1      | [redis] Connected to cache exec
backend_1      | [klicker-api] GraphQL ready on appklick.zam.kfa-juelich.de:4000/!
backend_1      | [mongo] Connection to MongoDB established.
backend_1      | (node:8) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
backend_1      | (Use `node --trace-deprecation ...` to show where the warning was created)
mongodb_1      | 2021-09-30T09:34:16.040+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
mongodb_1      |   see https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
mongodb_1      |   see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814
mongodb_1      |
mongodb_1      | 2021-09-30T10:22:44.791+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T10:22:44.800+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T10:22:44.800+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T10:22:44.800+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T10:22:44.800+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T10:22:44.800+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T10:22:44.800+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T10:22:46.126+0000 I STORAGE  [initandlisten] WiredTiger message [1632997366:126020][1:0x7ffa2bf2ba80], txn-recover: Main recovery loop: starting at 2/69888 to 3/256
mongodb_1      | 2021-09-30T10:22:46.236+0000 I STORAGE  [initandlisten] WiredTiger message [1632997366:236774][1:0x7ffa2bf2ba80], txn-recover: Recovering log 2 through 3
mongodb_1      | 2021-09-30T10:22:46.475+0000 I STORAGE  [initandlisten] WiredTiger message [1632997366:475125][1:0x7ffa2bf2ba80], txn-recover: Recovering log 3 through 3
mongodb_1      | 2021-09-30T10:22:46.526+0000 I STORAGE  [initandlisten] WiredTiger message [1632997366:526935][1:0x7ffa2bf2ba80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T10:22:47.491+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T10:22:47.492+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:22:47.551+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:22:47.553+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T10:22:47.556+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T10:22:51.040+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56142 #1 (1 connection now open)
mongodb_1      | 2021-09-30T10:22:51.047+0000 I NETWORK  [conn1] received client metadata from 172.20.0.5:56142 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.057+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56150 #2 (2 connections now open)
mongodb_1      | 2021-09-30T10:22:51.059+0000 I NETWORK  [conn2] received client metadata from 172.20.0.5:56150 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.079+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56150
mongodb_1      | 2021-09-30T10:22:51.097+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56152 #3 (3 connections now open)
mongodb_1      | 2021-09-30T10:22:51.098+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56154 #4 (4 connections now open)
mongodb_1      | 2021-09-30T10:22:51.098+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56156 #5 (5 connections now open)
mongodb_1      | 2021-09-30T10:22:51.098+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56158 #6 (6 connections now open)
mongodb_1      | 2021-09-30T10:22:51.098+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56160 #7 (7 connections now open)
mongodb_1      | 2021-09-30T10:22:51.099+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56162 #8 (8 connections now open)
mongodb_1      | 2021-09-30T10:22:51.099+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56164 #9 (9 connections now open)
mongodb_1      | 2021-09-30T10:22:51.103+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56166 #10 (10 connections now open)
mongodb_1      | 2021-09-30T10:22:51.103+0000 I NETWORK  [conn3] received client metadata from 172.20.0.5:56152 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.103+0000 I NETWORK  [conn4] received client metadata from 172.20.0.5:56154 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.104+0000 I NETWORK  [conn5] received client metadata from 172.20.0.5:56156 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.104+0000 I NETWORK  [conn6] received client metadata from 172.20.0.5:56158 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.105+0000 I NETWORK  [conn7] received client metadata from 172.20.0.5:56160 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.105+0000 I NETWORK  [conn8] received client metadata from 172.20.0.5:56162 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.106+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:56168 #11 (11 connections now open)
mongodb_1      | 2021-09-30T10:22:51.107+0000 I NETWORK  [conn9] received client metadata from 172.20.0.5:56164 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.111+0000 I NETWORK  [conn10] received client metadata from 172.20.0.5:56166 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.116+0000 I NETWORK  [conn11] received client metadata from 172.20.0.5:56168 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:22:51.117+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56152
mongodb_1      | 2021-09-30T10:22:51.117+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56154
mongodb_1      | 2021-09-30T10:22:51.117+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56156
mongodb_1      | 2021-09-30T10:22:51.118+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56158
mongodb_1      | 2021-09-30T10:22:51.118+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56160
mongodb_1      | 2021-09-30T10:22:51.118+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56162
mongodb_1      | 2021-09-30T10:22:51.123+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56164
mongodb_1      | 2021-09-30T10:22:51.135+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56168
mongodb_1      | 2021-09-30T10:22:51.135+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.5:56166
mongodb_1      | 2021-09-30T10:24:37.470+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T10:24:37.470+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T10:24:37.470+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T10:24:37.471+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T10:24:37.471+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T10:24:37.471+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T10:24:37.471+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T10:24:37.471+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T10:24:37.471+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T10:24:37.471+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T10:24:37.471+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T10:24:37.471+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T10:24:37.471+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T10:24:37.471+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:24:37.472+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:24:37.472+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T10:24:37.472+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T10:24:37.475+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T10:24:37.475+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T10:24:37.475+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T10:24:37.475+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:24:37.475+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:24:37.486+0000 I NETWORK  [conn1] end connection 172.20.0.5:56142 (10 connections now open)
mongodb_1      | 2021-09-30T10:24:37.491+0000 I NETWORK  [conn10] end connection 172.20.0.5:56166 (9 connections now open)
mongodb_1      | 2021-09-30T10:24:37.491+0000 I NETWORK  [conn9] end connection 172.20.0.5:56164 (8 connections now open)
mongodb_1      | 2021-09-30T10:24:37.491+0000 I NETWORK  [conn11] end connection 172.20.0.5:56168 (7 connections now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn8] end connection 172.20.0.5:56162 (6 connections now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn7] end connection 172.20.0.5:56160 (5 connections now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn6] end connection 172.20.0.5:56158 (4 connections now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn5] end connection 172.20.0.5:56156 (3 connections now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn4] end connection 172.20.0.5:56154 (2 connections now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn2] end connection 172.20.0.5:56150 (1 connection now open)
mongodb_1      | 2021-09-30T10:24:37.492+0000 I NETWORK  [conn3] end connection 172.20.0.5:56152 (0 connections now open)
mongodb_1      | 2021-09-30T10:24:38.807+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T10:24:38.807+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T10:24:38.807+0000 I CONTROL  [signalProcessingThread] now exiting
mongodb_1      | 2021-09-30T10:24:38.807+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
mongodb_1      |   see https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
mongodb_1      |   see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814
mongodb_1      |
mongodb_1      | 2021-09-30T10:26:10.574+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T10:26:10.578+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T10:26:10.623+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T10:26:10.624+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T10:26:10.624+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T10:26:10.624+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T10:26:10.624+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T10:26:11.703+0000 I STORAGE  [initandlisten] WiredTiger message [1632997571:702992][1:0x7f59e45b6a80], txn-recover: Main recovery loop: starting at 3/6016 to 4/256
mongodb_1      | 2021-09-30T10:26:11.873+0000 I STORAGE  [initandlisten] WiredTiger message [1632997571:873398][1:0x7f59e45b6a80], txn-recover: Recovering log 3 through 4
mongodb_1      | 2021-09-30T10:26:11.995+0000 I STORAGE  [initandlisten] WiredTiger message [1632997571:995518][1:0x7f59e45b6a80], txn-recover: Recovering log 4 through 4
mongodb_1      | 2021-09-30T10:26:12.059+0000 I STORAGE  [initandlisten] WiredTiger message [1632997572:59678][1:0x7f59e45b6a80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T10:26:13.106+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T10:26:13.107+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:26:13.166+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:26:13.168+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T10:26:13.171+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T10:26:13.380+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35682 #1 (1 connection now open)
mongodb_1      | 2021-09-30T10:26:13.386+0000 I NETWORK  [conn1] received client metadata from 172.20.0.5:35682 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.397+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35690 #2 (2 connections now open)
mongodb_1      | 2021-09-30T10:26:13.399+0000 I NETWORK  [conn2] received client metadata from 172.20.0.5:35690 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.421+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35690
mongodb_1      | 2021-09-30T10:26:13.438+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35692 #3 (3 connections now open)
mongodb_1      | 2021-09-30T10:26:13.438+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35694 #4 (4 connections now open)
mongodb_1      | 2021-09-30T10:26:13.438+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35696 #5 (5 connections now open)
mongodb_1      | 2021-09-30T10:26:13.439+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35698 #6 (6 connections now open)
mongodb_1      | 2021-09-30T10:26:13.439+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35700 #7 (7 connections now open)
mongodb_1      | 2021-09-30T10:26:13.439+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35702 #8 (8 connections now open)
mongodb_1      | 2021-09-30T10:26:13.440+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35704 #9 (9 connections now open)
mongodb_1      | 2021-09-30T10:26:13.443+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35706 #10 (10 connections now open)
mongodb_1      | 2021-09-30T10:26:13.444+0000 I NETWORK  [conn3] received client metadata from 172.20.0.5:35692 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.444+0000 I NETWORK  [conn4] received client metadata from 172.20.0.5:35694 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.444+0000 I NETWORK  [conn5] received client metadata from 172.20.0.5:35696 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.445+0000 I NETWORK  [conn6] received client metadata from 172.20.0.5:35698 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.445+0000 I NETWORK  [conn7] received client metadata from 172.20.0.5:35700 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.446+0000 I NETWORK  [conn8] received client metadata from 172.20.0.5:35702 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.447+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:35708 #11 (11 connections now open)
mongodb_1      | 2021-09-30T10:26:13.447+0000 I NETWORK  [conn9] received client metadata from 172.20.0.5:35704 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.452+0000 I NETWORK  [conn10] received client metadata from 172.20.0.5:35706 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.458+0000 I NETWORK  [conn11] received client metadata from 172.20.0.5:35708 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:26:13.458+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35692
mongodb_1      | 2021-09-30T10:26:13.458+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35694
mongodb_1      | 2021-09-30T10:26:13.458+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35696
mongodb_1      | 2021-09-30T10:26:13.459+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35698
mongodb_1      | 2021-09-30T10:26:13.459+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35700
mongodb_1      | 2021-09-30T10:26:13.460+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35702
mongodb_1      | 2021-09-30T10:26:13.464+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35704
mongodb_1      | 2021-09-30T10:26:13.476+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35708
mongodb_1      | 2021-09-30T10:26:13.477+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.5:35706
mongodb_1      | 2021-09-30T10:27:13.046+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T10:27:13.046+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T10:27:13.046+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T10:27:13.046+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T10:27:13.046+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T10:27:13.046+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T10:27:13.046+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T10:27:13.046+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T10:27:13.047+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T10:27:13.047+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T10:27:13.047+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T10:27:13.047+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T10:27:13.047+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T10:27:13.047+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:27:13.047+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:27:13.047+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T10:27:13.047+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T10:27:13.049+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T10:27:13.049+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T10:27:13.049+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T10:27:13.059+0000 I NETWORK  [conn1] end connection 172.20.0.5:35682 (10 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn10] end connection 172.20.0.5:35706 (9 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn11] end connection 172.20.0.5:35708 (8 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn8] end connection 172.20.0.5:35702 (7 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn6] end connection 172.20.0.5:35698 (5 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn7] end connection 172.20.0.5:35700 (6 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn5] end connection 172.20.0.5:35696 (4 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn4] end connection 172.20.0.5:35694 (3 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn2] end connection 172.20.0.5:35690 (2 connections now open)
mongodb_1      | 2021-09-30T10:27:13.065+0000 I NETWORK  [conn3] end connection 172.20.0.5:35692 (1 connection now open)
mongodb_1      | 2021-09-30T10:27:13.066+0000 I NETWORK  [conn9] end connection 172.20.0.5:35704 (0 connections now open)
mongodb_1      | 2021-09-30T10:27:13.109+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:27:13.110+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:27:14.165+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T10:27:14.165+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T10:27:14.165+0000 I CONTROL  [signalProcessingThread] now exiting
mongodb_1      | 2021-09-30T10:27:14.165+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | 2021-09-30T10:28:50.595+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T10:28:50.607+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T10:28:50.608+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T10:28:50.608+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T10:28:50.608+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T10:28:50.608+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T10:28:50.608+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T10:28:50.608+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T10:28:51.690+0000 I STORAGE  [initandlisten] WiredTiger message [1632997731:690878][1:0x7f20cdadea80], txn-recover: Main recovery loop: starting at 4/4864 to 5/256
mongodb_1      | 2021-09-30T10:28:51.803+0000 I STORAGE  [initandlisten] WiredTiger message [1632997731:803349][1:0x7f20cdadea80], txn-recover: Recovering log 4 through 5
mongodb_1      | 2021-09-30T10:28:52.011+0000 I STORAGE  [initandlisten] WiredTiger message [1632997732:11425][1:0x7f20cdadea80], txn-recover: Recovering log 5 through 5
mongodb_1      | 2021-09-30T10:28:52.074+0000 I STORAGE  [initandlisten] WiredTiger message [1632997732:74872][1:0x7f20cdadea80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T10:28:52.471+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T10:28:52.473+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:28:52.531+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:28:52.534+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T10:28:52.539+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T10:28:52.814+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47696 #1 (1 connection now open)
mongodb_1      | 2021-09-30T10:28:52.814+0000 I NETWORK  [conn1] received client metadata from 172.20.0.2:47696 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.820+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47698 #2 (2 connections now open)
mongodb_1      | 2021-09-30T10:28:52.823+0000 I NETWORK  [conn2] received client metadata from 172.20.0.2:47698 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.846+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47698
mongodb_1      | 2021-09-30T10:28:52.867+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47700 #3 (3 connections now open)
mongodb_1      | 2021-09-30T10:28:52.868+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47702 #4 (4 connections now open)
mongodb_1      | 2021-09-30T10:28:52.869+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47704 #5 (5 connections now open)
mongodb_1      | 2021-09-30T10:28:52.869+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47706 #6 (6 connections now open)
mongodb_1      | 2021-09-30T10:28:52.870+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47708 #7 (7 connections now open)
mongodb_1      | 2021-09-30T10:28:52.870+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47710 #8 (8 connections now open)
mongodb_1      | 2021-09-30T10:28:52.871+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47712 #9 (9 connections now open)
mongodb_1      | 2021-09-30T10:28:52.872+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47714 #10 (10 connections now open)
mongodb_1      | 2021-09-30T10:28:52.873+0000 I NETWORK  [conn3] received client metadata from 172.20.0.2:47700 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.874+0000 I NETWORK  [listener] connection accepted from 172.20.0.2:47716 #11 (11 connections now open)
mongodb_1      | 2021-09-30T10:28:52.874+0000 I NETWORK  [conn5] received client metadata from 172.20.0.2:47704 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.874+0000 I NETWORK  [conn6] received client metadata from 172.20.0.2:47706 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.875+0000 I NETWORK  [conn4] received client metadata from 172.20.0.2:47702 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.876+0000 I NETWORK  [conn8] received client metadata from 172.20.0.2:47710 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.876+0000 I NETWORK  [conn9] received client metadata from 172.20.0.2:47712 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.878+0000 I NETWORK  [conn10] received client metadata from 172.20.0.2:47714 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.879+0000 I NETWORK  [conn7] received client metadata from 172.20.0.2:47708 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.882+0000 I NETWORK  [conn11] received client metadata from 172.20.0.2:47716 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:28:52.889+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47700
mongodb_1      | 2021-09-30T10:28:52.890+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47704
mongodb_1      | 2021-09-30T10:28:52.890+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47706
mongodb_1      | 2021-09-30T10:28:52.891+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47710
mongodb_1      | 2021-09-30T10:28:52.891+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47712
mongodb_1      | 2021-09-30T10:28:52.898+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47714
mongodb_1      | 2021-09-30T10:28:52.898+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47708
mongodb_1      | 2021-09-30T10:28:52.899+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47702
mongodb_1      | 2021-09-30T10:28:52.916+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.2:47716
mongodb_1      | 2021-09-30T10:30:09.851+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T10:30:09.851+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T10:30:09.851+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T10:30:09.851+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T10:30:09.851+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T10:30:09.851+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T10:30:09.851+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T10:30:09.851+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T10:30:09.852+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T10:30:09.852+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T10:30:09.852+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T10:30:09.852+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T10:30:09.852+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T10:30:09.852+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:30:09.852+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:30:09.852+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T10:30:09.852+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T10:30:09.853+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T10:30:09.854+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T10:30:09.854+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T10:30:09.854+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:30:09.854+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:30:09.854+0000 I NETWORK  [conn1] end connection 172.20.0.2:47696 (10 connections now open)
mongodb_1      | 2021-09-30T10:30:09.861+0000 I NETWORK  [conn10] end connection 172.20.0.2:47714 (9 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn9] end connection 172.20.0.2:47712 (8 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn11] end connection 172.20.0.2:47716 (7 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn2] end connection 172.20.0.2:47698 (6 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn4] end connection 172.20.0.2:47702 (5 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn3] end connection 172.20.0.2:47700 (4 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn7] end connection 172.20.0.2:47708 (3 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn5] end connection 172.20.0.2:47704 (2 connections now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn8] end connection 172.20.0.2:47710 (1 connection now open)
mongodb_1      | 2021-09-30T10:30:09.862+0000 I NETWORK  [conn6] end connection 172.20.0.2:47706 (0 connections now open)
mongodb_1      | 2021-09-30T10:30:10.666+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T10:30:10.666+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T10:30:10.666+0000 I CONTROL  [signalProcessingThread] now exiting
mongodb_1      | 2021-09-30T10:30:10.666+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | 2021-09-30T10:41:31.640+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T10:41:31.738+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T10:41:31.747+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T10:41:31.747+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T10:41:31.748+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T10:41:31.748+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T10:41:31.748+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T10:41:33.275+0000 I STORAGE  [initandlisten] WiredTiger message [1632998493:275903][1:0x7f8bdda97a80], txn-recover: Main recovery loop: starting at 5/6016 to 6/256
mongodb_1      | 2021-09-30T10:41:33.378+0000 I STORAGE  [initandlisten] WiredTiger message [1632998493:378705][1:0x7f8bdda97a80], txn-recover: Recovering log 5 through 6
mongodb_1      | 2021-09-30T10:41:33.808+0000 I STORAGE  [initandlisten] WiredTiger message [1632998493:808707][1:0x7f8bdda97a80], txn-recover: Recovering log 6 through 6
mongodb_1      | 2021-09-30T10:41:33.859+0000 I STORAGE  [initandlisten] WiredTiger message [1632998493:859943][1:0x7f8bdda97a80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T10:41:35.010+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T10:41:35.024+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:41:35.453+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:41:35.497+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T10:41:35.528+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T10:41:45.692+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34856 #1 (1 connection now open)
mongodb_1      | 2021-09-30T10:41:45.701+0000 I NETWORK  [conn1] received client metadata from 172.20.0.5:34856 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.713+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34864 #2 (2 connections now open)
mongodb_1      | 2021-09-30T10:41:45.724+0000 I NETWORK  [conn2] received client metadata from 172.20.0.5:34864 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.746+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34864
mongodb_1      | 2021-09-30T10:41:45.763+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34866 #3 (3 connections now open)
mongodb_1      | 2021-09-30T10:41:45.764+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34868 #4 (4 connections now open)
mongodb_1      | 2021-09-30T10:41:45.765+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34870 #5 (5 connections now open)
mongodb_1      | 2021-09-30T10:41:45.765+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34872 #6 (6 connections now open)
mongodb_1      | 2021-09-30T10:41:45.765+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34874 #7 (7 connections now open)
mongodb_1      | 2021-09-30T10:41:45.765+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34876 #8 (8 connections now open)
mongodb_1      | 2021-09-30T10:41:45.766+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34878 #9 (9 connections now open)
mongodb_1      | 2021-09-30T10:41:45.769+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34880 #10 (10 connections now open)
mongodb_1      | 2021-09-30T10:41:45.770+0000 I NETWORK  [conn3] received client metadata from 172.20.0.5:34866 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.770+0000 I NETWORK  [conn4] received client metadata from 172.20.0.5:34868 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.770+0000 I NETWORK  [conn5] received client metadata from 172.20.0.5:34870 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.771+0000 I NETWORK  [conn6] received client metadata from 172.20.0.5:34872 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.771+0000 I NETWORK  [conn7] received client metadata from 172.20.0.5:34874 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.771+0000 I NETWORK  [conn8] received client metadata from 172.20.0.5:34876 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.773+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:34882 #11 (11 connections now open)
mongodb_1      | 2021-09-30T10:41:45.773+0000 I NETWORK  [conn9] received client metadata from 172.20.0.5:34878 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.777+0000 I NETWORK  [conn10] received client metadata from 172.20.0.5:34880 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.783+0000 I NETWORK  [conn11] received client metadata from 172.20.0.5:34882 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:41:45.783+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34866
mongodb_1      | 2021-09-30T10:41:45.783+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34868
mongodb_1      | 2021-09-30T10:41:45.784+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34870
mongodb_1      | 2021-09-30T10:41:45.784+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34872
mongodb_1      | 2021-09-30T10:41:45.784+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34874
mongodb_1      | 2021-09-30T10:41:45.785+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34876
mongodb_1      | 2021-09-30T10:41:45.789+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34878
mongodb_1      | 2021-09-30T10:41:45.800+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34882
mongodb_1      | 2021-09-30T10:41:45.800+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.5:34880
mongodb_1      | 2021-09-30T10:41:45.900+0000 I COMMAND  [conn5] command klicker.$cmd command: createIndexes { createIndexes: "files", indexes: [ { name: "$**_1", key: { $**: 1 }, background: true } ], lsid: { id: UUID("c802f81a-d3f8-440a-bc13-d6c14ce9b59c") }, $db: "klicker" } numYields:0 ok:0 errMsg:"Index key contains an illegal field name: field name starts with '$'." errName:CannotCreateIndex errCode:67 reslen:162 locks:{} protocol:op_msg 105ms
mongodb_1      | 2021-09-30T10:41:45.901+0000 I COMMAND  [conn9] command klicker.$cmd command: createIndexes { createIndexes: "users", indexes: [ { name: "$**_1", key: { $**: 1 }, background: true } ], lsid: { id: UUID("47016db2-b57c-4e47-a4a4-6e91f29e8da2") }, $db: "klicker" } numYields:0 ok:0 errMsg:"Index key contains an illegal field name: field name starts with '$'." errName:CannotCreateIndex errCode:67 reslen:162 locks:{} protocol:op_msg 102ms
mongodb_1      | 2021-09-30T10:52:03.902+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T10:52:03.902+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T10:52:03.902+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T10:52:03.902+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T10:52:03.902+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T10:52:03.902+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T10:52:03.903+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T10:52:03.903+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T10:52:03.903+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T10:52:03.903+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T10:52:03.903+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T10:52:03.903+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T10:52:03.903+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T10:52:03.903+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:52:03.903+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T10:52:03.903+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T10:52:03.903+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T10:52:03.906+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T10:52:03.906+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T10:52:03.906+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T10:52:03.906+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:52:03.906+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T10:52:03.921+0000 I NETWORK  [conn1] end connection 172.20.0.5:34856 (10 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn10] end connection 172.20.0.5:34880 (9 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn11] end connection 172.20.0.5:34882 (7 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn8] end connection 172.20.0.5:34876 (6 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn5] end connection 172.20.0.5:34870 (5 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn9] end connection 172.20.0.5:34878 (8 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn6] end connection 172.20.0.5:34872 (4 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn7] end connection 172.20.0.5:34874 (3 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn3] end connection 172.20.0.5:34866 (2 connections now open)
mongodb_1      | 2021-09-30T10:52:03.927+0000 I NETWORK  [conn4] end connection 172.20.0.5:34868 (1 connection now open)
mongodb_1      | 2021-09-30T10:52:03.928+0000 I NETWORK  [conn2] end connection 172.20.0.5:34864 (0 connections now open)
mongodb_1      | 2021-09-30T10:52:04.244+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T10:52:04.244+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T10:52:04.244+0000 I CONTROL  [signalProcessingThread] now exiting
mongodb_1      | 2021-09-30T10:52:04.244+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
mongodb_1      |   see https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
mongodb_1      |   see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814
mongodb_1      |
mongodb_1      | 2021-09-30T10:53:14.518+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T10:53:14.522+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T10:53:15.270+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T10:53:15.270+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T10:53:15.270+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T10:53:15.270+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T10:53:15.270+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T10:53:16.465+0000 I STORAGE  [initandlisten] WiredTiger message [1632999196:465278][1:0x7f5059726a80], txn-recover: Main recovery loop: starting at 6/11136 to 7/256
mongodb_1      | 2021-09-30T10:53:16.570+0000 I STORAGE  [initandlisten] WiredTiger message [1632999196:570548][1:0x7f5059726a80], txn-recover: Recovering log 6 through 7
mongodb_1      | 2021-09-30T10:53:16.692+0000 I STORAGE  [initandlisten] WiredTiger message [1632999196:692336][1:0x7f5059726a80], txn-recover: Recovering log 7 through 7
mongodb_1      | 2021-09-30T10:53:16.746+0000 I STORAGE  [initandlisten] WiredTiger message [1632999196:746703][1:0x7f5059726a80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T10:53:17.351+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T10:53:17.352+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:53:17.672+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T10:53:17.674+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T10:53:17.683+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T10:53:20.287+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33000 #1 (1 connection now open)
mongodb_1      | 2021-09-30T10:53:20.294+0000 I NETWORK  [conn1] received client metadata from 172.20.0.5:33000 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.304+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33008 #2 (2 connections now open)
mongodb_1      | 2021-09-30T10:53:20.306+0000 I NETWORK  [conn2] received client metadata from 172.20.0.5:33008 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.326+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33008
mongodb_1      | 2021-09-30T10:53:20.344+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33010 #3 (3 connections now open)
mongodb_1      | 2021-09-30T10:53:20.344+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33012 #4 (4 connections now open)
mongodb_1      | 2021-09-30T10:53:20.344+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33014 #5 (5 connections now open)
mongodb_1      | 2021-09-30T10:53:20.345+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33016 #6 (6 connections now open)
mongodb_1      | 2021-09-30T10:53:20.345+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33018 #7 (7 connections now open)
mongodb_1      | 2021-09-30T10:53:20.345+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33020 #8 (8 connections now open)
mongodb_1      | 2021-09-30T10:53:20.349+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33022 #9 (9 connections now open)
mongodb_1      | 2021-09-30T10:53:20.349+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33024 #10 (10 connections now open)
mongodb_1      | 2021-09-30T10:53:20.349+0000 I NETWORK  [conn3] received client metadata from 172.20.0.5:33010 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.350+0000 I NETWORK  [conn4] received client metadata from 172.20.0.5:33012 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.350+0000 I NETWORK  [conn5] received client metadata from 172.20.0.5:33014 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.351+0000 I NETWORK  [conn6] received client metadata from 172.20.0.5:33016 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.351+0000 I NETWORK  [conn7] received client metadata from 172.20.0.5:33018 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.351+0000 I NETWORK  [conn8] received client metadata from 172.20.0.5:33020 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.352+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33026 #11 (11 connections now open)
mongodb_1      | 2021-09-30T10:53:20.356+0000 I NETWORK  [conn9] received client metadata from 172.20.0.5:33022 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.357+0000 I NETWORK  [conn10] received client metadata from 172.20.0.5:33024 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.361+0000 I NETWORK  [conn11] received client metadata from 172.20.0.5:33026 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T10:53:20.362+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33010
mongodb_1      | 2021-09-30T10:53:20.362+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33012
mongodb_1      | 2021-09-30T10:53:20.362+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33014
mongodb_1      | 2021-09-30T10:53:20.363+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33016
mongodb_1      | 2021-09-30T10:53:20.363+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33018
mongodb_1      | 2021-09-30T10:53:20.367+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33020
mongodb_1      | 2021-09-30T10:53:20.374+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33022
mongodb_1      | 2021-09-30T10:53:20.380+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33024
mongodb_1      | 2021-09-30T10:53:20.382+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33026
mongodb_1      | 2021-09-30T11:16:26.205+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
mongodb_1      | 2021-09-30T11:16:26.205+0000 I CONTROL  [signalProcessingThread] Shutdown started
mongodb_1      | 2021-09-30T11:16:26.205+0000 I REPL     [signalProcessingThread] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
mongodb_1      | 2021-09-30T11:16:26.205+0000 I CONTROL  [signalProcessingThread] Shutting down the LogicalSessionCache
mongodb_1      | 2021-09-30T11:16:26.207+0000 I NETWORK  [conn1] end connection 172.20.0.5:33000 (10 connections now open)
mongodb_1      | 2021-09-30T11:16:26.207+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
mongodb_1      | 2021-09-30T11:16:26.207+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
mongodb_1      | 2021-09-30T11:16:26.207+0000 I NETWORK  [signalProcessingThread] Shutting down the global connection pool
mongodb_1      | 2021-09-30T11:16:26.207+0000 I STORAGE  [signalProcessingThread] Shutting down the PeriodicThreadToAbortExpiredTransactions
mongodb_1      | 2021-09-30T11:16:26.207+0000 I REPL     [signalProcessingThread] Shutting down the ReplicationCoordinator
mongodb_1      | 2021-09-30T11:16:26.207+0000 I SHARDING [signalProcessingThread] Shutting down the ShardingInitializationMongoD
mongodb_1      | 2021-09-30T11:16:26.207+0000 I COMMAND  [signalProcessingThread] Killing all open transactions
mongodb_1      | 2021-09-30T11:16:26.207+0000 I -        [signalProcessingThread] Killing all operations for shutdown
mongodb_1      | 2021-09-30T11:16:26.207+0000 I NETWORK  [signalProcessingThread] Shutting down the ReplicaSetMonitor
mongodb_1      | 2021-09-30T11:16:26.207+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T11:16:26.207+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
mongodb_1      | 2021-09-30T11:16:26.208+0000 I FTDC     [signalProcessingThread] Shutting down full-time data capture
mongodb_1      | 2021-09-30T11:16:26.208+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
mongodb_1      | 2021-09-30T11:16:26.210+0000 I STORAGE  [signalProcessingThread] Shutting down the HealthLog
mongodb_1      | 2021-09-30T11:16:26.210+0000 I STORAGE  [signalProcessingThread] Shutting down the storage engine
mongodb_1      | 2021-09-30T11:16:26.210+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
mongodb_1      | 2021-09-30T11:16:26.210+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
mongodb_1      | 2021-09-30T11:16:26.210+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn10] end connection 172.20.0.5:33024 (9 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn9] end connection 172.20.0.5:33022 (8 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn3] end connection 172.20.0.5:33010 (7 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn5] end connection 172.20.0.5:33014 (6 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn8] end connection 172.20.0.5:33020 (5 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn2] end connection 172.20.0.5:33008 (3 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn7] end connection 172.20.0.5:33018 (2 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn6] end connection 172.20.0.5:33016 (0 connections now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn11] end connection 172.20.0.5:33026 (1 connection now open)
mongodb_1      | 2021-09-30T11:16:26.212+0000 I NETWORK  [conn4] end connection 172.20.0.5:33012 (4 connections now open)
mongodb_1      | 2021-09-30T11:16:26.725+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
mongodb_1      | 2021-09-30T11:16:26.725+0000 I -        [signalProcessingThread] Dropping the scope cache for shutdown
mongodb_1      | 2021-09-30T11:16:26.725+0000 I CONTROL  [signalProcessingThread] now exiting
mongodb_1      | 2021-09-30T11:16:26.725+0000 I CONTROL  [signalProcessingThread] shutting down with code:0
mongodb_1      |
mongodb_1      | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
mongodb_1      |   see https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
mongodb_1      |   see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814
mongodb_1      |
mongodb_1      | 2021-09-30T11:17:49.774+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ba63467807ae
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] db version v4.0.27
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] modules: none
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] build environment:
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1      | 2021-09-30T11:17:49.782+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, security: { authorization: "enabled" } }
mongodb_1      | 2021-09-30T11:17:49.782+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongodb_1      | 2021-09-30T11:17:49.783+0000 I STORAGE  [initandlisten]
mongodb_1      | 2021-09-30T11:17:49.783+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1      | 2021-09-30T11:17:49.783+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1      | 2021-09-30T11:17:49.783+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=481M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1      | 2021-09-30T11:17:50.796+0000 I STORAGE  [initandlisten] WiredTiger message [1633000670:796158][1:0x7fea6c647a80], txn-recover: Main recovery loop: starting at 7/12544 to 8/256
mongodb_1      | 2021-09-30T11:17:50.898+0000 I STORAGE  [initandlisten] WiredTiger message [1633000670:898832][1:0x7fea6c647a80], txn-recover: Recovering log 7 through 8
mongodb_1      | 2021-09-30T11:17:51.155+0000 I STORAGE  [initandlisten] WiredTiger message [1633000671:155592][1:0x7fea6c647a80], txn-recover: Recovering log 8 through 8
mongodb_1      | 2021-09-30T11:17:51.208+0000 I STORAGE  [initandlisten] WiredTiger message [1633000671:208586][1:0x7fea6c647a80], txn-recover: Set global recovery timestamp: 0
mongodb_1      | 2021-09-30T11:17:51.891+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1      | 2021-09-30T11:17:51.902+0000 I STORAGE  [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T11:17:52.096+0000 I STORAGE  [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongodb_1      | 2021-09-30T11:17:52.106+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1      | 2021-09-30T11:17:52.111+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1      | 2021-09-30T11:17:53.796+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33036 #1 (1 connection now open)
mongodb_1      | 2021-09-30T11:17:53.803+0000 I NETWORK  [conn1] received client metadata from 172.20.0.5:33036 conn1: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.814+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33044 #2 (2 connections now open)
mongodb_1      | 2021-09-30T11:17:53.816+0000 I NETWORK  [conn2] received client metadata from 172.20.0.5:33044 conn2: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.837+0000 I ACCESS   [conn2] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33044
mongodb_1      | 2021-09-30T11:17:53.855+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33046 #3 (3 connections now open)
mongodb_1      | 2021-09-30T11:17:53.856+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33048 #4 (4 connections now open)
mongodb_1      | 2021-09-30T11:17:53.856+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33050 #5 (5 connections now open)
mongodb_1      | 2021-09-30T11:17:53.856+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33052 #6 (6 connections now open)
mongodb_1      | 2021-09-30T11:17:53.857+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33054 #7 (7 connections now open)
mongodb_1      | 2021-09-30T11:17:53.857+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33056 #8 (8 connections now open)
mongodb_1      | 2021-09-30T11:17:53.857+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33058 #9 (9 connections now open)
mongodb_1      | 2021-09-30T11:17:53.861+0000 I NETWORK  [conn3] received client metadata from 172.20.0.5:33046 conn3: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.861+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33060 #10 (10 connections now open)
mongodb_1      | 2021-09-30T11:17:53.861+0000 I NETWORK  [conn4] received client metadata from 172.20.0.5:33048 conn4: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.862+0000 I NETWORK  [conn6] received client metadata from 172.20.0.5:33052 conn6: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.862+0000 I NETWORK  [conn7] received client metadata from 172.20.0.5:33054 conn7: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.863+0000 I NETWORK  [conn8] received client metadata from 172.20.0.5:33056 conn8: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.863+0000 I NETWORK  [conn5] received client metadata from 172.20.0.5:33050 conn5: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.864+0000 I NETWORK  [listener] connection accepted from 172.20.0.5:33062 #11 (11 connections now open)
mongodb_1      | 2021-09-30T11:17:53.865+0000 I NETWORK  [conn9] received client metadata from 172.20.0.5:33058 conn9: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.868+0000 I NETWORK  [conn10] received client metadata from 172.20.0.5:33060 conn10: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.874+0000 I NETWORK  [conn11] received client metadata from 172.20.0.5:33062 conn11: { driver: { name: "nodejs|Mongoose", version: "3.6.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "5.4.0-88-generic" }, platform: "'Node.js v14.17.6, LE (unified)", version: "3.6.11|5.13.7" }
mongodb_1      | 2021-09-30T11:17:53.874+0000 I ACCESS   [conn4] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33048
mongodb_1      | 2021-09-30T11:17:53.874+0000 I ACCESS   [conn3] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33046
mongodb_1      | 2021-09-30T11:17:53.875+0000 I ACCESS   [conn6] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33052
mongodb_1      | 2021-09-30T11:17:53.875+0000 I ACCESS   [conn7] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33054
mongodb_1      | 2021-09-30T11:17:53.875+0000 I ACCESS   [conn8] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33056
mongodb_1      | 2021-09-30T11:17:53.880+0000 I ACCESS   [conn5] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33050
mongodb_1      | 2021-09-30T11:17:53.880+0000 I ACCESS   [conn9] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33058
mongodb_1      | 2021-09-30T11:17:53.891+0000 I ACCESS   [conn10] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33060
mongodb_1      | 2021-09-30T11:17:53.892+0000 I ACCESS   [conn11] Successfully authenticated as principal klicker on admin from client 172.20.0.5:33062
root@jscklick:~/neu#

The credentials should not be critical, as I am initially using the default values on this test system.

The main argument for trying out your software is data protection. If a few gamification features were added (for example, a kind of leaderboard shown after the end of a session), there would no longer be any reason to use commercial products whose data sits somewhere abroad or is evaluated for advertising purposes.

Cheers, Schorsch

rschlaefli commented 2 years ago

Hi @georgschilling,

Thanks for the logs. I can't find any visible issue that should prevent the services from running. How does your proxy setup work? Is it inside Docker Compose (e.g., an Nginx container) or external to Docker? The fact that the container ports (3000 and 4000) do not show up in netstat is expected, as the ports are not published in our Compose template (you would need to uncomment them). I created the template with the intention of the proxy being within Compose, so that it can access the services via their Docker names from within the same Docker network.

In case you are using an external proxy (e.g., installed on the host OS), you could publish the container ports by uncommenting the relevant lines in the Compose file. They will then show up in netstat and can be routed to from the proxy as well.

I am not sure whether that is the actual problem, but I was unable to find any clear issues in the logs. Please let me know if that still does not solve the problem.

Best, Roland

georgschilling commented 2 years ago

Good morning @rschlaefli, many thanks for your response. My feeling tells me that it is due to the network configuration. Currently, exactly those services that cannot be reached are in a state other than "normal":

root@jscklick:~/neu# docker-compose up -d
Starting neu_backend_1     ... done
Starting neu_redis_cache_1 ... done
Starting neu_mongodb_1     ... done
Starting neu_redis_exec_1  ... done
Starting neu_minio_1       ... done
Starting neu_frontend_1    ... done
root@jscklick:~/neu#
root@jscklick:~/neu# docker ps
CONTAINER ID   IMAGE                                              COMMAND                  CREATED      STATUS                            PORTS                                                 NAMES
27052f47b02a   ghcr.io/uzh-bf/klicker-uzh/frontend:v1.6.1         "/sbin/tini -- node…"    3 days ago   Up 6 seconds (health: starting)   3000/tcp                                              neu_frontend_1
bb9539061d26   quay.io/minio/minio:RELEASE.2021-09-15T04-54-25Z   "/usr/bin/docker-ent…"   3 days ago   Up 7 seconds                      9000/tcp, 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp   neu_minio_1
d00dfeb27d8e   ghcr.io/uzh-bf/klicker-uzh/backend:v1.6.1          "/sbin/tini -- node…"    3 days ago   Up 9 seconds (health: starting)   4000/tcp                                              neu_backend_1
6f6c28a9a646   redis:5.0.9                                        "docker-entrypoint.s…"   3 days ago   Up 6 seconds                      6379/tcp                                              neu_redis_exec_1
ba63467807ae   mongo:4.0                                          "docker-entrypoint.s…"   3 days ago   Up 7 seconds                      27017/tcp                                             neu_mongodb_1
9acf393c0c24   redis:5.0.9                                        "docker-entrypoint.s…"   3 days ago   Up 8 seconds                      6379/tcp                                              neu_redis_cache_1
root@jscklick:~/neu#

What does the correct setup look like? In my test environment, Ubuntu runs on the system zam153.zam.kfa-juelich.de. On this system there are DNS aliases for every URL from the docker-compose.yml: appklick.zam.kfa-juelich.de, jscklick.zam.kfa-juelich.de, s3klick.zam.kfa-juelich.de

docker-compose.yml:

version: '3.7'

services:
  # proxy for domain names with SSL termination
  # the example config for the KlickerUZH includes the following rules:
  # app.klicker.uzh.ch -> frontend on port 3000
  # api.klicker.uzh.ch -> backend on port 4000
  # s3.klicker.uzh.ch -> minio on port 9000
  # TODO: add an example nginx configuration with domain names and letsencrypt
  # proxy:
  #   image: nginx:1.21.3

  # the frontend for KlickerUZH
  frontend:
    restart: always
    image: ghcr.io/uzh-bf/klicker-uzh/frontend:v1.6.1
    # ports:
    #   - 3000:3000
    environment:
      API_ENDPOINT: https://jscklick.zam.kfa-juelich.de/graphql
      API_ENDPOINT_WS: wss://jscklick.zam.kfa-juelich.de/graphql
      APP_BASE_URL: https://jscklick.zam.kfa-juelich.de
      APP_JOIN_URL: jscklick.zam.kfa-juelich.de/join
      APP_TRUST_PROXY: 'true'
      APP_WITH_AAI: 'false'
      CACHE_REDIS_ENABLED: 'true'
      CACHE_REDIS_HOST: redis_cache
      CACHE_REDIS_TLS: 'false'
      S3_ROOT_URL: https://s3klick.zam.kfa-juelich.de/images
    networks:
      - klicker

  # the backend for KlickerUZH
  backend:
    restart: always
    image: ghcr.io/uzh-bf/klicker-uzh/backend:v1.6.1
    # ports:
    #   - 4000:4000
    environment:
      APP_BASE_URL: jscklick.zam.kfa-juelich.de
      APP_COOKIE_DOMAIN: zam.kfa-juelich.de
      APP_DOMAIN: appklick.zam.kfa-juelich.de
      APP_HTTPS: 'true'
      APP_SECURE: 'true'
      APP_SECRET: 'PASSWORD'
      APP_TRUST_PROXY: 'true'
      CACHE_REDIS_HOST: redis_cache
      CACHE_REDIS_PORT: 6379
      CACHE_REDIS_TLS: 'false'
      EXEC_REDIS_HOST: redis_exec
      EXEC_REDIS_PORT: 6379
      EXEC_REDIS_TLS: 'false'
      EMAIL_FROM: 'mail@fz-juelich.de'
      EMAIL_HOST: 'mail.fz-juelich.de'
      EMAIL_PORT: '25'
      EMAIL_USER:
      EMAIL_PASSWORD:
      MONGO_URL: mongodb:27017/klicker?authSource=admin
      MONGO_USER: klicker
      MONGO_PASSWORD: PASSWORD
      S3_ENABLED: 'true'
      S3_ACCESS_KEY: minioadmin
      S3_SECRET_KEY: minioadmin
      S3_ENDPOINT: https://s3klick.zam.kfa-juelich.de
      S3_BUCKET: images
      SECURITY_CORS_CREDENTIALS: 'true'
      SECURITY_CORS_ORIGIN: https://jscklick.zam.kfa-juelich.de
      SECURITY_HSTS_ENABLED: 'false'
      SECURITY_RATE_LIMIT_ENABLED: 'true'
    networks:
      - klicker

  # redis instance to support session execution
  # instance data must be persisted
  redis_exec:
    restart: always
    image: redis:5.0.9
    # ports:
    #   - 6379:6379
    volumes:
      - redis-data:/data
    networks:
      - klicker

  # redis instance for page caching and rate limiting
  # this instance does not require persistence
  redis_cache:
    restart: always
    image: redis:5.0.9
    # ports:
    #   - 6379:6379
    networks:
      - klicker

  # mongodb database
  # it is recommended to run this service outside of docker
  mongodb:
    restart: always
    image: mongo:4.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: klicker
      MONGO_INITDB_ROOT_PASSWORD: PASSWORD
    # ports:
    #   - 27017:27017
    networks:
      - klicker

  # minio storage platform for S3
  minio:
    restart: always
    image: quay.io/minio/minio:RELEASE.2021-09-15T04-54-25Z
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: PASSWORD
      MINIO_API_CORS_ALLOW_ORIGIN: https://jscklick.zam.kfa-juelich.de
    ports:
      # - 9000:9000
      - 9001:9001
    volumes:
      - minio-data:/data
    command: server /data --console-address ":9001"

volumes:
  redis-data:
  minio-data:

networks:
  klicker:

The software runs on the Ubuntu host in Docker Compose. There is an SSL certificate for each service, and the forwarding for the individual services is set up in an Nginx on the local Ubuntu host (I have limited the output below to the forwarding rules):

server {
    listen 443 ssl http2;
    server_name jscklick.zam.kfa-juelich.de;
    location / {
        proxy_pass http://127.0.0.1:3000;

server {
    listen 443 ssl http2;
    server_name appklick.zam.kfa-juelich.de;
    location / {
        proxy_pass http://127.0.0.1:4000;

server {
    listen 443 ssl http2;
    server_name s3klick.zam.kfa-juelich.de;
    location / {
        proxy_pass http://127.0.0.1:9000;
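
Written out in full, one of these server blocks looks roughly like this (a sketch only, not the exact config from this host; certificate directives and the forwarded headers are filled in as plausible defaults):

```nginx
server {
    listen 443 ssl http2;
    server_name jscklick.zam.kfa-juelich.de;

    # ssl_certificate / ssl_certificate_key directives go here

    location / {
        proxy_pass http://127.0.0.1:3000;
        # pass the original host and client information through to the app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```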

This raises a few questions for me: are the locations right at all? How does a container know that it has to listen on that hostname/port? Do the IPs of the individual containers have to go in here?

Can you shed some light on this? I'm confused... 🤣

Thank you and best regards from Juelich, Georg

rschlaefli commented 2 years ago

Hi @georgschilling,

Thanks for the additional info. That makes it clear.

Your ports are not published in the Docker Compose file (e.g., lines 4 and 5 in the excerpt below are commented out) and are thus not reachable from outside the Docker network, not even from the host itself. That is why the frontend is not available via localhost:3000 on the host (and likewise the backend on 4000 and S3 on 9000):

frontend:
  restart: always
  image: ghcr.io/uzh-bf/klicker-uzh/frontend:v1.6.1
  # ports:
  #   - 3000:3000

There are two main approaches to work around this:

  1. Publish the ports by uncommenting these lines (for 3000, 4000, and 9000 respectively) and keep the proxy config you already have (and use a firewall so that 3000, 4000, and 9000 are not reachable directly, only through the proxy). These ports are commented out and thus not published by default in our Compose example, as the example is meant as a first step toward the second approach below.
  2. Move your Nginx proxy into the Docker network (e.g., I found this article to be a good example: https://www.domysee.com/blogposts/reverse-proxy-nginx-docker-compose). This would allow you to remove the published ports and use the Docker service names in your proxy_pass config (e.g., proxy_pass http://frontend:3000).
  3. There might be more complicated networking approaches that would allow you to avoid publishing the ports as in 1., but I would suggest one of the above for simplicity.
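
As a sketch of approach 2 (the mounted file names and paths here are illustrative assumptions, not taken from our repository), the proxy would become another service in the Compose file, attached to the same network, so that it can reach the other containers by their service names:

```yaml
# hypothetical addition to docker-compose.yml: run the proxy inside the
# "klicker" network so it can reach the services by their Docker names
proxy:
  image: nginx:1.21.3
  restart: always
  ports:
    - 443:443
  volumes:
    # nginx.conf and the certificates are assumed to exist on the host
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
    - ./certs:/etc/nginx/certs:ro
  networks:
    - klicker
```

With this in place, the proxy_pass targets become http://frontend:3000, http://backend:4000, and http://minio:9000, and none of the application ports need to be published on the host.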

Hope this helps!

Best, Roland

georgschilling commented 2 years ago

@rschlaefli, I am very grateful to you. The solution was to uncomment the port mappings for 3000, 4000 and 9000 in the Compose file ... at least that was almost the whole solution ... ;-) The hostname of the API still had a wrong name in the Compose file, and because of the Nginx routing it ended up on port 3000 instead of 4000, which led to a JSON error during registration. But that is now also fixed.

The firewall is set up and my colleagues can now take a look around.

One more question: where can I adapt the texts / emails on the "start page"? And where are the data protection regulations, etc.? Bildschirmfoto 2021-10-04 um 11 46 17

Thank you once again and sunny greetings from Juelich, Georg

rschlaefli commented 2 years ago

Great. I will adapt the Compose example based on our findings here.

Regarding customization: there has not been a lot of work in that regard, as you would probably be the first to put their own instance online (at least that I know of). I will have a look at customization options later this week (currently on holiday), and we best discuss your specific requirements (what you need to customize) again if you decide to go to production.

Regarding the terms and conditions and the Privacy Policy, we have links on our landing page (www.klicker.uzh.ch). We could add customizable links to the index page (the one you sent a screenshot of) or somewhere in the app as well. There are also transactional emails that are sent out on registration etc., so customization options might be a bit of work.

rschlaefli commented 2 years ago

Also @georgschilling

You might find it useful to open the MongoDB directly (e.g., using MongoDB Compass) and change the role of your initial user to "ADMIN". This will enable an admin view of all users in the instance and of all running sessions (these views are not fully developed yet but provide some initial functionality).
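
If you prefer the command line, a sketch using the mongo shell inside the running container could look like this (the users collection, the role field, and the email address are assumptions on my part, so verify them against your instance first):

```shell
# run from the Compose project directory against the mongodb service
docker-compose exec mongodb mongo klicker \
  -u klicker -p PASSWORD --authenticationDatabase admin \
  --eval 'db.users.updateOne({ email: "you@example.org" }, { $set: { role: "ADMIN" } })'
```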

rschlaefli commented 2 years ago

Hey @georgschilling

I realized yesterday that redis_exec will also need to run with a specific command telling it to persist data. For the exec instance it is crucial that data is persisted, as session results are stored there while a session is running.

You can add the following under redis_exec in Docker Compose:

redis_exec:
  ...
  command: redis-server --save 60 1 --loglevel warning
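
To check that the new setting is active after recreating the container, you can query the effective configuration (assuming the service name redis_exec as above); it should report the configured save point, i.e. "60 1":

```shell
docker-compose exec redis_exec redis-cli CONFIG GET save
```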

Also, @agp8x has kindly provided us with a more extensive Docker Compose example that includes Traefik as a proxy and as a Docker container, which might be more appropriate when going to production from a policy/security point of view. Feel free to have a look at https://github.com/uzh-bf/klicker-uzh/pull/2550 and the new example in https://github.com/uzh-bf/klicker-uzh/tree/dev/deploy/compose-traefik-proxy (I will add a README there as well, currently the files are provided as-is).

Furthermore, the above PR also includes a fix that makes the MongoDB volume persistent (otherwise data would be lost when the container is recreated). You will want to add this as well when you go to production (if you add it to your test setup you might lose some data, so back up MongoDB first and restore it after recreating the container):

mongodb:
  ...
  volumes:
    - mongo-data:/data/db

volumes:
  ...
  mongo-data:
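
A backup and restore around the recreate could look roughly like this (container name, credentials, and paths are assumptions based on the setup in this thread):

```shell
# dump the klicker database from the running container, then copy it to the host
docker-compose exec mongodb mongodump --db klicker \
  -u klicker -p PASSWORD --authenticationDatabase admin --out /data/db/backup
docker cp neu_mongodb_1:/data/db/backup ./mongo-backup

# after adding the volume and recreating the container:
docker cp ./mongo-backup neu_mongodb_1:/data/db/backup
docker-compose exec mongodb mongorestore \
  -u klicker -p PASSWORD --authenticationDatabase admin /data/db/backup
```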
georgschilling commented 2 years ago

@rschlaefli ,

Thank you for your commitment, your information and your support. I think the idea of using all this information to create a usable, well-explained installation guide is a charming one.

I will approach you again at the end of next week at the earliest. We can also write by email. Send me your contact details: g.schilling@fz-juelich.de

Cheers from Juelich, Georg Schilling