docker-library / mongo

Docker Official Image packaging for MongoDB
https://www.mongodb.org/
Apache License 2.0

Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error #323

Closed: leon0707 closed this issue 2 years ago

leon0707 commented 5 years ago

Dockerfile

FROM mongo
ADD ./mongo-init.sh /docker-entrypoint-initdb.d/
mongodb_1  | about to fork child process, waiting until server is ready for connections.
mongodb_1  | forked process: 29
mongodb_1  | 2019-01-15T21:20:47.367+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
mongodb_1  | 2019-01-15T21:20:47.370+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] MongoDB starting : pid=29 port=27017 dbpath=/data/db 64-bit host=343629fa23cd
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] db version v4.0.5
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] git version: 3739429dd92b92d1b0ab120911a23d50bf03c412
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] modules: none
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] build environment:
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1  | 2019-01-15T21:20:47.374+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1", port: 27017, ssl: { mode: "disabled" } }, processManagement: { fork: true, pidFilePath: "/tmp/docker-entrypoint-temp-mongod.pid" }, systemLog: { destination: "file", logAppend: true, path: "/proc/1/fd/1" } }
mongodb_1  | 2019-01-15T21:20:47.374+0000 I STORAGE  [initandlisten]
mongodb_1  | 2019-01-15T21:20:47.374+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongodb_1  | 2019-01-15T21:20:47.374+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
mongodb_1  | 2019-01-15T21:20:47.375+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=487M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongodb_1  | 2019-01-15T21:20:47.866+0000 I STORAGE  [initandlisten] WiredTiger message [1547587247:866155][29:0x7eff7bb21a40], txn-recover: Set global recovery timestamp: 0
mongodb_1  | 2019-01-15T21:20:47.873+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
mongodb_1  | 2019-01-15T21:20:47.883+0000 I CONTROL  [initandlisten]
mongodb_1  | 2019-01-15T21:20:47.883+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
mongodb_1  | 2019-01-15T21:20:47.883+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
mongodb_1  | 2019-01-15T21:20:47.883+0000 I CONTROL  [initandlisten]
mongodb_1  | 2019-01-15T21:20:47.883+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: 6019a1c5-fc8c-45ed-9355-5677a9ed6c47
mongodb_1  | 2019-01-15T21:20:47.892+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.0
mongodb_1  | 2019-01-15T21:20:47.897+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: bc5c6df0-c572-4cbe-815c-daebc8c7f596
mongodb_1  | 2019-01-15T21:20:47.909+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1  | 2019-01-15T21:20:47.911+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1  | 2019-01-15T21:20:47.911+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: b7cba875-b5b1-4633-9d4e-944a52b3c84a
mongodb_1  | child process started successfully, parent exiting
mongodb_1  | 2019-01-15T21:20:47.925+0000 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
mongodb_1  | 2019-01-15T21:20:47.925+0000 I INDEX    [LogicalSessionCacheRefresh]    building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1  | 2019-01-15T21:20:47.926+0000 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
mongodb_1  | 2019-01-15T21:20:47.969+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:36626 #1 (1 connection now open)
mongodb_1  | 2019-01-15T21:20:47.969+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:36626 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongodb_1  | 2019-01-15T21:20:47.972+0000 I NETWORK  [conn1] end connection 127.0.0.1:36626 (0 connections now open)
mongodb_1  | 2019-01-15T21:20:48.032+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:36628 #2 (1 connection now open)
mongodb_1  | 2019-01-15T21:20:48.032+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:36628 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongodb_1  | 2019-01-15T21:20:48.065+0000 I STORAGE  [conn2] createCollection: admin.system.users with generated UUID: 0ec16545-de93-480f-ad22-184f838e30b6
mongodb_1  | Successfully added user: {
mongodb_1  |    "user" : "root",
mongodb_1  |    "roles" : [
mongodb_1  |        {
mongodb_1  |            "role" : "root",
mongodb_1  |            "db" : "admin"
mongodb_1  |        }
mongodb_1  |    ]
mongodb_1  | }
mongodb_1  | 2019-01-15T21:20:48.084+0000 E -        [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error
wglambert commented 5 years ago

Looks like a bug that was fixed in 3.5.10, and you're encountering it on 4.0.5 https://jira.mongodb.org/browse/SERVER-26871 https://jira.mongodb.org/browse/SERVER-32473

Starting a mongo:4.0.5 container doesn't show any errors for me

$ docker run -d --rm --name mongo mongo:4.0.5
d7b0f2308d4be6acdb874a0b11dd2abb96608857e296b3397209963bd70f7388

$ docker logs mongo
2019-01-16T17:53:09.971+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'                 
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=d7b0f2308d4b                          
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] db version v4.0.5                                                                                     
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] git version: 3739429dd92b92d1b0ab120911a23d50bf03c412                                                 
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016                                                           
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] allocator: tcmalloc                                                                                   
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] modules: none                                                                                         
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] build environment:                                                                                    
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604                                                                               
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten]     distarch: x86_64                                                                                  
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten]     target_arch: x86_64                                                                               
2019-01-16T17:53:09.973+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true } }                                                                 
2019-01-16T17:53:09.973+0000 I STORAGE  [initandlisten]                                                                                                       
2019-01-16T17:53:09.973+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine       
2019-01-16T17:53:09.973+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem                                   
2019-01-16T17:53:09.973+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=979M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),                                                                                                                             
2019-01-16T17:53:10.598+0000 I STORAGE  [initandlisten] WiredTiger message [1547661190:598541][1:0x7f40fc54da40], txn-recover: Set global recovery timestamp: 0                                                                                                                                                             
2019-01-16T17:53:10.721+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)                                                     
2019-01-16T17:53:10.799+0000 I CONTROL  [initandlisten]                                                                                                       
2019-01-16T17:53:10.799+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.                                           
2019-01-16T17:53:10.799+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.                          
2019-01-16T17:53:10.799+0000 I CONTROL  [initandlisten]                                                                                                       
2019-01-16T17:53:10.800+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: 52806025-e5ad-4578-8216-1de9d92f2248       
2019-01-16T17:53:10.904+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.0                                                            
2019-01-16T17:53:10.914+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: beb27e97-eee9-44ac-86f0-379cce977e11         
2019-01-16T17:53:10.955+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'              
2019-01-16T17:53:10.958+0000 I NETWORK  [initandlisten] waiting for connections on port 27017                                                                 
2019-01-16T17:53:10.959+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: 34c27039-408e-4147-98b0-1cf7487db169                                                                                                                                                     
2019-01-16T17:53:11.030+0000 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }                                                                                         
2019-01-16T17:53:11.030+0000 I INDEX    [LogicalSessionCacheRefresh]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM                                                                                                                                                             
2019-01-16T17:53:11.034+0000 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
leon0707 commented 5 years ago

@wglambert I created two secrets for the mongo container: MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD. The container automatically creates the root user. The error message showed up between adding the root user and the execution of the custom init script.

wglambert commented 5 years ago

So the error is shown here when the mongo shell does something like create a user and then tries to write its command history to a path that doesn't exist.
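
Roughly, that init phase amounts to something like the following (a simplified sketch, not the actual docker-entrypoint.sh; flags are taken from the log above):

```sh
# Simplified sketch of the init phase (not the real docker-entrypoint.sh).
mongod --fork --bind_ip 127.0.0.1 --port 27017 \
       --logpath /proc/1/fd/1 --pidfilepath /tmp/docker-entrypoint-temp-mongod.pid

# This shell runs as the "mongodb" user. On exit it tries to append its
# command history to ~/.dbshell (/home/mongodb/.dbshell), and that
# directory does not exist in the image, hence the error.
mongo admin --eval "db.createUser({
  user: '$MONGO_INITDB_ROOT_USERNAME',
  pwd:  '$MONGO_INITDB_ROOT_PASSWORD',
  roles: [ { role: 'root', db: 'admin' } ]
})"

# ...then any /docker-entrypoint-initdb.d/ scripts run, the temporary
# server is shut down, and the real mongod starts in the foreground.
```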

tianon commented 5 years ago

Ouch, looks like https://jira.mongodb.org/browse/SERVER-29103 is related -- ideally we'd want to disable the "history file" for these commands, but there's not actually a way to do that without making the history file completely unusable by default. :disappointed:
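
Since the history file can't be cleanly disabled, the underlying cause is simply that /home/mongodb does not exist in the image. That is easy to confirm (the image tag is just an example):

```console
$ docker run --rm mongo:4.0.5 bash -c 'getent passwd mongodb; ls -ld /home/mongodb'
# expected output (roughly): a passwd entry whose home directory is
# /home/mongodb, followed by
# "ls: cannot access '/home/mongodb': No such file or directory"
```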

prokhorovn commented 5 years ago

I'm dealing with exactly the same error on the mongo image v4.0.6. The first "clean" run of the container with the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables successfully creates the admin user. But after stopping the container and starting it again, this error comes up:

2019-03-19T08:21:18.911+0000 E QUERY    [js] Error: couldn't add user: command createUser requires authentication :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1491:15
@(shell):1:1
2019-03-19T08:21:18.912+0000 E -        [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error

Manually hacking the container and removing the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables from it resolves the issue. So it seems the mongo image tries to create the user from MONGO_INITDB_ROOT_USERNAME even if the DB was already initialized. This is very confusing, especially considering the docs on Docker Hub, which say:

Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.

How do I set up the first superuser properly, so that the cluster doesn't break after a stop?

UPD: the issue happens when running mongod as a configsvr instance.

lu911 commented 5 years ago

+1

grammy-jiang commented 5 years ago

Hi, all,

I ran into this problem today, using the docker image tag 4.0.9.

Problem

I used docker exec -it container-mongo /bin/bash to go inside and found that there is no home folder for the mongodb user.

Yes, in the error message:

2019-03-19T08:21:18.912+0000 E - [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error

The folder /home/mongodb does not exist.

Solution

Then my solution is:

  1. Create a folder manually on your host, which will be mounted as a volume
  2. Assign mongodb as the owner for this folder
  3. Mount this folder in the docker-compose.yml as the home folder /home/mongodb (see the compose fragment after the commands below)

Reminder: don't use the user and group names from your host when assigning ownership; use the numeric user and group IDs instead.

You can get the mongodb user and group IDs used in the docker image:

root@4778d894024d:/# id mongodb
uid=999(mongodb) gid=999(mongodb) groups=999(mongodb)

After these steps, the error message is gone, and .dbshell shows up in the folder owned by 999:999.

One line command:

mkdir mongo-home && \
sudo chown `docker run --rm mongo:latest id -u mongodb`:`docker run --rm mongo:latest id -g mongodb` mongo-home
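
For step 3, the docker-compose.yml fragment might look like this (service name and image tag are just examples):

```yaml
services:
  mongodb:
    image: mongo:4.0.9
    volumes:
      # mongo-home is the host folder created and chown'ed above
      - ./mongo-home:/home/mongodb
```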

Why

I checked the Dockerfile and found that the creation of the mongodb user is missing the -m argument:

root@4778d894024d:/# docker run --rm mongo useradd --help
Usage: useradd [options] LOGIN
       useradd -D
       useradd -D [options]

Options:
  ...
  -m, --create-home             create the user's home directory
  ...
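
Until the official image creates that directory itself, a derived image can add it; a minimal sketch (the base tag is just an example):

```dockerfile
# Hypothetical derived image: create the home directory that `useradd -m`
# would have created, and give it to the mongodb user (uid/gid 999).
FROM mongo:4.0.9
RUN mkdir -p /home/mongodb && chown mongodb:mongodb /home/mongodb
```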

sadokmtir commented 4 years ago

+1

guillaumelachaud commented 4 years ago

Is everybody using the manual folder workaround? Should we proceed with a PR to add the -m option to the Dockerfile?

tianon commented 4 years ago

From what I've seen, the error doesn't prevent anything from working, so it's really more of a warning -- is someone else seeing behavior different from that?

aminnairi commented 4 years ago

As of today, I still have this issue with the latest image of mongo.

For people using Docker Compose, here is what I did.

1. Create the folder ./home/mongodb on your local filesystem.

$ mkdir -p ./home/mongodb

2. Create the file ./home/mongodb/.dbshell

$ touch ./home/mongodb/.dbshell

3. Change the ownership of the folder to match the user and group used in the Dockerfile.

As noted above, the mongodb user and group IDs are both 999.

$ chown -R 999:999 ./home/mongodb

4. Add a volume to your Docker Compose service (mine is called barret for context).

version: "3"

services:
    barret:
        container_name: barret
        image: mongo:latest
        ports:
            - $MONGO_PORT:27017
        environment:
            MONGO_DATABASE_USERNAME: $MONGO_DATABASE_USERNAME
            MONGO_DATABASE_PASSWORD: $MONGO_DATABASE_PASSWORD
            MONGO_DATABASE_NAME: $MONGO_DATABASE_NAME
            MONGO_INITDB_ROOT_USERNAME: $MONGO_ROOT_USERNAME
            MONGO_INITDB_ROOT_PASSWORD: $MONGO_ROOT_PASSWORD
        volumes:
            - ./home/mongodb:/home/mongodb
            - ./database/migrations:/docker-entrypoint-initdb.d
            - ./data/db:/data/db
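
The $MONGO_… references above are resolved by Compose, typically from a .env file next to the docker-compose.yml; for example (every value here is a placeholder):

```
MONGO_PORT=27017
MONGO_DATABASE_USERNAME=appuser
MONGO_DATABASE_PASSWORD=change-me
MONGO_DATABASE_NAME=app
MONGO_ROOT_USERNAME=root
MONGO_ROOT_PASSWORD=change-me
```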
jhnieman commented 4 years ago

I just updated Docker Desktop (Windows) from 2.2.0.4 to 2.2.0.5 and this problem went away 🤷‍♂️

jhnieman commented 4 years ago

Ha fair question. Comment updated! I updated Docker Desktop (Windows) from 2.2.0.4 to 2.2.0.5.

On Sun, May 3, 2020 at 1:33 AM Daniel Shmuglin notifications@github.com wrote:

@jhnieman https://github.com/jhnieman may I ask what did you update? what are these magic numbers?

10x


andreibastos commented 4 years ago

As of today, I still have this issue with the latest image of mongo.

For people using Docker Compose, here is what I did. [...]

Thanks, it works! But you don't need to create the host path ./home/mongodb; it can be another path, like ./data, with chown -R $USER ./data.

daviddetorres commented 4 years ago

Hi,

I found the same problem while passing the arguments for MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD as environment variables in a kubernetes deployment.

Is there a reason why there is not a PR with the solution proposed here (https://github.com/docker-library/mongo/issues/323#issuecomment-494648458)? Should we create it?

tianon commented 4 years ago

From what I've seen, the error doesn't prevent anything from working, so it's really more of a warning -- is someone else seeing behavior different from that?

daviddetorres commented 4 years ago

Correct... Sorry.

It seems I was having another problem, related to the base64 encoding of the secret inserting a \n at the end of the password (https://github.com/docker-library/mongo/issues/346).

bouthouri commented 4 years ago

I tried all the above and nothing worked for me. I switched my init file from sh to js and everything is fine now. The problem now is that I have to figure out how to use env variables in the js file.
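
For what it's worth, one common pattern is to keep the init file as a .sh script (or wrap the JS in one), since *.sh scripts in /docker-entrypoint-initdb.d/ run with the container's environment and can interpolate variables into the JavaScript they feed to the shell. A rough sketch (file and variable names are examples, it assumes MONGO_INITDB_DATABASE is set, and depending on the image version the shell call may also need root credentials):

```sh
#!/bin/bash
# Hypothetical /docker-entrypoint-initdb.d/create-app-user.sh
set -e

mongo <<EOF
db = db.getSiblingDB("$MONGO_INITDB_DATABASE");
db.createUser({
  user: "$MONGO_DATABASE_USERNAME",
  pwd: "$MONGO_DATABASE_PASSWORD",
  roles: [ { role: "readWrite", db: "$MONGO_INITDB_DATABASE" } ]
});
EOF
```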

yosifkit commented 4 years ago

I tried all the above and nothing worked for me. I switched my init file from sh to js and everything is fine now.

From what I've seen, the error doesn't prevent anything from working, so it's really more of a warning

It would be helpful to have a reproducer where the Error saving history file is the cause of the container exiting early.

StardustXiaoT commented 3 years ago

Any progress on this?

- MONGO_INITDB_DATABASE=dbname
- MONGO_INITDB_ROOT_USERNAME=username
- MONGO_INITDB_ROOT_PASSWORD=password

It works on the first initialization. But once mongo is restarted, killed, or redeployed, it tries to recreate the user, which causes an error, and mongodb fails to start afterwards.

tianon commented 3 years ago

@StardustXiaoT I'm having a hard time seeing how what you describe is related to the error message/warning being discussed here, which is Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error?

It sounds like you're having some usage issues which would be better raised in the Docker Community Forums, the Docker Community Slack, or Stack Overflow.

gabbersepp commented 3 years ago

From what I've seen, the error doesn't prevent anything from working, so it's really more of a warning -- is someone else seeing behavior different from that?

For me this error stopped mongo from creating the root user specified by the env variables. The mentioned workaround (mapping /home/mongodb) worked and after cleaning the whole mongo directory and doing a fresh initialize everything works as expected!

tianon commented 3 years ago

Given your experience, I was hoping that maybe having a read-only /home/mongodb would do the trick and give us a reproducer, but no such luck:

Full Log: ```console $ docker run -it --rm --read-only --tmpfs /tmp --env MONGO_INITDB_ROOT_USERNAME=mongoadmin --env MONGO_INITDB_ROOT_PASSWORD=secret mongo --quiet about to fork child process, waiting until server is ready for connections. forked process: 26 {"t":{"$date":"2021-03-03T00:32:03.198+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"} {"t":{"$date":"2021-03-03T00:32:03.199+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":26,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"4a127ec477d1"}} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.4","gitVersion":"8db30a63db1a9d84bdcad0c83369623f708e0397","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"127.0.0.1","port":27017,"tls":{"mode":"disabled"}},"processManagement":{"fork":true,"pidFilePath":"/tmp/docker-entrypoint-temp-mongod.pid"},"systemLog":{"destination":"file","logAppend":true,"path":"/proc/1/fd/1","quiet":true}}}} {"t":{"$date":"2021-03-03T00:32:03.202+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]} {"t":{"$date":"2021-03-03T00:32:03.203+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=31585M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-03-03T00:32:03.582+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731523:582074][26:0x7f534b762ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-03-03T00:32:03.582+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731523:582153][26:0x7f534b762ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-03-03T00:32:03.591+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":388}} {"t":{"$date":"2021-03-03T00:32:03.591+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:03.605+00:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}} {"t":{"$date":"2021-03-03T00:32:03.605+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-03-03T00:32:03.610+00:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-03-03T00:32:03.610+00:00"},"s":"W", "c":"CONTROL", "id":22178, "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
We suggest setting it to 'never'","tags":["startupWarnings"]} {"t":{"$date":"2021-03-03T00:32:03.611+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"admin.system.version","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"0864d194-4382-497f-9403-7cc726f90c96"}},"options":{"uuid":{"$uuid":"0864d194-4382-497f-9403-7cc726f90c96"}}}} {"t":{"$date":"2021-03-03T00:32:03.622+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.version","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:03.622+00:00"},"s":"I", "c":"COMMAND", "id":20459, "ctx":"initandlisten","msg":"Setting featureCompatibilityVersion","attr":{"newVersion":"4.4"}} {"t":{"$date":"2021-03-03T00:32:03.622+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-03-03T00:32:03.623+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"9b5e00d3-d9f7-4e39-a9c3-a49dc715cb1d"}},"options":{"capped":true,"size":10485760}}} {"t":{"$date":"2021-03-03T00:32:03.646+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.startup_log","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:03.646+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}} {"t":{"$date":"2021-03-03T00:32:03.652+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NamespaceNotFound: config.system.sessions does not exist"}} {"t":{"$date":"2021-03-03T00:32:03.652+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"LogicalSessionCacheRefresh","msg":"createCollection","attr":{"namespace":"config.system.sessions","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"e3302a0a-8f44-4a9e-b7ef-5015c318b15d"}},"options":{}}} {"t":{"$date":"2021-03-03T00:32:03.652+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}} {"t":{"$date":"2021-03-03T00:32:03.652+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-03-03T00:32:03.652+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} child process started successfully, parent exiting {"t":{"$date":"2021-03-03T00:32:03.687+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"config.system.sessions","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:03.687+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"config.system.sessions","index":"lsidTTLIndex","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:03.737+00:00"},"s":"I", "c":"NETWORK", "id":51800, 
"ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:38474","client":"conn1","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.4"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"18.04"}}}} {"t":{"$date":"2021-03-03T00:32:03.786+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"127.0.0.1:38476","client":"conn2","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.4"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"18.04"}}}} {"t":{"$date":"2021-03-03T00:32:03.816+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"conn2","msg":"createCollection","attr":{"namespace":"admin.system.users","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"8f7c15e1-5a97-48db-8e55-e52ef6b6aa12"}},"options":{}}} {"t":{"$date":"2021-03-03T00:32:03.831+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"conn2","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.users","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:03.831+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"conn2","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.users","index":"user_1_db_1","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}} Successfully added user: { "user" : "mongoadmin", "roles" : [ { "role" : "root", "db" : "admin" } ] } Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: No such file or directory /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* {"t":{"$date":"2021-03-03T00:32:03.846+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"} {"t":{"$date":"2021-03-03T00:32:03.848+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} killing process with pid: 26 {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"CONTROL", "id":23377, "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"CONTROL", "id":23378, "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":80,"uid":999}} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"CONTROL", "id":23381, "ctx":"SignalHandler","msg":"will terminate after current cmd ends"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"CONTROL", "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"NETWORK", "id":23017, "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"REPL", "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"REPL", "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"-", "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"-", "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":3}} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"COMMAND", "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"REPL", "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the 
ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"INDEX", "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"REPL", "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"REPL", "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"SignalHandler","msg":"Shutting down free monitoring"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"CONTROL", "id":20609, "ctx":"SignalHandler","msg":"Shutting down free monitoring"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"} {"t":{"$date":"2021-03-03T00:32:03.849+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"} {"t":{"$date":"2021-03-03T00:32:03.850+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":24}} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for 
shutdown"} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"} {"t":{"$date":"2021-03-03T00:32:03.874+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}} MongoDB init process complete; ready for start up. {"t":{"$date":"2021-03-03T00:32:04.919+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-03-03T00:32:04.921+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-03-03T00:32:04.921+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."} {"t":{"$date":"2021-03-03T00:32:04.921+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"4a127ec477d1"}} {"t":{"$date":"2021-03-03T00:32:04.921+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.4","gitVersion":"8db30a63db1a9d84bdcad0c83369623f708e0397","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-03-03T00:32:04.921+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}} {"t":{"$date":"2021-03-03T00:32:04.921+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"},"systemLog":{"quiet":true}}}} {"t":{"$date":"2021-03-03T00:32:04.922+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}} {"t":{"$date":"2021-03-03T00:32:04.922+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]} {"t":{"$date":"2021-03-03T00:32:04.922+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=31585M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-03-03T00:32:05.298+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:298152][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}} {"t":{"$date":"2021-03-03T00:32:05.335+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:335693][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}} {"t":{"$date":"2021-03-03T00:32:05.373+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:373490][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/29952 to 2/256"}} {"t":{"$date":"2021-03-03T00:32:05.415+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:415871][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}} {"t":{"$date":"2021-03-03T00:32:05.442+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:442619][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}} {"t":{"$date":"2021-03-03T00:32:05.462+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:462911][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-03-03T00:32:05.462+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1614731525:462941][1:0x7fc91ca31ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-03-03T00:32:05.489+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":567}} {"t":{"$date":"2021-03-03T00:32:05.489+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-03-03T00:32:05.490+00:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}} {"t":{"$date":"2021-03-03T00:32:05.490+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-03-03T00:32:05.494+00:00"},"s":"W", "c":"CONTROL", "id":22178, "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
We suggest setting it to 'never'","tags":["startupWarnings"]} {"t":{"$date":"2021-03-03T00:32:05.495+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-03-03T00:32:05.495+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}} {"t":{"$date":"2021-03-03T00:32:05.496+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}} {"t":{"$date":"2021-03-03T00:32:05.496+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}} {"t":{"$date":"2021-03-03T00:32:05.496+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} ```

(So in short, we're still looking for a reliable reproducer.)

tianon commented 3 years ago

Adding --mount type=volume,src=nonsense,dst=/home/mongodb,ro makes the error message slightly different (Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: Read-only file system), but the whole run still succeeds and MongoDB starts correctly.
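
Reconstructed from the run above, the full command for that test looks roughly like:

```console
$ docker run -it --rm --read-only --tmpfs /tmp \
    --mount type=volume,src=nonsense,dst=/home/mongodb,ro \
    --env MONGO_INITDB_ROOT_USERNAME=mongoadmin \
    --env MONGO_INITDB_ROOT_PASSWORD=secret \
    mongo --quiet
```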

du-do commented 3 years ago

Hi all: I get the same problem while using mongodb 3.6.15 in docker. The container gets stuck after running to this step:

2021-05-18T03:35:10.439+0000 E -        [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: No such file or directory
2021-05-18T03:35:10.442+0000 I NETWORK  [conn2] end connection 127.0.0.1:47240 (0 connections now open)
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2021-05-18T03:35:10.456+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
killing process with pid: 458
2021-05-18T03:35:10.468+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2021-05-18T03:35:10.468+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2021-05-18T03:35:10.468+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2021-05-18T03:35:10.469+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2021-05-18T03:35:10.469+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2021-05-18T03:35:10.569+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2021-05-18T03:35:10.569+0000 I CONTROL  [signalProcessingThread] now exiting
2021-05-18T03:35:10.569+0000 I CONTROL  [signalProcessingThread] shutting down with code:0

I debugged docker-entrypoint.sh and found it was stuck at "${mongodHackedArgs[@]}" --shutdown . The full command is mongod --config /tmp/docker-entrypoint-temp-config.json --bind_ip 127.0.0.1 --port 27017 --sslMode disabled --logpath /proc/6/fd/1 --logappend --pidfilepath /tmp/docker-entrypoint-temp-mongod.pid --shutdown . I went into the container, ran ps aux, found the stuck process and killed it, and then the container ran successfully! I don't know why it gets stuck on the "${mongodHackedArgs[@]}" --shutdown command.
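
The manual intervention described above amounts to something like the following (container name and PID are placeholders):

```console
$ docker exec -it <container> bash
root@<container>:/# ps aux | grep -- '--shutdown'
root@<container>:/# kill <pid-of-the-stuck-mongod-shutdown>
```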

romintomasetti commented 3 years ago

Hi,

I found the same problem while passing the arguments for MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD as environment variables in a kubernetes deployment.

Is there a reason why there is not a PR with the solution proposed here (#323 (comment))? Should we create it?

I also wonder why it is not implemented... I'm building my own docker image based on the same Dockerfile, just because I need to add the -m ...

tianon commented 3 years ago

From what I've seen, the error doesn't prevent anything from working, so it's really more of a warning -- is someone else seeing behavior different from that?

quaos commented 2 years ago

From what I've seen, the error doesn't prevent anything from working, so it's really more of a warning -- is someone else seeing behavior different from that?

For me this error stopped mongo from creating the root user specified by the env variables. The mentioned workaround (mapping /home/mongodb) worked and after cleaning the whole mongo directory and doing a fresh initialize everything works as expected!

Yeah, me too. I'm just trying to set up a MERN stack in Docker Compose 😟, but I cannot make MONGO_INITDB_DATABASE work. I tried mounting a read-only volume, but it's still not working.

yosifkit commented 2 years ago

@quaos, the database will not be created unless you insert data via *.js scripts in /docker-entrypoint-initdb.d/. If nothing is inserted, the database does not exist (or, conversely, every database exists; you just have to use it).

This variable allows you to specify the name of a database to be used for creation scripts in /docker-entrypoint-initdb.d/*.js (see Initializing a fresh instance below). MongoDB is fundamentally designed for "create on first use", so if you do not insert data with your JavaScript files, then no database is created.

- https://github.com/docker-library/docs/blob/4e0045b3d80ffe93b9a99025c602f3fc4d8048d2/mongo/README.md#mongo_initdb_database
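
In other words, a *.js init script only has to insert something for the database named by MONGO_INITDB_DATABASE to appear; a minimal sketch (file and collection names are examples):

```js
// Hypothetical /docker-entrypoint-initdb.d/seed.js
// Per the README quoted above, *.js init scripts run with `db` pointing at
// MONGO_INITDB_DATABASE; inserting a document is what actually creates it.
db.seed.insertOne({ createdAt: new Date() });
```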

tianon commented 2 years ago

Fixed via https://github.com/docker-library/mongo/pull/541 :+1: