Sharing a stack trace from the error log would help.
Hi @nomulex
Thanks for the response. Please find below the log from a different computer and a different install; this time CouchDB started but failed to create the admin user:
osboxes@osboxes:~$ tail -f /var/log/medic/medic.log
/usr/local/lib/python2.7/dist-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
The DOCKER_COUCHDB_ADMIN_PASSWORD variable is not set. Defaulting to a blank string.
Creating network "medic-net" with the default driver
Creating volume "medic-data" with default driver
Pulling haproxy (medicmobile/haproxy:rc-1.16)...
rc-1.16: Pulling from medicmobile/haproxy
Digest: sha256:3c72791ad8e3515341ed7f3d4c8120809343e3fb44064aa7993d18094f1af637
Status: Downloaded newer image for medicmobile/haproxy:rc-1.16
Pulling medic-os (medicmobile/medic-os:cht-3.9.0-rc.1)...
cht-3.9.0-rc.1: Pulling from medicmobile/medic-os
Digest: sha256:4be05fba5164f6157055e1ab9b431e48e038f51b7e6ec198a3a68f3f2f16b397
Status: Downloaded newer image for medicmobile/medic-os:cht-3.9.0-rc.1
Creating haproxy ... done
Creating medic-os ... done
Attaching to haproxy, medic-os
haproxy | Starting enhanced syslogd: rsyslogd.
haproxy | # Setting log here with the address of 127.0.0.1 will have the effect
haproxy | # of haproxy sending the udp log messages to its own rsyslog instance
haproxy | # (which sits at 127.0.0.1) at the local0 facility including all
haproxy | # logs that have a priority greater or equal to the specified log level
haproxy | # log 127.0.0.1 local0 warning
haproxy | global
haproxy | maxconn 4096
haproxy | lua-load /usr/local/etc/haproxy/parse_basic.lua
haproxy | lua-load /usr/local/etc/haproxy/parse_cookie.lua
haproxy | log /dev/log len 65535 local2 info
haproxy |
haproxy | defaults
haproxy | mode http
haproxy | log global
haproxy | option dontlognull
haproxy | option http-ignore-probes
haproxy | timeout client 150000
haproxy | timeout server 3600000
haproxy | timeout connect 15000
haproxy | stats enable
haproxy | stats refresh 30s
haproxy | stats auth admin:
haproxy | stats uri /haproxy?stats
haproxy |
haproxy | frontend http-in
haproxy | bind :5984
haproxy | acl has_user req.hdr(x-medic-user) -m found
haproxy | acl has_cookie req.hdr(cookie) -m found
haproxy | acl has_basic_auth req.hdr(authorization) -m found
haproxy | declare capture request len 400000
haproxy | http-request set-header x-medic-user %[lua.parseBasic] if has_basic_auth
haproxy | http-request set-header x-medic-user %[lua.parseCookie] if !has_basic_auth !has_user has_cookie
haproxy | http-request capture req.body id 0 # capture.req.hdr(0)
haproxy | http-request capture req.hdr(x-medic-service) len 200 # capture.req.hdr(1)
haproxy | http-request capture req.hdr(x-medic-user) len 200 # capture.req.hdr(2)
haproxy | http-request capture req.hdr(user-agent) len 600 # capture.req.hdr(3)
haproxy | capture response header Content-Length len 10 # capture.res.hdr(0)
haproxy | log-format "%ci,%ST,%[capture.req.method],%[capture.req.uri],%[capture.req.hdr(1)],%[capture.req.hdr(2)],'%[capture.req.hdr(0)]',%B,%Tr,%[capture.res.hdr(0)],'%[capture.req.hdr(3)]'"
haproxy | default_backend couch-backend
haproxy |
haproxy | frontend http-in2
haproxy | bind :5986
haproxy | default_backend couch-backend2
haproxy |
haproxy | backend couch-backend
haproxy | balance roundrobin
haproxy | server couchdb1 medic-os:5985
haproxy |
haproxy | backend couch-backend2
haproxy | balance roundrobin
haproxy | server couchdb1 medic-os:5987
haproxy | [alert] 207/092444 (1) : parseBasic loaded
haproxy | [alert] 207/092444 (1) : parseCookie loaded
haproxy | Jul 26 09:24:44 33701e86c8ba haproxy[1]: Proxy http-in started.
haproxy | Jul 26 09:24:44 33701e86c8ba haproxy[1]: Proxy http-in2 started.
haproxy | Jul 26 09:24:44 33701e86c8ba haproxy[1]: Proxy couch-backend started.
haproxy | Jul 26 09:24:44 33701e86c8ba haproxy[1]: Proxy couch-backend2 started.
medic-os | mesg: ttyname failed: Inappropriate ioctl for device
medic-os | [2020/07/26 09:24:45] Info: Setting up software...
medic-os | [2020/07/26 09:24:46] Info: Running setup task 'horticulturalist/sudoers'
medic-os | [2020/07/26 09:24:46] Info: Running setup task 'horticulturalist/users'
medic-os | [2020/07/26 09:24:47] Info: Service 'horticulturalist/horticulturalist' started successfully
medic-os | [2020/07/26 09:24:47] Info: Setting up software (14% complete)...
medic-os | [2020/07/26 09:24:47] Info: Running setup task 'medic-api/link-logs'
medic-os | [2020/07/26 09:24:47] Info: Running setup task 'medic-api/logrotate'
medic-os | [2020/07/26 09:24:47] Info: Running setup task 'medic-api/users'
medic-os | [2020/07/26 09:24:47] Info: Service 'medic-api/medic-api' started successfully
medic-os | [2020/07/26 09:24:47] Info: Setting up software (28% complete)...
medic-os | [2020/07/26 09:25:13] Info: Running setup task 'medic-core/ldconfig'
medic-os | [2020/07/26 09:25:13] Info: Running setup task 'medic-core/link-logs'
medic-os | [2020/07/26 09:25:13] Info: Running setup task 'medic-core/logrotate'
medic-os | [2020/07/26 09:25:14] Info: Running setup task 'medic-core/nginx'
medic-os | [2020/07/26 09:25:14] Info: Running setup task 'medic-core/nginx-ssl'
medic-os | [2020/07/26 09:25:19] Info: Running setup task 'medic-core/profile'
medic-os | [2020/07/26 09:25:19] Info: Running setup task 'medic-core/ssh-authorized-keys'
medic-os | [2020/07/26 09:25:19] Info: Running setup task 'medic-core/ssh-keygen'
medic-os | [2020/07/26 09:25:20] Info: Running setup task 'medic-core/usb-modeswitch'
medic-os | [2020/07/26 09:25:21] Info: Service 'medic-core/couchdb' started successfully
medic-os | [2020/07/26 09:25:21] Info: Service 'medic-core/nginx' started successfully
medic-os | [2020/07/26 09:25:21] Info: Setting up CouchDB for the first time
medic-os | [2020/07/26 09:25:21] Info: Service 'medic-core/openssh' started successfully
haproxy | Jul 26 09:25:24 33701e86c8ba haproxy[24]: 172.18.0.3,503,GET,/,-,-,'-',212,-1,-,'curl/7.47.0'
medic-os | [2020/07/26 09:25:24] Info: Creating system databases
medic-os | [2020/07/26 09:25:25] Info: Setting up software (42% complete)...
haproxy | Jul 26 09:25:27 33701e86c8ba haproxy[24]: 172.18.0.3,503,PUT,/_users,-,-,'-',212,-1,-,'curl/7.47.0'
medic-os | [2020/07/26 09:25:28] Warning: Failed to created system database '_users'
medic-os | [2020/07/26 09:25:28] Info: Setting up CouchDB administrative account
haproxy | Jul 26 09:25:28 33701e86c8ba haproxy[24]: Connect from 172.18.0.3:38448 to 172.18.0.2:5986 (http-in2/HTTP)
haproxy | Jul 26 09:25:30 33701e86c8ba haproxy[24]: 172.18.0.3,404,PUT,/_users/org.couchdb.user:admin,-,admin,'{#012 "id": "org.couchdb.user:admin", "roles": [],#012 "type": "user", "name": "admin"#012 }',326,100,58,'curl/7.47.0'
medic-os | Fatal: Failed to create initial CouchDB administrative account
medic-os | [2020/07/26 09:25:31] Info: Running setup task 'medic-couch2pg/link-logs'
medic-os | [2020/07/26 09:25:31] Info: Running setup task 'medic-couch2pg/logrotate'
medic-os | [2020/07/26 09:25:31] Info: Service 'medic-couch2pg/medic-couch2pg' started successfully
medic-os | [2020/07/26 09:25:32] Info: Setting up software (57% complete)...
medic-os | [2020/07/26 09:25:49] Info: Running setup task 'medic-rdbms/ldconfig'
medic-os | [2020/07/26 09:25:49] Info: Running setup task 'medic-rdbms/link-logs'
medic-os | [2020/07/26 09:25:50] Info: Running setup task 'medic-rdbms/reconfigure'
medic-os | [2020/07/26 09:25:50] Info: Service 'medic-rdbms/postgresql' started successfully
medic-os | [2020/07/26 09:25:50] Info: Setting up software (71% complete)...
medic-os | [2020/07/26 09:25:51] Info: Running setup task 'medic-sentinel/link-logs'
medic-os | [2020/07/26 09:25:51] Info: Running setup task 'medic-sentinel/logrotate'
medic-os | [2020/07/26 09:25:51] Info: Running setup task 'medic-sentinel/users'
medic-os | [2020/07/26 09:25:51] Info: Service 'medic-sentinel/medic-sentinel' started successfully
medic-os | [2020/07/26 09:25:51] Info: Setting up software (85% complete)...
medic-os | [2020/07/26 09:25:51] Info: Running setup task 'system-services/home-directories'
medic-os | [2020/07/26 09:25:51] Info: Running setup task 'system-services/link-logs'
medic-os | [2020/07/26 09:25:52] Info: Running setup task 'system-services/logrotate'
medic-os | [2020/07/26 09:25:52] Info: Service 'system-services/cron' started successfully
medic-os | [2020/07/26 09:25:52] Info: Service 'system-services/syslog' started successfully
medic-os | [2020/07/26 09:25:52] Info: Setting up software (100% complete)...
medic-os | [2020/07/26 09:25:52] Info: Starting services...
medic-os | [2020/07/26 09:25:53] Info: Synchronizing disks...
medic-os | [2020/07/26 09:26:04] Info: System started successfully
medic-os | [2020/07/26 09:26:04] Info: Starting log streaming
Looks like you have shared the container stdout log. What you need is the CouchDB log. Get into the medic-os container and find it at /srv/storage/medic-core/couchdb/logs/startup.log.
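Something like this should get you there (a minimal sketch; the container name medic-os comes from the compose output above, and the prompt inside the container will show your actual container id):

osboxes@osboxes:~$ docker exec -it medic-os /bin/bash   # open a shell inside the running container
root@<container-id>:/# tail -n 100 /srv/storage/medic-core/couchdb/logs/startup.log   # last 100 lines of the CouchDB startup log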
Also, kindly shift this question to the forum; it will benefit other people facing the same challenges, and you will get prompt support there.
For the issue you shared: you need to specify the CouchDB admin password as an environment variable, so that CouchDB can use it when creating the admin user for the first time.
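A minimal sketch, assuming your install is driven by docker-compose as the log suggests (the variable name comes straight from the warning near the top of your log, which said it was defaulting to a blank string):

osboxes@osboxes:~$ export DOCKER_COUCHDB_ADMIN_PASSWORD='<your-strong-password>'   # password CouchDB will use for the admin account
osboxes@osboxes:~$ docker-compose up -d   # or re-run your install script in the same shell

The export must happen in the same shell session that launches the containers; otherwise docker-compose will not see the variable and will fall back to a blank string again.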
I am new to Docker and CHT (assisting a rural hospital to get started). Is the password not being specified by the script (lines 25-28)? I am moving the conversation to the forum as per your request.
/srv/storage/medic-core/couchdb/logs/startup.log
The requested log:
root@cf3c91e7cff4:/srv/storage/medic-core# tail -f /srv/storage/medic-core/couchdb/logs/startup.log
[notice] 2020-07-26T10:18:52.333172Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:18:57.335931Z couchdb@127.0.0.1 emulator -------- Error in process <0.19943.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:18:57.336896Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:02.338506Z couchdb@127.0.0.1 emulator -------- Error in process <0.20032.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:02.338582Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:07.340879Z couchdb@127.0.0.1 emulator -------- Error in process <0.20105.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:07.341935Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:12.344580Z couchdb@127.0.0.1 emulator -------- Error in process <0.20194.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:12.344655Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:17.346750Z couchdb@127.0.0.1 emulator -------- Error in process <0.20267.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:17.347222Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:22.351323Z couchdb@127.0.0.1 emulator -------- Error in process <0.20356.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:22.351664Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:27.353538Z couchdb@127.0.0.1 emulator -------- Error in process <0.20430.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:27.353989Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:32.356657Z couchdb@127.0.0.1 emulator -------- Error in process <0.20519.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,399}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,86}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,195}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,142}]}]}
[notice] 2020-07-26T10:19:32.356774Z couchdb@127.0.0.1 <0.350.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:395) <= mem3_shards:load_shards_from_disk/1(line:370) <= mem3_shards:load_shards_from_disk/2(line:399) <= mem3_shards:for_docid/3(line:86) <= fabric_doc_open:go/3(line:39) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:195) <= chttpd_auth_cache:listen_for_changes/1(line:142)
[error] 2020-07-26T10:19:37.359663Z couchdb@127.0.0.1 emulator -------- Error in process <0.20592.1> on node 'couchdb@127.0.0.1' with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shard
@nomulex
CouchDB is not starting with the updated medic-os Docker image.
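A quick way to confirm whether CouchDB is reachable (a sketch; it assumes port 5984 is published to the host, since haproxy binds :5984 and proxies to CouchDB per the config above -- otherwise run the curl from inside the medic-os container):

osboxes@osboxes:~$ docker ps --filter name=medic-os   # confirm the container is still running
osboxes@osboxes:~$ curl -s http://localhost:5984/   # CouchDB should answer with its welcome JSON if it is up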