linuxserver / docker-unifi-network-application

GNU General Public License v3.0
723 stars · 54 forks

[BUG] Defined MONGO_HOST is not reachable, cannot proceed. #42

Closed: myazaki closed this issue 11 months ago

myazaki commented 1 year ago

Is there an existing issue for this?

Current Behavior

Hello, I am trying to migrate from the Controller, which will be EOL in Jan 2024, to this new version, and I am getting the following error in the unifi-app container log: *** Defined MONGO_HOST unifi-db is not reachable, cannot proceed. ***. unifi-db itself is running fine, with no errors in its log; the user, DBs etc. are created just fine. The web UI on https://ip:8443 gives "This site can’t be reached" (ERR_CONNECTION_CLOSED).

The docker-compose.yml and init-mongo.js files are below. I browsed all the existing issues back and forth and could not put together a working config. Any ideas what I am doing wrong?

docker-compose.yml file

version: "2.1"
services:    
  unifi-app:
    image: lscr.io/linuxserver/unifi-network-application:latest
    container_name: unifi-app
    depends_on:
      - unifi-db
    networks:
      - unifi-net
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Prague
      - MONGO_USER=unifi
      - MONGO_PASS=unifi
      - MONGO_HOST=unifi-db
      - MONGO_PORT=27017
      - MONGO_DBNAME=unifi
    volumes:
      - /volume1/docker/unifi-app/app:/config
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
#      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: unless-stopped

  unifi-db:
    image: mongo:4.4.25
    container_name: unifi-db
    networks:
      - unifi-net
    ports:
      - 27017:27017
    volumes:
      - /volume1/docker/unifi-app/db/data:/data/db
      - /volume1/docker/unifi-app/db/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - /volume1/docker/unifi-app/db/config:/data/configdb
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 20s

networks:
  unifi-net:
    driver: bridge

init-mongo.js

db.getSiblingDB("unifi").createUser({user: "unifi", pwd: "unifi", roles: [{role: "dbOwner", db: "unifi"}]});
db.getSiblingDB("unifi_stat").createUser({user: "unifi", pwd: "unifi", roles: [{role: "dbOwner", db: "unifi_stat"}]});
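
For reference, whether the init script actually ran can be checked from the host. A minimal sketch, assuming the container name unifi-db from the compose file above; access control is disabled by default in this mongo image (the startup warning in the logs confirms it), so the check needs no credentials:

```
# Check that the init script created the users:
docker exec unifi-db mongo --quiet --eval 'db.getSiblingDB("unifi").getUsers()'
docker exec unifi-db mongo --quiet --eval 'db.getSiblingDB("unifi_stat").getUsers()'
```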

Expected Behavior

unifi-app connects to unifi-db and the web UI loads.

Steps To Reproduce

  1. Created a Stack App Template in Portainer using the docker-compose.yml above.
  2. Created init-mongo.js in a directory on the NAS.
  3. Deployed the Stack without any issues; both containers show a green check in Containers.
  4. The DB container logs show that the user and databases are created, and the container successfully stores data in the provided volumes.
  5. The APP container is able to create two empty folders, "data" and "logs", in the provided directory on the NAS, and displays *** Defined MONGO_HOST unifi-db is not reachable, cannot proceed. *** in the container logs.

Environment

- OS: Synology DS-720+ (DSM, Linux), x86-64
- How docker service was installed: Through App Template using supplied docker-compose.yml in Portainer 2.19.3 EE.

CPU architecture

x86-64

Docker creation

Used the App Template deploy in Portainer 2.19.3 EE.

Container logs

[migrations] started
[migrations] no migrations found
usermod: no changes
───────────────────────────────────────
      ██╗     ███████╗██╗ ██████╗ 
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝ 
   Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID:    1000
User GID:    1000
───────────────────────────────────────
*** Waiting for MONGO_HOST unifi-db to be reachable. ***
*** Defined MONGO_HOST unifi-db is not reachable, cannot proceed. ***
j0nnymoe commented 1 year ago

Have you confirmed your mongo db container is running correctly?

myazaki commented 1 year ago

Have you confirmed your mongo db container is running correctly?

There does not seem to be any issue there, at least from the logs.
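
For anyone hitting this, a quick reachability probe from the Docker network side can narrow it down. A minimal sketch assuming the container and network names from the compose file above; note that compose/Portainer may prefix the network name with the stack name, so check `docker network ls` first. If this probe fails while the DB logs look healthy, something between the containers (such as a host firewall) is blocking the connection:

```
# Run a one-off mongo client on the same Docker network and ping the DB by container name:
docker run --rm --network unifi-net mongo:4.4 mongo --host unifi-db --eval "db.adminCommand('ping')"
```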


Mongo DB Logs ``` WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that! see https://jira.mongodb.org/browse/SERVER-54407 see also https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2 see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814 about to fork child process, waiting until server is ready for connections. forked process: 28 t={"$date":"2023-11-27T12:52:59.665+00:00"} s=I c=CONTROL id=20698 ctx=main msg=***** SERVER RESTARTED ***** t={"$date":"2023-11-27T12:52:59.667+00:00"} s=I c=CONTROL id=23285 ctx=main msg=Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' t={"$date":"2023-11-27T12:52:59.671+00:00"} s=I c=NETWORK id=4648601 ctx=main msg=Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize. t={"$date":"2023-11-27T12:52:59.672+00:00"} s=I c=STORAGE id=4615611 ctx=initandlisten msg=MongoDB starting attr={"pid":28,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"37804c9ec93c"} t={"$date":"2023-11-27T12:52:59.672+00:00"} s=I c=CONTROL id=23403 ctx=initandlisten msg=Build Info attr={"buildInfo":{"version":"4.4.25","gitVersion":"3e18c4c56048ddf22a6872edc111b542521ad1d5","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}} t={"$date":"2023-11-27T12:52:59.672+00:00"} s=I c=CONTROL id=51765 ctx=initandlisten msg=Operating System attr={"os":{"name":"Ubuntu","version":"20.04"}} t={"$date":"2023-11-27T12:52:59.672+00:00"} s=I c=CONTROL id=21951 ctx=initandlisten msg=Options set by command line attr={"options":{"net":{"bindIp":"127.0.0.1","port":27017,"tls":{"mode":"disabled"}},"processManagement":{"fork":true,"pidFilePath":"/tmp/docker-entrypoint-temp-mongod.pid"},"systemLog":{"destination":"file","logAppend":true,"path":"/proc/1/fd/1"}}} t={"$date":"2023-11-27T12:52:59.673+00:00"} s=I c=STORAGE id=22315 ctx=initandlisten msg=Opening WiredTiger attr={"config":"create,cache_size=8406M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"} t={"$date":"2023-11-27T12:53:03.054+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089583:54485][28:0x7f3e05674cc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"} t={"$date":"2023-11-27T12:53:03.054+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089583:54652][28:0x7f3e05674cc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"} t={"$date":"2023-11-27T12:53:03.795+00:00"} s=I c=STORAGE id=4795906 ctx=initandlisten msg=WiredTiger opened attr={"durationMillis":4122} t={"$date":"2023-11-27T12:53:03.795+00:00"} s=I c=RECOVERY id=23987 ctx=initandlisten msg=WiredTiger recoveryTimestamp attr={"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:05.363+00:00"} s=I c=STORAGE id=22262 ctx=initandlisten msg=Timestamp monitor starting t={"$date":"2023-11-27T12:53:06.889+00:00"} s=W c=CONTROL id=22120 
ctx=initandlisten msg=Access control is not enabled for the database. Read and write access to data and configuration is unrestricted tags=["startupWarnings"] t={"$date":"2023-11-27T12:53:06.889+00:00"} s=I c=STORAGE id=20320 ctx=initandlisten msg=createCollection attr={"namespace":"admin.system.version","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"6467720b-2d6f-4fba-a830-f1ce9f8b947c"}},"options":{"uuid":{"$uuid":"6467720b-2d6f-4fba-a830-f1ce9f8b947c"}}} t={"$date":"2023-11-27T12:53:09.442+00:00"} s=I c=INDEX id=20345 ctx=initandlisten msg=Index build: done building attr={"buildUUID":null,"namespace":"admin.system.version","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:09.442+00:00"} s=I c=COMMAND id=20459 ctx=initandlisten msg=Setting featureCompatibilityVersion attr={"newVersion":"4.4"} t={"$date":"2023-11-27T12:53:09.443+00:00"} s=I c=STORAGE id=20536 ctx=initandlisten msg=Flow Control is enabled on this deployment t={"$date":"2023-11-27T12:53:09.444+00:00"} s=I c=STORAGE id=20320 ctx=initandlisten msg=createCollection attr={"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"8bc81d18-48de-4a7e-bf85-e93c2f33fba6"}},"options":{"capped":true,"size":10485760}} t={"$date":"2023-11-27T12:53:11.044+00:00"} s=I c=INDEX id=20345 ctx=initandlisten msg=Index build: done building attr={"buildUUID":null,"namespace":"local.startup_log","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:11.045+00:00"} s=I c=FTDC id=20625 ctx=initandlisten msg=Initializing full-time diagnostic data capture attr={"dataDirectory":"/data/db/diagnostic.data"} t={"$date":"2023-11-27T12:53:11.045+00:00"} s=I c=REPL id=6015317 ctx=initandlisten msg=Setting new configuration state attr={"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"} t={"$date":"2023-11-27T12:53:11.047+00:00"} s=I c=NETWORK id=23015 ctx=listener msg=Listening on attr={"address":"/tmp/mongodb-27017.sock"} t={"$date":"2023-11-27T12:53:11.047+00:00"} s=I c=NETWORK id=23015 ctx=listener msg=Listening on attr={"address":"127.0.0.1"} t={"$date":"2023-11-27T12:53:11.047+00:00"} s=I c=NETWORK id=23016 ctx=listener msg=Waiting for connections attr={"port":27017,"ssl":"off"} child process started successfully, parent exiting t={"$date":"2023-11-27T12:53:11.099+00:00"} s=I c=STORAGE id=20320 ctx=LogicalSessionCacheRefresh msg=createCollection attr={"namespace":"config.system.sessions","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"c67db0f2-4088-40f8-b926-d54d08833bd6"}},"options":{}} t={"$date":"2023-11-27T12:53:11.099+00:00"} s=I c=CONTROL id=20712 ctx=LogicalSessionCacheReap msg=Sessions collection is not set up; waiting until next sessions reap interval attr={"error":"NamespaceNotFound: config.system.sessions does not exist"} t={"$date":"2023-11-27T12:53:11.704+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33900","connectionId":1,"connectionCount":1} t={"$date":"2023-11-27T12:53:11.705+00:00"} s=I c=NETWORK id=51800 ctx=conn1 msg=client metadata attr={"remote":"127.0.0.1:33900","client":"conn1","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:53:11.712+00:00"} s=I c=NETWORK id=22944 ctx=conn1 msg=Connection ended attr={"remote":"127.0.0.1:33900","connectionId":1,"connectionCount":0} 
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init-mongo.js t={"$date":"2023-11-27T12:53:11.797+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33902","connectionId":2,"connectionCount":1} t={"$date":"2023-11-27T12:53:11.797+00:00"} s=I c=NETWORK id=51800 ctx=conn2 msg=client metadata attr={"remote":"127.0.0.1:33902","client":"conn2","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:53:11.848+00:00"} s=I c=STORAGE id=20320 ctx=conn2 msg=createCollection attr={"namespace":"admin.system.users","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"0be9c677-f746-43b3-a3fd-c19cc5a239df"}},"options":{}} t={"$date":"2023-11-27T12:53:15.860+00:00"} s=I c=INDEX id=20345 ctx=LogicalSessionCacheRefresh msg=Index build: done building attr={"buildUUID":null,"namespace":"config.system.sessions","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:15.860+00:00"} s=I c=INDEX id=20345 ctx=LogicalSessionCacheRefresh msg=Index build: done building attr={"buildUUID":null,"namespace":"config.system.sessions","index":"lsidTTLIndex","commitTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:15.860+00:00"} s=I c=COMMAND id=51803 ctx=LogicalSessionCacheRefresh msg=Slow query attr={"type":"command","ns":"config.system.sessions","command":{"createIndexes":"system.sessions","indexes":[{"key":{"lastUse":1},"name":"lsidTTLIndex","expireAfterSeconds":1800}],"writeConcern":{},"$db":"config"},"numYields":0,"reslen":114,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":5}},"FeatureCompatibilityVersion":{"acquireCount":{"r":2,"w":3}},"ReplicationStateTransition":{"acquireCount":{"w":5}},"Global":{"acquireCount":{"r":2,"w":3}},"Database":{"acquireCount":{"r":2,"w":3}},"Collection":{"acquireCount":{"r":3,"w":2}},"Mutex":{"acquireCount":{"r":6}}},"flowControl":{"acquireCount":1,"timeAcquiringMicros":2},"storage":{},"protocol":"op_msg","durationMillis":4760} t={"$date":"2023-11-27T12:53:17.268+00:00"} s=I c=INDEX id=20345 ctx=conn2 msg=Index build: done building attr={"buildUUID":null,"namespace":"admin.system.users","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:17.268+00:00"} s=I c=INDEX id=20345 ctx=conn2 msg=Index build: done building attr={"buildUUID":null,"namespace":"admin.system.users","index":"user_1_db_1","commitTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:17.268+00:00"} s=I c=COMMAND id=51803 ctx=conn2 msg=Slow query attr={"type":"command","ns":"admin.system.users","appName":"MongoDB Shell","command":{"insert":"system.users","bypassDocumentValidation":false,"ordered":true,"$db":"admin"},"ninserted":1,"keysInserted":2,"numYields":0,"reslen":45,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":5}},"FeatureCompatibilityVersion":{"acquireCount":{"r":2,"w":3}},"ReplicationStateTransition":{"acquireCount":{"w":5}},"Global":{"acquireCount":{"r":2,"w":3}},"Database":{"acquireCount":{"r":2,"W":3}},"Collection":{"acquireCount":{"r":1,"w":3}},"Mutex":{"acquireCount":{"r":5}}},"flowControl":{"acquireCount":4,"timeAcquiringMicros":3},"storage":{},"protocol":"op_msg","durationMillis":5419} t={"$date":"2023-11-27T12:53:17.268+00:00"} s=I c=COMMAND id=51803 ctx=conn2 msg=Slow query attr={"type":"command","ns":"unifi.$cmd","appName":"MongoDB 
Shell","command":{"createUser":"unifi","pwd":"xxx","roles":[{"role":"dbOwner","db":"unifi"}],"digestPassword":true,"writeConcern":{"w":"majority","wtimeout":600000},"lsid":{"id":{"$uuid":"0fb86b3e-e1a7-42ff-8007-9ee0be01262b"}},"$db":"unifi"},"numYields":0,"reslen":38,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":6}},"FeatureCompatibilityVersion":{"acquireCount":{"r":3,"w":4}},"ReplicationStateTransition":{"acquireCount":{"w":7}},"Global":{"acquireCount":{"r":3,"w":4}},"Database":{"acquireCount":{"r":2,"W":4}},"Collection":{"acquireCount":{"r":1,"w":4}},"Mutex":{"acquireCount":{"r":6}}},"flowControl":{"acquireCount":4,"timeAcquiringMicros":3},"writeConcern":{"w":"majority","wtimeout":600000,"provenance":"clientSupplied"},"storage":{},"protocol":"op_msg","durationMillis":5465} Successfully added user: { "user" : "unifi", "roles" : [ { "role" : "dbOwner", "db" : "unifi" } ] } Successfully added user: { "user" : "unifi", "roles" : [ { "role" : "dbOwner", "db" : "unifi_stat" } ] } t={"$date":"2023-11-27T12:53:17.316+00:00"} s=I c=NETWORK id=22944 ctx=conn2 msg=Connection ended attr={"remote":"127.0.0.1:33902","connectionId":2,"connectionCount":0} t={"$date":"2023-11-27T12:53:17.344+00:00"} s=I c=CONTROL id=20698 ctx=main msg=***** SERVER RESTARTED ***** t={"$date":"2023-11-27T12:53:17.347+00:00"} s=I c=CONTROL id=23285 ctx=main msg=Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' t={"$date":"2023-11-27T12:53:17.348+00:00"} s=I c=NETWORK id=4648601 ctx=main msg=Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize. killing process with pid: 28 t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=CONTROL id=23377 ctx=SignalHandler msg=Received signal attr={"signal":15,"error":"Terminated"} t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=CONTROL id=23378 ctx=SignalHandler msg=Signal was sent by kill(2) attr={"pid":85,"uid":999} t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=CONTROL id=23381 ctx=SignalHandler msg=will terminate after current cmd ends t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=REPL id=4784900 ctx=SignalHandler msg=Stepping down the ReplicationCoordinator for shutdown attr={"waitTimeMillis":10000} t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=COMMAND id=4784901 ctx=SignalHandler msg=Shutting down the MirrorMaestro t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=SHARDING id=4784902 ctx=SignalHandler msg=Shutting down the WaitForMajorityService t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=CONTROL id=4784903 ctx=SignalHandler msg=Shutting down the LogicalSessionCache t={"$date":"2023-11-27T12:53:17.349+00:00"} s=I c=NETWORK id=20562 ctx=SignalHandler msg=Shutdown: going to close listening sockets t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=NETWORK id=23017 ctx=listener msg=removing socket file attr={"path":"/tmp/mongodb-27017.sock"} t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=NETWORK id=4784905 ctx=SignalHandler msg=Shutting down the global connection pool t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=4784906 ctx=SignalHandler msg=Shutting down the FlowControlTicketholder t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=- id=20520 ctx=SignalHandler msg=Stopping further Flow Control ticket acquisitions. 
t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=4784908 ctx=SignalHandler msg=Shutting down the PeriodicThreadToAbortExpiredTransactions t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=4784934 ctx=SignalHandler msg=Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=REPL id=4784909 ctx=SignalHandler msg=Shutting down the ReplicationCoordinator t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=SHARDING id=4784910 ctx=SignalHandler msg=Shutting down the ShardingInitializationMongoD t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=REPL id=4784911 ctx=SignalHandler msg=Enqueuing the ReplicationStateTransitionLock for shutdown t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=- id=4784912 ctx=SignalHandler msg=Killing all operations for shutdown t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=- id=4695300 ctx=SignalHandler msg=Interrupted all currently running operations attr={"opsKilled":3} t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=COMMAND id=4784913 ctx=SignalHandler msg=Shutting down all open transactions t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=REPL id=4784914 ctx=SignalHandler msg=Acquiring the ReplicationStateTransitionLock for shutdown t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=INDEX id=4784915 ctx=SignalHandler msg=Shutting down the IndexBuildsCoordinator t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=REPL id=4784916 ctx=SignalHandler msg=Reacquiring the ReplicationStateTransitionLock for shutdown t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=REPL id=4784917 ctx=SignalHandler msg=Attempting to mark clean shutdown t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=NETWORK id=4784918 ctx=SignalHandler msg=Shutting down the ReplicaSetMonitor t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=SHARDING id=4784921 ctx=SignalHandler msg=Shutting down the MigrationUtilExecutor t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=4784927 ctx=SignalHandler msg=Shutting down the HealthLog t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=4784929 ctx=SignalHandler msg=Acquiring the global lock for shutdown t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=4784930 ctx=SignalHandler msg=Shutting down the storage engine t={"$date":"2023-11-27T12:53:17.350+00:00"} s=I c=STORAGE id=22320 ctx=SignalHandler msg=Shutting down journal flusher thread t={"$date":"2023-11-27T12:53:17.442+00:00"} s=I c=STORAGE id=22263 ctx=TimestampMonitor msg=Timestamp monitor is stopping attr={"error":"interrupted at shutdown"} t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22321 ctx=SignalHandler msg=Finished shutting down journal flusher thread t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=20282 ctx=SignalHandler msg=Deregistering all the collections t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22261 ctx=SignalHandler msg=Timestamp monitor shutting down t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22317 ctx=SignalHandler msg=WiredTigerKVEngine shutting down t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22318 ctx=SignalHandler msg=Shutting down session sweeper thread t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22319 ctx=SignalHandler msg=Finished shutting down session sweeper thread t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22322 ctx=SignalHandler msg=Shutting down checkpoint thread t={"$date":"2023-11-27T12:53:17.853+00:00"} s=I c=STORAGE id=22323 
ctx=SignalHandler msg=Finished shutting down checkpoint thread t={"$date":"2023-11-27T12:53:17.994+00:00"} s=I c=STORAGE id=4795902 ctx=SignalHandler msg=Closing WiredTiger attr={"closeConfig":"leak_memory=true,"} t={"$date":"2023-11-27T12:53:17.996+00:00"} s=I c=STORAGE id=22430 ctx=SignalHandler msg=WiredTiger message attr={"message":"[1701089597:996744][28:0x7f3e05673700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 49, snapshot max: 49 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"} t={"$date":"2023-11-27T12:53:20.948+00:00"} s=I c=STORAGE id=4795901 ctx=SignalHandler msg=WiredTiger closed attr={"durationMillis":2954} t={"$date":"2023-11-27T12:53:20.948+00:00"} s=I c=STORAGE id=22279 ctx=SignalHandler msg=shutdown: removing fs lock... t={"$date":"2023-11-27T12:53:20.948+00:00"} s=I c=- id=4784931 ctx=SignalHandler msg=Dropping the scope cache for shutdown t={"$date":"2023-11-27T12:53:20.949+00:00"} s=I c=FTDC id=4784926 ctx=SignalHandler msg=Shutting down full-time data capture t={"$date":"2023-11-27T12:53:20.949+00:00"} s=I c=FTDC id=20626 ctx=SignalHandler msg=Shutting down full-time diagnostic data capture t={"$date":"2023-11-27T12:53:20.950+00:00"} s=I c=CONTROL id=20565 ctx=SignalHandler msg=Now exiting t={"$date":"2023-11-27T12:53:20.950+00:00"} s=I c=CONTROL id=23138 ctx=SignalHandler msg=Shutting down attr={"exitCode":0} MongoDB init process complete; ready for start up. t={"$date":"2023-11-27T12:53:21.383+00:00"} s=I c=CONTROL id=23285 ctx=main msg=Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' t={"$date":"2023-11-27T12:53:21.385+00:00"} s=I c=NETWORK id=4648601 ctx=main msg=Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize. 
t={"$date":"2023-11-27T12:53:21.386+00:00"} s=I c=STORAGE id=4615611 ctx=initandlisten msg=MongoDB starting attr={"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"37804c9ec93c"} t={"$date":"2023-11-27T12:53:21.386+00:00"} s=I c=CONTROL id=23403 ctx=initandlisten msg=Build Info attr={"buildInfo":{"version":"4.4.25","gitVersion":"3e18c4c56048ddf22a6872edc111b542521ad1d5","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}} t={"$date":"2023-11-27T12:53:21.386+00:00"} s=I c=CONTROL id=51765 ctx=initandlisten msg=Operating System attr={"os":{"name":"Ubuntu","version":"20.04"}} t={"$date":"2023-11-27T12:53:21.386+00:00"} s=I c=CONTROL id=21951 ctx=initandlisten msg=Options set by command line attr={"options":{"net":{"bindIp":"*"}}} t={"$date":"2023-11-27T12:53:21.386+00:00"} s=I c=STORAGE id=22270 ctx=initandlisten msg=Storage engine to use detected by data files attr={"dbpath":"/data/db","storageEngine":"wiredTiger"} t={"$date":"2023-11-27T12:53:21.386+00:00"} s=I c=STORAGE id=22315 ctx=initandlisten msg=Opening WiredTiger attr={"config":"create,cache_size=8406M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"} t={"$date":"2023-11-27T12:53:22.729+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089602:729042][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"} t={"$date":"2023-11-27T12:53:22.849+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089602:849599][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"} t={"$date":"2023-11-27T12:53:22.970+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089602:970959][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/32768 to 2/256"} t={"$date":"2023-11-27T12:53:23.112+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089603:112546][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"} t={"$date":"2023-11-27T12:53:23.269+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089603:269372][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"} t={"$date":"2023-11-27T12:53:23.341+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089603:341706][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"} t={"$date":"2023-11-27T12:53:23.341+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089603:341791][1:0x7f1230aaecc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"} t={"$date":"2023-11-27T12:53:23.343+00:00"} s=I c=STORAGE id=22430 ctx=initandlisten msg=WiredTiger message attr={"message":"[1701089603:343316][1:0x7f1230aaecc0], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 
snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7"} t={"$date":"2023-11-27T12:53:24.309+00:00"} s=I c=STORAGE id=4795906 ctx=initandlisten msg=WiredTiger opened attr={"durationMillis":2923} t={"$date":"2023-11-27T12:53:24.309+00:00"} s=I c=RECOVERY id=23987 ctx=initandlisten msg=WiredTiger recoveryTimestamp attr={"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}} t={"$date":"2023-11-27T12:53:24.315+00:00"} s=I c=STORAGE id=22262 ctx=initandlisten msg=Timestamp monitor starting t={"$date":"2023-11-27T12:53:24.375+00:00"} s=W c=CONTROL id=22120 ctx=initandlisten msg=Access control is not enabled for the database. Read and write access to data and configuration is unrestricted tags=["startupWarnings"] t={"$date":"2023-11-27T12:53:24.382+00:00"} s=I c=STORAGE id=20536 ctx=initandlisten msg=Flow Control is enabled on this deployment t={"$date":"2023-11-27T12:53:24.386+00:00"} s=I c=FTDC id=20625 ctx=initandlisten msg=Initializing full-time diagnostic data capture attr={"dataDirectory":"/data/db/diagnostic.data"} t={"$date":"2023-11-27T12:53:24.386+00:00"} s=I c=REPL id=6015317 ctx=initandlisten msg=Setting new configuration state attr={"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"} t={"$date":"2023-11-27T12:53:24.388+00:00"} s=I c=NETWORK id=23015 ctx=listener msg=Listening on attr={"address":"/tmp/mongodb-27017.sock"} t={"$date":"2023-11-27T12:53:24.388+00:00"} s=I c=NETWORK id=23015 ctx=listener msg=Listening on attr={"address":"0.0.0.0"} t={"$date":"2023-11-27T12:53:24.388+00:00"} s=I c=NETWORK id=23016 ctx=listener msg=Waiting for connections attr={"port":27017,"ssl":"off"} t={"$date":"2023-11-27T12:53:31.390+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33934","connectionId":1,"connectionCount":1} t={"$date":"2023-11-27T12:53:31.390+00:00"} s=I c=NETWORK id=51800 ctx=conn1 msg=client metadata attr={"remote":"127.0.0.1:33934","client":"conn1","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:53:31.397+00:00"} s=I c=NETWORK id=22944 ctx=conn1 msg=Connection ended attr={"remote":"127.0.0.1:33934","connectionId":1,"connectionCount":0} t={"$date":"2023-11-27T12:53:41.954+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33948","connectionId":2,"connectionCount":1} t={"$date":"2023-11-27T12:53:41.955+00:00"} s=I c=NETWORK id=51800 ctx=conn2 msg=client metadata attr={"remote":"127.0.0.1:33948","client":"conn2","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:53:41.961+00:00"} s=I c=NETWORK id=22944 ctx=conn2 msg=Connection ended attr={"remote":"127.0.0.1:33948","connectionId":2,"connectionCount":0} t={"$date":"2023-11-27T12:53:52.671+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33960","connectionId":3,"connectionCount":1} t={"$date":"2023-11-27T12:53:52.671+00:00"} s=I c=NETWORK id=51800 ctx=conn3 msg=client metadata attr={"remote":"127.0.0.1:33960","client":"conn3","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} 
t={"$date":"2023-11-27T12:53:52.678+00:00"} s=I c=NETWORK id=22944 ctx=conn3 msg=Connection ended attr={"remote":"127.0.0.1:33960","connectionId":3,"connectionCount":0} t={"$date":"2023-11-27T12:54:03.309+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33980","connectionId":4,"connectionCount":1} t={"$date":"2023-11-27T12:54:03.310+00:00"} s=I c=NETWORK id=51800 ctx=conn4 msg=client metadata attr={"remote":"127.0.0.1:33980","client":"conn4","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:54:03.316+00:00"} s=I c=NETWORK id=22944 ctx=conn4 msg=Connection ended attr={"remote":"127.0.0.1:33980","connectionId":4,"connectionCount":0} t={"$date":"2023-11-27T12:54:13.929+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33990","connectionId":5,"connectionCount":1} t={"$date":"2023-11-27T12:54:13.929+00:00"} s=I c=NETWORK id=51800 ctx=conn5 msg=client metadata attr={"remote":"127.0.0.1:33990","client":"conn5","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:54:13.936+00:00"} s=I c=NETWORK id=22944 ctx=conn5 msg=Connection ended attr={"remote":"127.0.0.1:33990","connectionId":5,"connectionCount":0} t={"$date":"2023-11-27T12:54:24.317+00:00"} s=I c=STORAGE id=22430 ctx=WTCheckpointThread msg=WiredTiger message attr={"message":"[1701089664:317463][1:0x7f1229a9f700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 3, snapshot max: 3 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7"} t={"$date":"2023-11-27T12:54:24.470+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:33998","connectionId":6,"connectionCount":1} t={"$date":"2023-11-27T12:54:24.471+00:00"} s=I c=NETWORK id=51800 ctx=conn6 msg=client metadata attr={"remote":"127.0.0.1:33998","client":"conn6","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:54:24.477+00:00"} s=I c=NETWORK id=22944 ctx=conn6 msg=Connection ended attr={"remote":"127.0.0.1:33998","connectionId":6,"connectionCount":0} t={"$date":"2023-11-27T12:54:35.177+00:00"} s=I c=NETWORK id=22943 ctx=listener msg=Connection accepted attr={"remote":"127.0.0.1:34014","connectionId":7,"connectionCount":1} t={"$date":"2023-11-27T12:54:35.177+00:00"} s=I c=NETWORK id=51800 ctx=conn7 msg=client metadata attr={"remote":"127.0.0.1:34014","client":"conn7","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.25"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"20.04"}}} t={"$date":"2023-11-27T12:54:35.184+00:00"} s=I c=NETWORK id=22944 ctx=conn7 msg=Connection ended attr={"remote":"127.0.0.1:34014","connectionId":7,"connectionCount":0} ```
j0nnymoe commented 1 year ago

If you restart the unifi container after your mongodb has started up, does it work?
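
For example, assuming the container names from the compose file above, something like:

```
# Once unifi-db logs "Waiting for connections", restart only the app container:
docker restart unifi-app
docker logs -f unifi-app
```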

myazaki commented 1 year ago

If you restart the unifi container after your mongodb has started up, does it work?

That is something which came to my mind too, once I got to the thread where this topic was discussed. I tried that, but it did not help, and it did not help with the config above either. Still the same error in the unifi-app log: *** Defined MONGO_HOST unifi-db is not reachable, cannot proceed. ***

myazaki commented 1 year ago

I also checked the password special-characters thread, so I made sure the MongoDB password only includes upper- and lower-case letters and numbers, to prevent any encoding problems.

hwcltjn commented 1 year ago

I'm having the exact same issue and seem to have done everything correctly.

Thunder7ga commented 12 months ago

Same issue here, when trying to set up the new container to make sure I can keep moving forward with updates in 2024. It is currently not clear to me how the whole MongoDB setup is supposed to work with this new Unifi Docker image.

shirshir commented 12 months ago

I had the same issue but got it working with the following docker-compose.yml on a Synology DS1019+ (only changed the password to something else):

---
version: "2.1"
services:
  unifi-network-application:
    image: lscr.io/linuxserver/unifi-network-application:latest
    container_name: unifi-network-application
    depends_on:
      - unifi-db
    environment:
      - PUID=1026
      - PGID=101
      - TZ=Europe/Amsterdam
      - MONGO_USER=unifi
      - MONGO_PASS=unifi
      - MONGO_HOST=unifi-db
      - MONGO_PORT=27017
      - MONGO_DBNAME=unifi
#      - MEM_LIMIT=1024     # optional
#      - MEM_STARTUP=1024   # optional
#      - MONGO_TLS=         # optional
#      - MONGO_AUTHSOURCE=  # optional
    volumes:
      - ./config:/config
    ports:
      - 3478:3478/udp   # Unifi STUN port
      - 10001:10001/udp # Required for AP discovery
      - 8080:8080       # Required for device communication
      - 8081:8081
      - 8443:8443       # Unifi web admin port
#      - 1900:1900/udp  # optional Required for Make controller discoverable on L2 network option
      - 8843:8843       # Unifi guest portal HTTPS redirect port
      - 8880:8880       # Unifi guest portal HTTP redirect port
      - 6789:6789       # For mobile throughput test
      - 5514:5514/udp   # optional Remote syslog port
    restart: unless-stopped
  unifi-db:
#   WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
    image: docker.io/mongo:4
    container_name: unifi-db
    volumes:
      - ./mongodb/data:/data/db
      - ./mongodb/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 20s

init-mongo.js:

db.getSiblingDB("unifi").createUser({user: "unifi", pwd: "unifi", roles: [{role: "dbOwner", db: "unifi"}]});
db.getSiblingDB("unifi_stat").createUser({user: "unifi", pwd: "unifi", roles: [{role: "dbOwner", db: "unifi_stat"}]});

And I imported a backup from my old 7.3 controller; it seems to work fine.

CDRX2 commented 12 months ago

I had the same issue but got it working with the following docker-compose.yml on a Synology DS1019+ (only changed the password to something else): [full compose file and init-mongo.js as posted above]

That just worked for me, thanks a lot!

myazaki commented 12 months ago

I had the same issue but got it working with the following docker-compose.yml on a Synology DS1019+ (only changed the password to something else): [full compose file and init-mongo.js as posted above]

So you just replaced the password and that is it?

shirshir commented 12 months ago

So you just replaced the password and that is it?

Sorry, I meant that the password in the docker-compose.yml is not the one I actually use. Just make sure it is the same as in the init-mongo.js file. Also make sure to use the correct PUID and PGID values; see the docs for more info.
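
On a Synology, those values can be looked up over SSH; "nasuser" below is a hypothetical account name:

```
# Print the UID/GID of the user that should own the config directory;
# the uid=/gid= values map to PUID/PGID.
id nasuser
```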

myazaki commented 12 months ago

So you just replaced the password and that is it?

Sorry, I meant that the password in the docker-compose.yml is not the one I actually use. Just make sure it is the same as in the init-mongo.js file. Also make sure to use the correct PUID and PGID values; see the docs for more info.

Hi, thanks. So far no luck. I noticed only four differences (correct me if I am wrong) between your compose file and mine:

  1. PUID/PGID - changed these to reflect my local settings on Synology; it did not make any difference.
  2. No bridge network definition - which contrasts with other users here, who suggested defining one because relying on the default (not creating any) did not work for them. I tried that before with PUID/PGID set to 1000 and it did not help; I will retest it with the new PUID/PGID.
  3. No config volume set for the Mongo DB - I do not think it matters.
  4. You use the Mongo DB v4 image - I have not tested that yet.
myazaki commented 12 months ago

So I tested all of the above and did not arrive at any working solution. I am still stuck on the same "MongoDB not reachable" error.

myazaki commented 11 months ago

I gave up trying. This pushed me to look into installing the app natively on an RPi4, and to my surprise it went flawlessly. It took around 2 minutes.

hwcltjn commented 11 months ago

I gave up trying. This pushed me to look into installing the app natively on an RPi4, and to my surprise it went flawlessly. It took around 2 minutes.

Same, but I tried this instead and it works well: https://github.com/jacobalberty/unifi-docker

hiwanz commented 11 months ago

Is it possible to connect to a standalone instance of MongoDB? How?

myazaki commented 11 months ago

Is it possible to connect to a standalone instance of MongoDB? How?

Yes, it is possible, but I believe it is outside the scope of this repo.

hiwanz commented 11 months ago

Is it possible to connect to a standalone instance of MongoDB? How?

Yes, it is possible, but I believe it is outside the scope of this repo.

I've found that the standalone instance is not reachable either; it's confusing.
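
A common cause with a standalone instance is mongod listening only on 127.0.0.1. A minimal sketch of the standard mongod.conf setting; MONGO_HOST would then point at the host's address, and port 27017 must be allowed through any firewall in between:

```
# /etc/mongod.conf (standard MongoDB config file)
net:
  port: 27017
  bindIp: 0.0.0.0   # or a specific reachable IP instead of the default 127.0.0.1
```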

kevincw01 commented 11 months ago

This solution worked for me as well. One thing that might not be obvious is that the author put both mongo and the unifi network application in the same docker-compose file, so you don't need to start another docker container, which is what I was attempting to do previously. Also, I recommend starting the container the first time without -d, e.g. docker-compose up; then you will see all of the live logs. One thing that caught me out: I went to https://ip:8443 and got a 404 error, but then I noticed in the logs 30 seconds later that the container was still building the database. If you wait a good 3-4 minutes (at least on the Mac mini I am using), it comes up. I actually had the same issue with the https://github.com/jacobalberty/unifi-docker solution.
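
A sketch of that workflow, assuming the container name from the compose file above:

```
# First run in the foreground so both containers' logs stream to the terminal:
docker-compose up

# Or, if already running detached, follow the app while it builds the database:
docker logs -f unifi-network-application
```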

Xeevis commented 11 months ago

A bit late to the party, but seeing you are using a Synology, perhaps you have the Firewall enabled? It does a pretty good job of stopping incoming connections, even when containers are on the same subnet.


Select your Docker unifi-db listed under applications, and as the source IP enter the unifi-net subnet created by Docker. This will permit anything on the unifi-net network to access the unifi-db container. (Since it's just unifi-network-application, you might as well go with a single-IP rule, or perhaps even a port 27017 rule.)
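
The subnet to enter can be read straight from Docker; a sketch assuming the network name unifi-net, which compose/Portainer may prefix with the stack name (check `docker network ls`):

```
# Print the subnet Docker assigned to the stack's network, for use as the
# source IP range in the Synology firewall rule:
docker network inspect unifi-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```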

myazaki commented 11 months ago

A bit late to the party, but seeing you are using a Synology, perhaps you have the Firewall enabled? It does a pretty good job of stopping incoming connections, even when containers are on the same subnet.


Select your Docker unifi-db listed under applications, and as the source IP enter the unifi-net subnet created by Docker. This will permit anything on the unifi-net network to access the unifi-db container. (Since it's just unifi-network-application, you might as well go with a single-IP rule, or perhaps even a port 27017 rule.)

Hi @Xeevis, good catch. It did the trick. It had not come to my mind, as I have a similar setup for another application (two containers deployed from one docker-compose file in Portainer using the default bridge network settings); that one is not allowed in the Synology Firewall either, yet it works and both containers communicate on the same network.

So, I freshly deployed the unifi-network-application docker-compose file once again in Portainer, and it of course resulted in the same error. I then allowed access for the unifi-net subnet in the Synology firewall, restarted unifi-app, and it connected to MongoDB. It just took about 3-5 minutes for unifi-app to initialise itself for the first time (showing a 404 error in the browser). Many thanks!

Xeevis commented 11 months ago

Glad you got it working 👍. The Docker implementation on Synology sometimes works in strange ways: the default bridge network might be whitelisted out of the box, while custom networks are unknown to the firewall, which then won't permit any cross-container communication.

Also watch out for changing IPs: I just stopped/started the stack in Portainer, and it recreated the network and changed the subnet IP.
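
One way to keep the firewall rule valid across stack re-creations is to pin the subnet in the compose file; a sketch with a hypothetical subnet:

```
networks:
  unifi-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.0.0/24   # hypothetical; pick a range not used elsewhere
```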

myazaki commented 11 months ago

Glad you got it working 👍. The Docker implementation on Synology sometimes works in strange ways: the default bridge network might be whitelisted out of the box, while custom networks are unknown to the firewall, which then won't permit any cross-container communication.

Also watch out for changing IPs: I just stopped/started the stack in Portainer, and it recreated the network and changed the subnet IP.

That is definitely good to know for other Docker deployments on Synology as well 👍. Anyway, I gave up on this setup and transitioned to an RPi4 install. By the way, that was not easy either, as you need to figure out for yourself that it won't work on MongoDB 4.4.19 and higher due to processor requirements. Now I am waiting for my UDM-SE device, so I will drop the RPi4 install as well.

Xeevis commented 11 months ago

Anyway, I gave up on this setup and transitioned to an RPi4 install. By the way, that was not easy either, as you need to figure out for yourself that it won't work on MongoDB 4.4.19 and higher due to processor requirements. Now I am waiting for my UDM-SE device, so I will drop the RPi4 install as well.

AVX support is a requirement for MongoDB 5.0+, and as stated in the README.md, formally only MongoDB 3.6 through 4.4 are supported by this repository, so it shouldn't be an issue for the time being. A UDM is one way to solve this, but it is quite pricey, and I'm not aware of any benefits for a SOHO setup where I have a couple of Lite APs, PoE switches and a UXG-Lite gateway.

myazaki commented 11 months ago

Anyway, I gave up on this setup and transitioned to an RPi4 install. By the way, that was not easy either, as you need to figure out for yourself that it won't work on MongoDB 4.4.19 and higher due to processor requirements. Now I am waiting for my UDM-SE device, so I will drop the RPi4 install as well.

AVX support is a requirement for MongoDB 5.0+, and as stated in the README.md, formally only MongoDB 3.6 through 4.4 are supported by this repository, so it shouldn't be an issue for the time being. A UDM is one way to solve this, but it is quite pricey, and I'm not aware of any benefits for a SOHO setup where I have a couple of Lite APs, PoE switches and a UXG-Lite gateway.

I am aware of the AVX requirement for MongoDB 5.0+, but there also seems to be an ARMv8.2-A requirement for MongoDB 4.4.19 and higher; that is why 4.4.18 is the last version that works on the RPi4. It is also mentioned in https://github.com/linuxserver/docker-unifi-network-application/issues/4. You are of course right about the limited benefits of a UDM in a SOHO application; it is a matter of personal preference.
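
For reference, pinning the DB image to that version is a one-line change in the compose file; a fragment, based on the version noted above:

```
  unifi-db:
    image: mongo:4.4.18   # last 4.4.x that runs on the RPi4 (predates the ARMv8.2-A requirement)
```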

zmdtk commented 2 months ago

A bit late to the party, but seeing you are using a Synology, perhaps you have the Firewall enabled? It does a pretty good job of stopping incoming connections, even when containers are on the same subnet.


Select your Docker unifi-db listed under applications, and as the source IP enter the unifi-net subnet created by Docker. This will permit anything on the unifi-net network to access the unifi-db container. (Since it's just unifi-network-application, you might as well go with a single-IP rule, or perhaps even a port 27017 rule.)

I also got it working, thanks to Xeevis. It was indeed the Synology Firewall problem.