jacobalberty / unifi-docker

Unifi Docker files
MIT License
2.16k stars 456 forks

Login credentials don't work after restarting container #619

Closed hawkinspeter closed 1 year ago

hawkinspeter commented 1 year ago

Operating system

Ubuntu 22.04.1 LTS

UniFi Tag

v7.3.76

Docker run

version: '3.7'
services:
  mongo:
    image: mongo:3.6
    container_name: ${COMPOSE_PROJECT_NAME}_mongo
    networks:
      - unifi
    restart: always
    volumes:
      - unifi:/data

  controller:
    #image: "jacobalberty/unifi:${TAG:-latest}"
    image: "jacobalberty/unifi:v7.3.76"
    container_name: ${COMPOSE_PROJECT_NAME}_controller
    # depends_on:
    #   - mongo
    init: true
    networks:
      - unifi
    restart: always
    volumes:
      - unifi:/unifi
      - unifi_run:/var/run/unifi
      # Mount local folder for backups and autobackups
      - ./backup:/unifi/data/backup
    user: unifi
    sysctls:
      net.ipv4.ip_unprivileged_port_start: 0
    environment:
      DB_URI: mongodb://mongo/unifi
      STATDB_URI: mongodb://mongo/unifi_stat
      #DB_NAME: unifi
    ports:
      - "3478:3478/udp"   # STUN
      - "6789:6789/tcp"   # Speed test
      - "8080:8080/tcp"   # Device/controller comm.
      - "8443:8443/tcp"   # Controller GUI/API as seen in a web browser
      - "8880:8880/tcp"   # HTTP portal redirection
      - "8843:8843/tcp"   # HTTPS portal redirection
      - "10001:10001/udp" # AP discovery

  logs:
    image: bash
    container_name: ${COMPOSE_PROJECT_NAME}_logs
    # depends_on:
    #   - controller
    command: bash -c 'tail -F /unifi/log/*.log'
    restart: always
    volumes:
      - unifi:/unifi

volumes:
  unifi:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi"
  unifi_run:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi_run"

networks:
  unifi:

Bug description

I've noticed this a few times: after the container gets restarted, the login credentials don't work.

I saw a couple of stale/closed issues regarding this and found a solution: don't override the DB_NAME environment variable.
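Concretely, this is just a sketch of the relevant environment block from the compose file above; the only change is leaving DB_NAME commented out rather than setting it:

environment:
  DB_URI: mongodb://mongo/unifi
  STATDB_URI: mongodb://mongo/unifi_stat
  #DB_NAME: unifi   # leave this unset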

I've got this running on an RPi Docker swarm, using an NFS server for persistent storage, and I deploy it with:

docker stack deploy -c <(docker-compose config) unifi

My fixed docker-compose.yml is:

version: '3.7'
services:
  mongo:
    image: mongo:3.6
    container_name: ${COMPOSE_PROJECT_NAME}_mongo
    networks:

volumes:
  unifi:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi"
  unifi_run:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi_run"

networks:
  unifi:

Steps to reproduce

1. Override the DB_NAME variable to "unifi" (see the sketch after these steps).
2. Create the container, set up the instance (restore from backup) and wait until it's all working fine.
3. Stop the container and restart it.
4. Logins no longer work.
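For clarity, the override in step 1 is just this, a sketch of the same environment block as in the compose file above but with DB_NAME actually set:

environment:
  DB_URI: mongodb://mongo/unifi
  STATDB_URI: mongodb://mongo/unifi_stat
  DB_NAME: unifi   # explicitly overridden; with this set, logins break after a restart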

Relevant log output

No response

hawkinspeter commented 1 year ago

My fix didn't work - after the swarm relocated the container to another node, my login stopped working again.

Now I'm looking at the mongo data under /usr/lib/unifi/data/db to see whether I should move that to a persistent volume.
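A minimal sketch of what moving it would look like (the unifi_data volume name is the one used in the full compose file further down):

services:
  controller:
    volumes:
      - unifi_data:/usr/lib/unifi/data   # persist the embedded MongoDB data across container recreations

volumes:
  unifi_data:   # defined with the same NFS driver_opts as the other volumes in the full file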

That seems to be working better now, so I've removed the separate mongodb service and the bash logs one, as that seemed to have issues running in swarm mode.

Here's my current setup. I still deploy with:

docker stack deploy -c <(docker-compose config) unifi

and my docker-compose.yml is now:

version: '3.7'
services:
  controller:
    #image: "jacobalberty/unifi:${TAG:-latest}"
    image: "jacobalberty/unifi:v7.3.76"
    container_name: ${COMPOSE_PROJECT_NAME}_controller
    init: true
    restart: always
    volumes:
      - unifi:/unifi
      - unifi_data:/usr/lib/unifi/data
      - unifi_run:/var/run/unifi
      # Mount local folder for backups and autobackups
      - ./backup:/unifi/data/backup
    user: unifi
    sysctls:
      net.ipv4.ip_unprivileged_port_start: 0
    ports:
      - "3478:3478/udp"   # STUN
      - "6789:6789/tcp"   # Speed test
      - "8080:8080/tcp"   # Device/controller comm.
      - "8443:8443/tcp"   # Controller GUI/API as seen in a web browser
      - "8880:8880/tcp"   # HTTP portal redirection
      - "8843:8843/tcp"   # HTTPS portal redirection
      - "10001:10001/udp" # AP discovery

volumes:
  unifi:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi"
  unifi_data:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi"
  unifi_run:
    driver_opts:
      type: nfs
      o: addr=${NFS_SERVER:-192.168.1.24},${NFS_OPTS:-sync,vers=4,noatime,nodiratime,nosuid,rw}
      device: ":/docker/unifi/volumes/unifi_run"

vagari commented 1 year ago

I'm currently in the process of moving from a simple install on an old Mac. I haven't even gotten to the migration step yet - just importing a backup on the same, old version of the Controller (v6.2.26). I noticed a few oddities and wasn't so sure about having even my volumes be so ephemeral... but after some issues I decided to start going back to stock...

I too am having this issue, and I'm not in a swarm. The containers all spin up (ironically, docker logs doesn't work on the log container, but the files seem to be there). I can import my old config, log in, and get around my (disconnected) data. If I then do a docker compose down and try to bring it back up, I get an "Invalid username and/or password." error.

vagari commented 1 year ago

Looks like I missed something with the Mongo DB. Whoops. I pulled down a fresh clone and realized I had left a /unifi entry in the volumes from one of my umpteen unsuccessful attempts at adjusting them (I don't even remember adding it). My bad. Not only did removing it sort out the re-login issue, but docker logs unifi_logs is now also working. (facepalm)
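For anyone else hitting the same thing, the leftover entry was something along these lines in the controller service's volumes list (a hypothetical reconstruction; the exact line isn't posted):

volumes:
  - unifi:/unifi   # intended named volume
  - /unifi         # stray leftover entry from earlier experiments; removing it sorted out the re-login issue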

Apologies. Carry on...

github-actions[bot] commented 1 year ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
