troykelly opened this issue 1 year ago
The errors regarding
nginx: [emerg] open() "/etc/nginx/nginx/off" failed (13: Permission denied)
have been resolved in 2.10.1. Just tried a clean install with :latest & recreated my production container with :latest. Both worked! Thanks @jc21!
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
❯ Checking paths ...
❯ Setting ownership ...
❯ Dynamic resolvers ...
❯ IPv6 ...
Enabling IPV6 in hosts in: /etc/nginx/conf.d
- /etc/nginx/conf.d/production.conf
- /etc/nginx/conf.d/default.conf
- /etc/nginx/conf.d/include/ip_ranges.conf
- /etc/nginx/conf.d/include/proxy.conf
- /etc/nginx/conf.d/include/force-ssl.conf
- /etc/nginx/conf.d/include/ssl-ciphers.conf
- /etc/nginx/conf.d/include/block-exploits.conf
- /etc/nginx/conf.d/include/assets.conf
- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
- /etc/nginx/conf.d/include/resolvers.conf
Enabling IPV6 in hosts in: /data/nginx
- /data/nginx/default_host/site.conf
- /data/nginx/proxy_host/10.conf
- /data/nginx/proxy_host/16.conf
- /data/nginx/proxy_host/18.conf
- /data/nginx/proxy_host/19.conf
- /data/nginx/proxy_host/12.conf
- /data/nginx/proxy_host/15.conf
- /data/nginx/proxy_host/14.conf
- /data/nginx/proxy_host/11.conf
- /data/nginx/proxy_host/13.conf
❯ Docker secrets ...
-------------------------------------
 _ _ ____ __ __
| \ | | _ \| \/ |
| \| | |_) | |\/| |
| |\ | __/| | | |
|_| \_|_| |_| |_|
-------------------------------------
User UID: 911
User GID: 911
-------------------------------------
s6-rc: info: service prepare successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service frontend: starting
s6-rc: info: service backend: starting
s6-rc: info: service nginx successfully started
s6-rc: info: service frontend successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service legacy-services: starting
❯ Starting nginx ...
❯ Starting backend ...
s6-rc: info: service legacy-services successfully started
[3/29/2023] [4:42:48 PM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
[3/29/2023] [4:42:51 PM] [Migrate ] › ℹ info Current database version: none
[3/29/2023] [4:43:00 PM] [Setup ] › ℹ info Added Certbot plugins certbot-dns-cloudflare==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') cloudflare
[3/29/2023] [4:43:00 PM] [Setup ] › ℹ info Logrotate Timer initialized
[3/29/2023] [4:43:00 PM] [Setup ] › ℹ info Logrotate completed.
[3/29/2023] [4:43:00 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[3/29/2023] [4:43:00 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[3/29/2023] [4:43:01 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[3/29/2023] [4:43:01 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[3/29/2023] [4:43:01 PM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
[3/29/2023] [4:43:01 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[3/29/2023] [4:43:01 PM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
[3/29/2023] [4:43:01 PM] [Global ] › ℹ info Backend PID 154 listening on port 3000 ...
[3/29/2023] [4:43:03 PM] [Nginx ] › ℹ info Reloading Nginx
[3/29/2023] [4:43:03 PM] [SSL ] › ℹ info Renew Complete
Can you show your compose?
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - nginx_data:/data
      - nginx_letsencrypt:/etc/letsencrypt
Mostly default as mentioned on https://nginxproxymanager.com/guide/#quick-setup
Ok, nothing special... My compose is almost the same, but I can't get it to run without errors... I hope for a fix...
Also here are my container capabilities:
Until there is something new to test, I give up :( (for now ;))
Just tested on a Synology NAS with a compose in Portainer:
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '3080:80'
      - '3081:81'
      - '3443:443'
This runs, but... it is an old version (2.9.something).
When I do the same with the 2.10.1 version:
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:2.10.1'
    restart: always
    ports:
      - '3080:80'
      - '3081:81'
      - '3443:443'
I get the following when everything started:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
Now... when I restart the container the following is added to the logs:
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
911
usermod: no changes
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
Adding PUID and PGID to my production container before I upgrade to 2.10.1 doesn't solve the problem.
Ok, so for those of you with "permission denied" for port 80 errors, please try the docker image tag github-uidgid.
Also, do not specify a PUID or PGID environment variable. This change will revert to pre-2.10 behaviour when these are not set and will run nginx and other processes as the root user.
Of course, it will still have support for user/group if specified.
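Put together, that suggestion would look roughly like the following compose sketch. Only the image tag comes from the comment above; the ports and volume paths are placeholder examples taken from earlier in this thread:

```yaml
version: '3.8'
services:
  app:
    # tag suggested above; everything else here is an example, not a recommendation
    image: 'jc21/nginx-proxy-manager:github-uidgid'
    restart: always
    # note: no PUID/PGID set, so nginx and friends run as root (pre-2.10 behaviour)
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```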
@oPenuiC ah yes, I see what you mean. The local folder mount is incorrect on the db container, and since it was within the same data folder as the npm container, it was getting its permissions changed by accident.
The mount should be:
- ./mysql:/var/lib/mysql
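In compose terms, the point is that the db service's data folder must live outside the folder the npm container mounts at /data. A sketch (service layout assumed from the typical NPM example, not quoted from the comment):

```yaml
services:
  db:
    image: 'jc21/mariadb-aria:latest'
    volumes:
      # correct: its own host folder, outside the npm container's ./data mount
      - ./mysql:/var/lib/mysql
      # wrong: inside ./data, so NPM's ownership fix-up touches it by accident
      # - ./data/mysql:/var/lib/mysql
```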
I think it works with this version. After removing the UID/GID I had some issues, but after that, when I first spin up the container after a replace in Portainer, I get:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
After a container restart I'm getting:
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
0
usermod: no changes
❯ Checking paths ...
❯ Setting ownership ...
❯ Dynamic resolvers ...
❯ IPv6 ...
Enabling IPV6 in hosts in: /etc/nginx/conf.d
- /etc/nginx/conf.d/default.conf
- /etc/nginx/conf.d/include/assets.conf
- /etc/nginx/conf.d/include/block-exploits.conf
- /etc/nginx/conf.d/include/force-ssl.conf
- /etc/nginx/conf.d/include/ip_ranges.conf
- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
- /etc/nginx/conf.d/include/proxy.conf
- /etc/nginx/conf.d/include/ssl-ciphers.conf
- /etc/nginx/conf.d/include/resolvers.conf
- /etc/nginx/conf.d/production.conf
Enabling IPV6 in hosts in: /data/nginx
- /data/nginx/default_host/site.conf
- /data/nginx/proxy_host/4.conf
- /data/nginx/proxy_host/5.conf
- /data/nginx/proxy_host/3.conf
- /data/nginx/proxy_host/18.conf
- /data/nginx/proxy_host/6.conf
- /data/nginx/proxy_host/2.conf
- /data/nginx/proxy_host/17.conf
- /data/nginx/redirection_host/1.conf
❯ Docker secrets ...
-------------------------------------
_ _ ____ __ __
| \ | | _ \| \/ |
| \| | |_) | |\/| |
| |\ | __/| | | |
|_| \_|_| |_| |_|
-------------------------------------
User ID: 0
Group ID: 0
-------------------------------------
s6-rc: info: service prepare successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service frontend: starting
s6-rc: info: service backend: starting
s6-rc: info: service nginx successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service frontend successfully started
❯ Starting backend ...
s6-rc: info: service legacy-services: starting
❯ Starting nginx ...
s6-rc: info: service legacy-services successfully started
[3/30/2023] [7:44:22 AM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
[3/30/2023] [7:44:26 AM] [Migrate ] › ℹ info Current database version: none
[3/30/2023] [7:44:37 AM] [Setup ] › ℹ info Added Certbot plugins certbot-dns-cloudflare==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') cloudflare
[3/30/2023] [7:44:37 AM] [Setup ] › ℹ info Logrotate Timer initialized
[3/30/2023] [7:44:37 AM] [Setup ] › ℹ info Logrotate completed.
[3/30/2023] [7:44:37 AM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[3/30/2023] [7:44:37 AM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[3/30/2023] [7:44:37 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[3/30/2023] [7:44:37 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[3/30/2023] [7:44:37 AM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
[3/30/2023] [7:44:37 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[3/30/2023] [7:44:37 AM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
[3/30/2023] [7:44:37 AM] [Global ] › ℹ info Backend PID 142 listening on port 3000 ...
[3/30/2023] [7:44:39 AM] [Nginx ] › ℹ info Reloading Nginx
[3/30/2023] [7:44:39 AM] [SSL ] › ℹ info Renew Complete
[3/30/2023] [7:45:38 AM] [Express ] › ⚠ warning invalid signature
On the admin login page I'm seeing version 2.9.22.
github-uidgid boots up fine on Synology and Debian 10 using a fresh container. Later on I will test an upgrade from 2.9.22 using an existing MySQL configuration.
I can confirm that github-uidgid works fine on an old install that worked on 2.9.22 and was failing on 2.10.0 and 2.10.1; I just had to move the mysql folder out of the NPM data folder :)
Where does that leave those of us who DID get this working by defining PUID and PGID as Docker environment variables? Do I need to undefine/remove them before upgrading, or what's our path moving forward?
The update of a MySQL-configured 2.9.22 installation to github-uidgid on Synology was successful. No errors at all.
Thanks for testing everyone.
@blaine07 The env vars still work as before if they are specified, so if they work for you, keep using them :)
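For anyone keeping the env vars, a minimal compose sketch of what that looks like (the UID/GID values are placeholders, not values from this thread):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    environment:
      # optional: when set, NPM runs its processes as this user/group
      # instead of root; values here are examples only
      PUID: '1000'
      PGID: '1000'
```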
I appreciate your time to reply; it’s appreciated.
I realize this is a bumpy road with the changes needed to made to progress forward but THANK YOU for all the hard work you’re putting into this for ALL of us. We appreciate you, your hard work and dedication. Thank YOU!!!! 😀
@jc21 👏 👏 I can confirm that 2.10.2 is working on my Synology NAS when I upgrade from 2.9.22.
No issues when the container starts after updating:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
❯ Checking paths ...
❯ Setting ownership ...
❯ Dynamic resolvers ...
❯ IPv6 ...
Enabling IPV6 in hosts in: /etc/nginx/conf.d
- /etc/nginx/conf.d/default.conf
- /etc/nginx/conf.d/include/assets.conf
- /etc/nginx/conf.d/include/block-exploits.conf
- /etc/nginx/conf.d/include/force-ssl.conf
- /etc/nginx/conf.d/include/ip_ranges.conf
- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
- /etc/nginx/conf.d/include/proxy.conf
- /etc/nginx/conf.d/include/ssl-ciphers.conf
- /etc/nginx/conf.d/include/resolvers.conf
- /etc/nginx/conf.d/production.conf
Enabling IPV6 in hosts in: /data/nginx
- /data/nginx/default_host/site.conf
- /data/nginx/proxy_host/4.conf
- /data/nginx/proxy_host/5.conf
- /data/nginx/proxy_host/3.conf
- /data/nginx/proxy_host/18.conf
- /data/nginx/proxy_host/6.conf
- /data/nginx/proxy_host/2.conf
- /data/nginx/proxy_host/17.conf
- /data/nginx/redirection_host/1.conf
❯ Docker secrets ...
-------------------------------------
_ _ ____ __ __
| \ | | _ \| \/ |
| \| | |_) | |\/| |
| |\ | __/| | | |
|_| \_|_| |_| |_|
-------------------------------------
User ID: 0
Group ID: 0
-------------------------------------
s6-rc: info: service prepare successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service frontend: starting
s6-rc: info: service backend: starting
s6-rc: info: service frontend successfully started
s6-rc: info: service nginx successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service legacy-services: starting
❯ Starting backend ...
❯ Starting nginx ...
s6-rc: info: service legacy-services successfully started
[3/31/2023] [9:23:05 AM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
[3/31/2023] [9:23:11 AM] [Migrate ] › ℹ info Current database version: none
[3/31/2023] [9:23:21 AM] [Setup ] › ℹ info Added Certbot plugins certbot-dns-cloudflare==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') cloudflare
[3/31/2023] [9:23:21 AM] [Setup ] › ℹ info Logrotate Timer initialized
[3/31/2023] [9:23:22 AM] [Setup ] › ℹ info Logrotate completed.
[3/31/2023] [9:23:22 AM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[3/31/2023] [9:23:22 AM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[3/31/2023] [9:23:22 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[3/31/2023] [9:23:22 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[3/31/2023] [9:23:22 AM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
[3/31/2023] [9:23:22 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[3/31/2023] [9:23:22 AM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
[3/31/2023] [9:23:22 AM] [Global ] › ℹ info Backend PID 135 listening on port 3000 ...
[3/31/2023] [9:23:24 AM] [Nginx ] › ℹ info Reloading Nginx
[3/31/2023] [9:23:24 AM] [SSL ] › ℹ info Renew Complete
On my end I'm still stuck at Setting ownership.
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
id: 'npmuser': no such user
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
❯ Configuring npmuser ...
❯ Checking paths ...
❯ Setting ownership ...
That's on unRAID, using the PGID, PUID, UMASK and DB_SQLITE_FILE variables.
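For reference, a compose-style sketch of that variable combination (the actual values are assumptions; unRAID commonly uses UID 99 / GID 100 for its "nobody" user, but the poster's real values aren't shown):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    environment:
      PUID: '99'    # assumed: unRAID's default "nobody" UID
      PGID: '100'   # assumed: unRAID's default "users" GID
      UMASK: '022'  # example value
      DB_SQLITE_FILE: '/data/database.sqlite'
```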
I still cannot get it to work. I get the following:
$ docker-compose up -d && docker logs -f nginxpm
Creating network "nginx-proxy-manager_default" with the default driver
Creating nginxpm ... done
s6-svscan: warning: unable to iopause: Operation not permitted
s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-supervise s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
s6-svscan: warning: unable to iopause: Operation not permitted
s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-supervise s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
This happens from 2.9.21 up to 2.10.2 with both clean and 'dirty' volumes where 'dirty' just means volumes from 2.9.20.
The latest version that works for me is 2.9.20.
My machine is a raspberry pi:
$ raspinfo
System Information
Raspberry Pi 3 Model B Plus Rev 1.3
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
Raspberry Pi reference 2021-05-07
Generated using pi-gen, https://github.com/RPi-Distro/pi-gen, dcfd74d7d1fa293065ac6d565711e9ff891fe2b8, stage2
docker-compose.yaml
$ cat docker-compose.yaml
version: "3.8"
services:
  app:
    container_name: nginxpm
    image: jc21/nginx-proxy-manager:2.9.20
    restart: unless-stopped
    ports:
      - 80:80 # Public HTTP Port
      - 443:443 # Public HTTPS Port
      - 81:81 # Admin Web Port
      - 2222:2222 # Incoming port for SSH streaming
    volumes:
      - ./volumes/data:/data
      - ./volumes/letsencrypt:/etc/letsencrypt
    env_file:
      - ./nginx-pm.env
    environment:
$ cat nginx-pm.env
DB_SQLITE_FILE="/data/database.sqlite"
DISABLE_IPV6="true"
Any help is appreciated.
@jicho Could you share your docker-compose? I tried updating to version 2.10.2 on my Synology and I still get the following error:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
@antoinedelia I'm not using docker-compose files, I normally enter the docker command on the CLI for the initial creation. After that I do the updates in Portainer by changing the tag.
When I run in Portainer as a stack:
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:2.10.2'
    restart: always
    ports:
      - '3080:80'
      - '3081:81'
      - '3443:443'
I first get an error, but when I restart the container everything starts up without any issues.
I just did this test for you, so I didn't create any volume mappings.
When I go to the maintenance port I get the weirdest thing:
Strangest detail is that on my successfully upgraded production container I'm seeing version v2.10.2...
And... when I login to the container that is created with the compose above I'm getting this on the CLI:
| \ | | __ _(_)_ __ __ _| _ \ _ __ _____ ___ _| \/ | __ _ _ __ __ _ __ _ ___ _ __
| \| |/ _` | | '_ \\ \/ / |_) | '__/ _ \ \/ / | | | |\/| |/ _` | '_ \ / _` |/ _` |/ _ \ '__|
| |\ | (_| | | | | |> <| __/| | | (_) > <| |_| | | | | (_| | | | | (_| | (_| | __/ |
|_| \_|\__, |_|_| |_/_/\_\_| |_| \___/_/\_\\__, |_| |_|\__,_|_| |_|\__,_|\__, |\___|_|
|___/ |___/ |___/
Version 2.10.2 (86ddd9c) 2023-03-30 23:54:10 UTC, OpenResty 1.21.4.1, debian 10 (buster), Certbot certbot 2.4.0
Base: debian:buster-slim, linux/amd64
Certbot: jc21/nginx-full:latest, linux/amd64
Node: jc21/nginx-full:certbot, linux/amd64
Ah... after a forced reload of the page I'm seeing 2.10.2 on the login page of my compose test. Grrr... browser cache :)
Thanks a lot @jicho! Restarting the container worked!
Unfortunately, I ended up having another issue that is mentioned here: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2750
Hopefully this will also get fixed!
I've been watching this thread closely over the last few days, as I had the same issue with the latest version breaking. I've just tried using the latest tag (and also the specific version 2.10.2 tag) and recreated my container, but I'm still getting the same issue:
nginx-proxy-manager | s6-rc: info: service s6rc-oneshot-runner: starting
nginx-proxy-manager | s6-rc: info: service s6rc-oneshot-runner successfully started
nginx-proxy-manager | s6-rc: info: service fix-attrs: starting
nginx-proxy-manager | s6-rc: info: service fix-attrs successfully started
nginx-proxy-manager | s6-rc: info: service legacy-cont-init: starting
nginx-proxy-manager | s6-rc: info: service legacy-cont-init successfully started
nginx-proxy-manager | s6-rc: info: service prepare: starting
nginx-proxy-manager | ❯ Configuring npmuser ...
nginx-proxy-manager | id: 'npmuser': no such user
nginx-proxy-manager | ❯ Checking paths ...
nginx-proxy-manager | ❯ Setting ownership ...
nginx-proxy-manager | chown: changing ownership of '/etc/nginx/conf.d/include/cac_auth.conf': Read-only file system
nginx-proxy-manager | s6-rc: warning: unable to start service prepare: command exited 1
nginx-proxy-manager | /run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
nginx-proxy-manager | s6-rc: info: service legacy-cont-init: stopping
nginx-proxy-manager | s6-rc: info: service legacy-cont-init successfully stopped
nginx-proxy-manager | s6-rc: info: service fix-attrs: stopping
nginx-proxy-manager | s6-rc: info: service fix-attrs successfully stopped
nginx-proxy-manager | s6-rc: info: service s6rc-oneshot-runner: stopping
nginx-proxy-manager | s6-rc: info: service s6rc-oneshot-runner successfully stopped
nginx-proxy-manager exited with code 0
This is my current docker-compose.yml
:
version: '3'
services:
  nginx-proxy-manager:
    container_name: nginx-proxy-manager
    #image: 'jc21/nginx-proxy-manager:latest' ## Broken as of 27-03-23
    #image: 'jc21/nginx-proxy-manager:2.9.22' ## Last working version
    image: 'jc21/nginx-proxy-manager:2.10.2' ## Test version
    restart: unless-stopped
    networks:
      nginx-proxy-manager:
        ipv4_address: 172.23.0.2
    ports:
      - '2880:80'
      - '2881:81'
      - '2443:443'
    environment:
      TZ: 'Europe/London'
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
    volumes:
      - ./data:/data ## Required
      - ./letsencrypt:/etc/letsencrypt ## Required
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: ["CMD", "/bin/check-health"]
      interval: 10s
      timeout: 3s
  db:
    image: 'jc21/mariadb-aria:latest'
    container_name: mariadb-npm
    restart: unless-stopped
    networks:
      nginx-proxy-manager:
        ipv4_address: 172.23.0.3
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql
networks:
  nginx-proxy-manager:
    driver: bridge
    name: nginx-proxy-manager
    ipam:
      driver: default
      config:
        - subnet: 172.23.0.0/16
          gateway: 172.23.0.1
          ip_range: 127.23.0.1/24
The above works just fine on version 2.9.22, but not on latest or 2.10.2.
Running on a Synology NAS (DS718+ on latest DSM). Docker version 20.10.3, build 55f0773 if that helps.
@antoinedelia No problem :)
Regarding #2750, port 80 on the host system (your NAS) is always in use; Synology can't free this port due to their own (Nginx) logic :(
Maybe your container wants to use host port 80? As a test, you might change the access ports of the container. I'm just thinking out loud!
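Concretely, that test means remapping the container to host ports that Synology isn't using, as in the earlier Portainer experiment in this thread (host port numbers are just examples):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:2.10.2'
    ports:
      # keep host ports 80/443 free for Synology's own nginx
      - '3080:80'
      - '3081:81'
      - '3443:443'
```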
I have re-installed with the tag 2.10.2 and it works!!!
This is still broken for me on latest.
On initial run of the container, I got:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
If I restart, it makes progress but still fails to bind a port:
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
911
usermod: no changes
❯ Checking paths ...
❯ Setting ownership ...
❯ Dynamic resolvers ...
❯ IPv6 ...
Enabling IPV6 in hosts in: /etc/nginx/conf.d
- /etc/nginx/conf.d/default.conf
- /etc/nginx/conf.d/include/assets.conf
- /etc/nginx/conf.d/include/block-exploits.conf
- /etc/nginx/conf.d/include/force-ssl.conf
- /etc/nginx/conf.d/include/ip_ranges.conf
- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
- /etc/nginx/conf.d/include/proxy.conf
- /etc/nginx/conf.d/include/ssl-ciphers.conf
- /etc/nginx/conf.d/include/resolvers.conf
- /etc/nginx/conf.d/production.conf
Enabling IPV6 in hosts in: /data/nginx
- /data/nginx/default_host/site.conf
- /data/nginx/proxy_host/4.conf
- /data/nginx/proxy_host/2.conf
- /data/nginx/proxy_host/5.conf
- /data/nginx/proxy_host/7.conf
- /data/nginx/proxy_host/9.conf
- /data/nginx/proxy_host/6.conf
❯ Docker secrets ...
-------------------------------------
_ _ ____ __ __
| \ | | _ \| \/ |
| \| | |_) | |\/| |
| |\ | __/| | | |
|_| \_|_| |_| |_|
-------------------------------------
User UID: 911
User GID: 911
-------------------------------------
s6-rc: info: service prepare successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service frontend: starting
s6-rc: info: service backend: starting
s6-rc: info: service backend successfully started
s6-rc: info: service nginx successfully started
s6-rc: info: service frontend successfully started
s6-rc: info: service legacy-services: starting
❯ Starting nginx ...
❯ Starting backend ...
s6-rc: info: service legacy-services successfully started
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
❯ Starting nginx ...
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
[3/31/2023] [8:30:55 AM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
[3/31/2023] [8:30:55 AM] [Global ] › ℹ info Creating a new JWT key pair...
❯ Starting nginx ...
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
❯ Starting nginx ...
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
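The repeated `bind() to 0.0.0.0:80 failed (13: Permission denied)` errors are consistent with nginx running as an unprivileged user inside the container. One hedged mitigation, assuming the host kernel supports the namespaced sysctl (4.11+) and a compose-based deployment (service name and ports here are illustrative, not from this thread):

```yaml
# Sketch only: allow unprivileged processes in this container to bind
# ports below 1024. The sysctl is namespaced, so it affects only this
# container, not the host.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    sysctls:
      - net.ipv4.ip_unprivileged_port_start=0
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
```

The equivalent `docker run` flag is `--sysctl net.ipv4.ip_unprivileged_port_start=0`.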
If I delete that container and repeat my docker run command with the 2.9.22 tag, the container comes up successfully. latest is still broken; github-uidgid works.
Today I updated OpenMediaVault (x86), and when services came back up, NPM github-uidgid failed with the usual s6-sudoc error. Reverted to 2.9.22 and it worked again.
Same problem; rolling back to 2.9.22 makes it work again.
I was not able to reproduce the issue on OpenMediaVault 6 (Debian 11).
I have successfully tested latest (2.10.2) on Synology DSM7, OMV 5 (Debian 10) and OMV 6 (Debian 11).
2.10.2 seems to have resolved the issue for me on Synology DSM7.
docker run -d --name=nginx_proxy_manager \
--network=synobridge \
-e TZ=America/New_York \
-e PUID=0 \
-e PGID=0 \
-p 8341:80 \
-p 81:81 \
-p 8766:443 \
-v /volume1/docker/npm/config.json:/app/config/production.json \
-v /volume1/docker/npm/data:/data \
-v /volume1/docker/npm/letsencrypt:/etc/letsencrypt \
--restart unless-stopped \
jc21/nginx-proxy-manager:2.10.2
EDIT: Actually, I'm seeing issues on server boot-up. Not sure if this is related or not. The container starts, but nginx does not work.
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
0
usermod: no changes
❯ Checking paths ...
❯ Setting ownership ...
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
If I stop and start the container after boot, it works.
EDIT: Confirmed I do not see this behavior with 2.9.22. It works fine.
DSM7 here still doesn't come up, with both updated containers and a fresh container deployment. It says "npmuser" is non-existent, then just s6 fatal errors. Rolled back to 2.9.22: flawless and up again.
I had the same issue with 2.10.0 on a Synology DSM 6. 2.10.2 is now working for me. Thanks
I had the same issue with 2.10.0 and 2.10.1 and was forced to downgrade to 2.9.22. 2.10.2 fixed the issue for me (Raspberry Pi, armv7), thank you.
never mind, 3rd reboot and system came up and online on 2.10.2
Are you saying that after the 2.10.2 upgrade you needed to restart a few times, but then everything worked normally? I'm still pinned to 2.9.22 but would prefer to get back on stable if it is indeed stable now.
That's correct @barndawgie !
Thanks, the latest version works too.
Updating the Raspberry Pi from Buster to Bullseye fixed it for me. I still get this in the logs
❯ Configuring npmuser ... id: 'npmuser': no such user
However, the service comes up and works as expected.
I can confirm that after a container restart, latest seems to be working for me as well.
In my case, on an Orange Pi 3 LTS with Armbian, this is what happens with :latest. It doesn't work for me :/ Any ideas?
❯ Configuring npmuser ...
id: 'npmuser': no such user
usermod: group 'xxx' does not exist
s6-rc: warning: unable to start service prepare: command exited 1
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
Reboots didn't help. Thanks!
Same here on a Synology NAS
* if I reboot, NPM fails to restart with the "s6-rc: fatal: timed out" error
* but if I manually restart it from Portainer, it works
It is a "cold boot" problem.
Still happening: after the server goes down, it just doesn't come back up. A cold-boot problem indeed.
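For the cold-boot case, one hedged mitigation (not from this thread) is to pair a restart policy with a healthcheck, so an external watcher can bounce the container when it never becomes responsive. The curl-based test and its endpoint are assumptions about the image contents; port 81 is NPM's default admin port:

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    healthcheck:
      # Assumption: curl is available in the image and the admin UI
      # answers on port 81 once startup has actually succeeded.
      test: ["CMD-SHELL", "curl -fs http://localhost:81 || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 5
```

Docker's `restart` policy alone does not restart a running-but-unhealthy container, so a tool that acts on health status (or a manual restart, as reported above) is still needed.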
Today I updated Open Media Vault and Docker to the latest versions on my Orange Pi 3 LTS, and NPM failed as usual, but this time restarting didn't help. I tried several times until, once AGAIN, I switched back to 2.9.22, which works fine. Something in the new Docker seems to have worsened NPM stability.
Not sure if I got the same issue, but when I have a PGID that differs from PUID, a cold start of version 2.10.2 didn't work.
docker-compose.yml
services:
  npm:
    image: jc21/nginx-proxy-manager:2.10.2
    container_name: npm
    environment:
      - PGID=999
      - PUID=1001
    volumes:
      - ./data:/data
      - ./etc/letsencrypt:/etc/letsencrypt
    ports:
      - 10080:80
      - 10081:81
      - 10443:443
    restart: always
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Configuring npmuser ...
id: 'npmuser': no such user
usermod: group '999' does not exist
s6-rc: warning: unable to start service prepare: command exited 1
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
I've been using these PGID and PUID values for my other Docker containers just fine and have only hit this issue with NPM.
My workaround is to set PGID to the same value as PUID:
environment:
  - PGID=1001
  - PUID=1001
My environment:
Unbelievable, this issue has not been resolved for over a month.
I noticed that the timeout message appears very, very quickly (in less than 0.5 s).
❯ Configuring npmuser ...
0
usermod: no changes
❯ Checking paths ...
❯ Setting ownership ...
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
It seems this problem occurs more often on NAS or embedded systems (low-performance hardware).
Some people above also said that restarting the container sometimes lets it start normally.
And I found that if I stop other I/O-intensive and CPU-intensive programs/processes, the NPM container starts normally almost every time.
If I'm doing a disk sync or a disk scan, the NPM container fails to start almost every time.
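This observation fits the ownership step: an unconditional `chown -R` over a large `/data` on slow or busy storage can take a long time. A hedged sketch of a cheaper pattern (only touch mismatched files; the paths and defaults are illustrative, not the image's actual script):

```shell
# Only chown files whose ownership actually differs, instead of an
# unconditional recursive chown on every boot. Defaults fall back to
# the current user so the sketch runs without root.
DATA_DIR="${DATA_DIR:-./data}"
PUID="${PUID:-$(id -u)}"
PGID="${PGID:-$(id -g)}"
mkdir -p "$DATA_DIR"
find "$DATA_DIR" \( ! -user "$PUID" -o ! -group "$PGID" \) \
    -exec chown "$PUID:$PGID" {} +
```

On a volume that is already correctly owned, this visits every inode but writes nothing, which is typically far lighter on I/O than rewriting ownership metadata across the whole tree.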
So, can you confirm whether the timeout settings of the init scripts are set properly? @jc21
@befantasy I've been away on my honeymoon and it was great thanks for asking.
FWIW this was meant to be fixed and is fixed on all the architectures that I have access to. I do not have access to an OrangePi or Synology setup however which makes things very difficult.
As for the S6 scripts, they don't have a timeout set by default or by me; that error message is incorrect and misleading. I doubt disk access is a contributing factor, but that ownership script can be heavy depending on the filesystem, so I can't rule it out.
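One knob worth ruling out (an assumption on my part, not confirmed in this thread): s6-overlay v3 reads the `S6_CMD_WAIT_FOR_SERVICES_MAXTIME` environment variable (in milliseconds) as a limit on how long it waits for services to come up, and `0` disables the limit. A minimal compose sketch for testing on slow hosts:

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    environment:
      # 0 = wait indefinitely for services to come up (s6-overlay v3).
      - S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
```

If the "s6-rc: fatal: timed out" message disappears with this set, that would point at the startup window rather than the scripts themselves.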
I've created a docker image that has verbose output of the s6 scripts so we can work out exactly where things are failing.
For those affected, please use this docker image and post the previous 10 lines or so prior to the error:
jc21/nginx-proxy-manager:github-s6-verbose
Also mention:
Hi, congrats on the wedding @jc21.
Here are my results. The system is amd64, nowhere near slow (an unRAID build on a 3700X, running on cache). Running with or without PUID/PGID doesn't change anything for me, except for the actual UID set for npmuser.
I am not doing anything more than vanilla, just running the docker run command included.
I did a "first run" followed by 3 restarts in each case. I don't think subsequent restarts do anything more. The container never starts successfully.
docker run \
  -d \
  --name='Nginx-Proxy-Manager-Official' \
  --net='proxynet' \
  -e TZ="Europe/Paris" \
  -e HOST_OS="Unraid" \
  -e HOST_HOSTNAME="unRAID" \
  -e HOST_CONTAINERNAME="Nginx-Proxy-Manager-Official" \
  -e 'DB_SQLITE_FILE'='/data/database.sqlite' \
  -e 'PUID'='99' \
  -e 'PGID'='100' \
  -e 'UMASK'='022' \
  -l net.unraid.docker.managed=dockerman \
  -l net.unraid.docker.webui='http://[IP]:[PORT:81]' \
  -l net.unraid.docker.icon='https://nginxproxymanager.com/icon.png' \
  -p '81:81/tcp' \
  -p '1180:80/tcp' \
  -p '11443:443/tcp' \
  -p '3001:3000/tcp' \
  -v '/mnt/user/appdata/Nginx-Proxy-Manager-Official/data':'/data':'rw' \
  -v '/mnt/user/appdata/Nginx-Proxy-Manager-Official/letsencrypt':'/etc/letsencrypt':'rw' \
  -v '/tmp/Nginx-Proxy-Manager-Official/var/log':'/var/log':'rw' \
  --memory=1G \
  --no-healthcheck 'jc21/nginx-proxy-manager:github-s6-verbose'
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
id: 'npmuser': no such user
++ useradd -o -u 99 -U -d /tmp/npmuserhome -s /bin/false npmuser
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
❯ Configuring npmuser ...
❯ Checking paths ...
❯ Setting ownership ...
The system hangs indefinitely. Restart.
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
id: 'npmuser': no such user
++ useradd -o -u 99 -U -d /tmp/npmuserhome -s /bin/false npmuser
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
++ usermod -u 99 npmuser
usermod: no changes
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
❯ Configuring npmuser ...
❯ Checking paths ...
❯ Setting ownership ...
❯ Configuring npmuser ...
99
❯ Checking paths ...
❯ Setting ownership ...
Hanging. Restart.
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
++ usermod -u 99 npmuser
usermod: no changes
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
++ usermod -u 99 npmuser
usermod: no changes
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
❯ Configuring npmuser ...
99
❯ Checking paths ...
❯ Setting ownership ...
❯ Configuring npmuser ...
99
❯ Checking paths ...
❯ Setting ownership ...
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
Hanging. Restart.
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
++ usermod -u 99 npmuser
usermod: no changes
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
++ usermod -u 99 npmuser
usermod: no changes
++ usermod -G 100 npmuser
++ groupmod -o -g 100 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 99:100 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 99:100 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
❯ Configuring npmuser ...
99
❯ Checking paths ...
❯ Setting ownership ...
❯ Configuring npmuser ...
99
❯ Checking paths ...
❯ Setting ownership ...
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
s6-rc: warning: unable to start service prepare: command exited 111
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
Hanging. Now trying without PUID/PGID/UMASK.
docker run \
  -d \
  --name='Nginx-Proxy-Manager-Official' \
  --net='proxynet' \
  -e TZ="Europe/Paris" \
  -e HOST_OS="Unraid" \
  -e HOST_HOSTNAME="unRAID" \
  -e HOST_CONTAINERNAME="Nginx-Proxy-Manager-Official" \
  -e 'DB_SQLITE_FILE'='/data/database.sqlite' \
  -l net.unraid.docker.managed=dockerman \
  -l net.unraid.docker.webui='http://[IP]:[PORT:81]' \
  -l net.unraid.docker.icon='https://nginxproxymanager.com/icon.png' \
  -p '81:81/tcp' \
  -p '1180:80/tcp' \
  -p '11443:443/tcp' \
  -p '3001:3000/tcp' \
  -v '/mnt/user/appdata/Nginx-Proxy-Manager-Official/data':'/data':'rw' \
  -v '/mnt/user/appdata/Nginx-Proxy-Manager-Official/letsencrypt':'/etc/letsencrypt':'rw' \
  -v '/tmp/Nginx-Proxy-Manager-Official/var/log':'/var/log':'rw' \
  --memory=1G \
  --no-healthcheck 'jc21/nginx-proxy-manager:github-s6-verbose'
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
id: 'npmuser': no such user
++ useradd -o -u 0 -U -d /tmp/npmuserhome -s /bin/false npmuser
++ usermod -G 0 npmuser
++ groupmod -o -g 0 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 0:0 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 0:0 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
❯ Configuring npmuser ...
❯ Checking paths ...
❯ Setting ownership ...
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
id: 'npmuser': no such user
++ useradd -o -u 0 -U -d /tmp/npmuserhome -s /bin/false npmuser
++ usermod -G 0 npmuser
++ groupmod -o -g 0 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 0:0 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 0:0 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
++ log_info 'Configuring npmuser ...'
++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
++ id -u npmuser
++ usermod -u 0 npmuser
usermod: no changes
++ usermod -G 0 npmuser
++ groupmod -o -g 0 npmuser
++ mkdir -p /tmp/npmuserhome
++ chown -R 0:0 /tmp/npmuserhome
+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
++ set -e
++ set -x
++ log_info 'Checking paths ...'
++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
++ '[' '!' -d /data ']'
++ '[' '!' -d /etc/letsencrypt ']'
++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
++ touch /var/log/nginx/error.log
++ chmod 777 /var/log/nginx/error.log
++ chmod -R 777 /var/cache/nginx
++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
++ set -e
++ set -x
++ log_info 'Setting ownership ...'
++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
++ chown root /tmp/nginx
++ chown -R 0:0 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
❯ Configuring npmuser ...
❯ Checking paths ...
❯ Setting ownership ...
❯ Configuring npmuser ...
0
❯ Checking paths ...
❯ Setting ownership ...
s6-rc: fatal: timed out
s6-sudoc: fatal: unable to get exit status from server: Operation timed out
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
[… the same prepare → s6-rc timeout → restart cycle repeats several more times …]
The container never starts successfully.
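For context, the `prepare` oneshot that keeps timing out above chains several scripts; one of them (visible in the successful run further down this thread as `50-ipv6.sh`) un-comments `listen [::]` directives with a sed expression. A minimal standalone sketch of that substitution, using an illustrative temp file rather than the real `/data/nginx` configs:

```shell
# Reproduce the IPv6 enable step seen in the 50-ipv6.sh trace.
# The file and its contents are illustrative, not the real host configs.
TMPCONF="$(mktemp)"
printf '    #listen [::]:80;\n' > "$TMPCONF"

# Same substitution the trace shows: un-comment "#listen [::]" lines,
# preserving the leading whitespace captured by the group.
sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' "$TMPCONF"

RESULT="$(cat "$TMPCONF")"
echo "$RESULT"   # prints "    listen [::]:80;"
rm -f "$TMPCONF"
```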
Orange Pi 3 LTS here; I tried the github-s6-verbose variant and it also fails to start after a reboot. My compose looks like this (EDIT: :latest is just what I usually use; for this test I changed it to :github-s6-verbose):
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '8088:80'
      - '81:81'
      - '4443:443'
    volumes:
      - /srv/dev-disk-by-uuid-aeae213f-8ce4-405c-9d96-db90e69c28f8/Config/nginx-proxy/data:/data
      - /srv/dev-disk-by-uuid-aeae213f-8ce4-405c-9d96-db90e69c28f8/Config/nginx-proxy/letsencrypt:/etc/letsencrypt
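If the prepare oneshot is simply slow on this hardware (the recursive `chown -R` over a bind-mounted `/data` can take a while on SBC storage that is still spinning up after a reboot), one experiment is to raise s6-overlay's startup wait limit. `S6_CMD_WAIT_FOR_SERVICES_MAXTIME` is a documented s6-overlay v3 environment variable, but whether it governs the exact `s6-rc: fatal: timed out` seen here is unverified; the value below is illustrative:

```yaml
services:
  app:
    environment:
      # s6-overlay v3: milliseconds to wait for services before giving up.
      # 0 disables the limit; 300000 (5 min) here is an illustrative value.
      S6_CMD_WAIT_FOR_SERVICES_MAXTIME: "300000"
```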
The restart policy is set to unless-stopped.
Log after a reboot of OMV6:
2023-05-03T08:13:22.933547051Z
2023-05-03T08:13:22.933559677Z at ChildProcess.exithandler (node:child_process:402:12)
2023-05-03T08:13:22.933611469Z at ChildProcess.emit (node:events:513:28)
2023-05-03T08:13:22.933627510Z at maybeClose (node:internal/child_process:1100:16)
2023-05-03T08:13:22.933640594Z at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)
2023-05-03T08:13:22.934249640Z [5/3/2023] [8:13:22 AM] [IP Ranges] › ? info Fetching IP Ranges from online services...
2023-05-03T08:13:22.935998736Z [5/3/2023] [8:13:22 AM] [IP Ranges] › ? info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
2023-05-03T08:13:23.281047675Z [5/3/2023] [8:13:23 AM] [IP Ranges] › ? info Fetching https://www.cloudflare.com/ips-v4
2023-05-03T08:13:23.380733934Z [5/3/2023] [8:13:23 AM] [IP Ranges] › ? info Fetching https://www.cloudflare.com/ips-v6
2023-05-03T08:13:23.552052083Z [5/3/2023] [8:13:23 AM] [SSL ] › ? info Let's Encrypt Renewal Timer initialized
2023-05-03T08:13:23.554681310Z [5/3/2023] [8:13:23 AM] [SSL ] › ? info Renewing SSL certs close to expiry...
2023-05-03T08:13:23.578792692Z [5/3/2023] [8:13:23 AM] [IP Ranges] › ? info IP Ranges Renewal Timer initialized
2023-05-03T08:13:23.589617728Z [5/3/2023] [8:13:23 AM] [Global ] › ? info Backend PID 153 listening on port 3000 ...
2023-05-03T08:13:27.278160753Z [5/3/2023] [8:13:27 AM] [Nginx ] › ? info Reloading Nginx
2023-05-03T08:13:27.727145940Z [5/3/2023] [8:13:27 AM] [SSL ] › ? info Renew Complete
2023-05-03T08:13:48.586731238Z [5/3/2023] [8:13:48 AM] [Express ] › ? warning invalid signature
2023-05-03T08:14:14.398059395Z s6-rc: info: service legacy-services: stopping
2023-05-03T08:14:14.636890488Z s6-rc: info: service legacy-services successfully stopped
2023-05-03T08:14:14.637297741Z s6-rc: info: service nginx: stopping
2023-05-03T08:14:14.638912711Z s6-rc: info: service frontend: stopping
2023-05-03T08:14:14.643514244Z s6-rc: info: service backend: stopping
2023-05-03T08:14:14.660908869Z s6-rc: info: service frontend successfully stopped
2023-05-03T08:14:15.062773342Z s6-rc: info: service backend successfully stopped
2023-05-03T08:14:15.351409584Z s6-rc: info: service nginx successfully stopped
2023-05-03T08:14:15.352019964Z s6-rc: info: service prepare: stopping
2023-05-03T08:14:15.355145945Z s6-rc: info: service prepare successfully stopped
2023-05-03T08:14:15.355475697Z s6-rc: info: service legacy-cont-init: stopping
2023-05-03T08:14:15.370218303Z s6-rc: info: service legacy-cont-init successfully stopped
2023-05-03T08:14:15.371037351Z s6-rc: info: service fix-attrs: stopping
2023-05-03T08:14:15.378066276Z s6-rc: info: service fix-attrs successfully stopped
2023-05-03T08:14:15.378606072Z s6-rc: info: service s6rc-oneshot-runner: stopping
2023-05-03T08:14:15.573874143Z s6-rc: info: service s6rc-oneshot-runner successfully stopped
2023-05-03T08:14:15.591611145Z [5/3/2023] [8:14:15 AM] [Global ] › ? info PID 153 received SIGTERM
2023-05-03T08:14:15.592036273Z [5/3/2023] [8:14:15 AM] [Global ] › ? info Stopping.
2023-05-03T08:15:41.631405877Z s6-rc: info: service s6rc-oneshot-runner: starting
2023-05-03T08:15:41.917144742Z s6-rc: info: service s6rc-oneshot-runner successfully started
2023-05-03T08:15:41.996656778Z s6-rc: info: service fix-attrs: starting
2023-05-03T08:15:43.010536011Z s6-rc: info: service fix-attrs successfully started
2023-05-03T08:15:43.011203849Z s6-rc: info: service legacy-cont-init: starting
2023-05-03T08:15:43.089068539Z s6-rc: info: service legacy-cont-init successfully started
2023-05-03T08:15:43.089206749Z s6-rc: info: service prepare: starting
2023-05-03T08:15:44.335801380Z ++ log_info 'Configuring npmuser ...'
2023-05-03T08:15:44.336047632Z ++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
2023-05-03T08:15:44.336100007Z ++ id -u npmuser
2023-05-03T08:15:44.335801380Z ❯ Configuring npmuser ...
2023-05-03T08:15:44.342315302Z 0
2023-05-03T08:15:44.343253226Z ++ usermod -u 0 npmuser
2023-05-03T08:15:45.527394653Z usermod: no changes
2023-05-03T08:15:45.701066747Z ++ usermod -G 0 npmuser
2023-05-03T08:15:45.737616971Z ++ groupmod -o -g 0 npmuser
2023-05-03T08:15:46.527514329Z s6-rc: fatal: timed out
2023-05-03T08:15:46.529774803Z s6-sudoc: fatal: unable to get exit status from server: Operation timed out
2023-05-03T08:15:46.690481803Z /run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
Then if I restart the container, it starts:
2023-05-03T08:19:00.492208508Z s6-rc: info: service legacy-cont-init: stopping
2023-05-03T08:19:00.547040141Z s6-rc: info: service legacy-cont-init successfully stopped
2023-05-03T08:19:00.548053815Z s6-rc: info: service fix-attrs: stopping
2023-05-03T08:19:00.553519895Z s6-rc: info: service fix-attrs successfully stopped
2023-05-03T08:19:00.558752515Z s6-rc: info: service s6rc-oneshot-runner: stopping
2023-05-03T08:19:00.596086859Z s6-rc: info: service s6rc-oneshot-runner successfully stopped
2023-05-03T08:19:09.746270147Z s6-rc: info: service s6rc-oneshot-runner: starting
2023-05-03T08:19:09.825489534Z s6-rc: info: service s6rc-oneshot-runner successfully started
2023-05-03T08:19:09.826045496Z s6-rc: info: service fix-attrs: starting
2023-05-03T08:19:09.850843878Z s6-rc: info: service fix-attrs successfully started
2023-05-03T08:19:09.851303881Z s6-rc: info: service legacy-cont-init: starting
2023-05-03T08:19:09.874864546Z s6-rc: info: service legacy-cont-init successfully started
2023-05-03T08:19:09.875255298Z s6-rc: info: service prepare: starting
2023-05-03T08:19:09.969539874Z ++ log_info 'Configuring npmuser ...'
2023-05-03T08:19:09.969759667Z ++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
2023-05-03T08:19:09.969908293Z ❯ Configuring npmuser ...
2023-05-03T08:19:09.971848931Z ++ id -u npmuser
2023-05-03T08:19:09.976592964Z 0
2023-05-03T08:19:09.978063641Z ++ usermod -u 0 npmuser
2023-05-03T08:19:10.025175012Z usermod: no changes
2023-05-03T08:19:10.030681926Z ++ usermod -G 0 npmuser
2023-05-03T08:19:10.048152673Z ++ groupmod -o -g 0 npmuser
2023-05-03T08:19:10.443314183Z ++ mkdir -p /tmp/npmuserhome
2023-05-03T08:19:10.532077053Z ++ chown -R 0:0 /tmp/npmuserhome
2023-05-03T08:19:10.541174450Z + . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
2023-05-03T08:19:10.542594751Z ++ set -e
2023-05-03T08:19:10.542678502Z ++ set -x
2023-05-03T08:19:10.542695085Z ++ log_info 'Checking paths ...'
2023-05-03T08:19:10.542754169Z ❯ Checking paths ...
2023-05-03T08:19:10.542765794Z ++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
2023-05-03T08:19:10.542851128Z ++ '[' '!' -d /data ']'
2023-05-03T08:19:10.543059630Z ++ '[' '!' -d /etc/letsencrypt ']'
2023-05-03T08:19:10.543374215Z ++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
2023-05-03T08:19:10.602807714Z ++ touch /var/log/nginx/error.log
2023-05-03T08:19:10.611335023Z ++ chmod 777 /var/log/nginx/error.log
2023-05-03T08:19:10.617675192Z ++ chmod -R 777 /var/cache/nginx
2023-05-03T08:19:10.633829680Z ++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
2023-05-03T08:19:10.637724499Z + . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
2023-05-03T08:19:10.638517755Z ++ set -e
2023-05-03T08:19:10.638693131Z ++ set -x
2023-05-03T08:19:10.638818882Z ++ log_info 'Setting ownership ...'
2023-05-03T08:19:10.638961549Z ++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
2023-05-03T08:19:10.639002633Z ❯ Setting ownership ...
2023-05-03T08:19:10.639133426Z ++ chown root /tmp/nginx
2023-05-03T08:19:10.643074036Z ++ chown -R 0:0 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
2023-05-03T08:19:10.671255567Z ++ chown -R 0:0 /etc/nginx/nginx /etc/nginx/nginx.conf /etc/nginx/conf.d
2023-05-03T08:19:10.682718105Z + . /etc/s6-overlay/s6-rc.d/prepare/40-dynamic.sh
2023-05-03T08:19:10.683497736Z ++ set -e
2023-05-03T08:19:10.683581069Z ++ set -x
2023-05-03T08:19:10.683668403Z ++ log_info 'Dynamic resolvers ...'
2023-05-03T08:19:10.683735654Z ++ echo -e '\E[1;34m❯ \E[1;36mDynamic resolvers ...\E[0m'
2023-05-03T08:19:10.683750279Z ❯ Dynamic resolvers ...
2023-05-03T08:19:10.685709334Z +++ echo ''
2023-05-03T08:19:10.686168629Z +++ tr '[:upper:]' '[:lower:]'
2023-05-03T08:19:10.691963628Z ++ DISABLE_IPV6=
2023-05-03T08:19:10.692286255Z ++ '[' '' == true ']'
2023-05-03T08:19:10.692329964Z ++ '[' '' == on ']'
2023-05-03T08:19:10.692454256Z ++ '[' '' == 1 ']'
2023-05-03T08:19:10.692476090Z ++ '[' '' == yes ']'
2023-05-03T08:19:10.693637640Z +++ awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf
2023-05-03T08:19:10.712891316Z ++ echo resolver '127.0.0.11 valid=10s;'
2023-05-03T08:19:10.713687946Z + . /etc/s6-overlay/s6-rc.d/prepare/50-ipv6.sh
2023-05-03T08:19:10.730021685Z ++ set -e
2023-05-03T08:19:10.730147353Z ++ set -x
2023-05-03T08:19:10.730164728Z ++ log_info 'IPv6 ...'
2023-05-03T08:19:10.730376730Z ++ echo -e '\E[1;34m❯ \E[1;36mIPv6 ...\E[0m'
2023-05-03T08:19:10.730390646Z ❯ IPv6 ...
2023-05-03T08:19:10.732203076Z +++ echo ''
2023-05-03T08:19:10.732569412Z +++ tr '[:upper:]' '[:lower:]'
2023-05-03T08:19:10.735744725Z ++ DISABLE_IPV6=
2023-05-03T08:19:10.736203020Z ++ process_folder /etc/nginx/conf.d
2023-05-03T08:19:10.737382070Z +++ find /etc/nginx/conf.d -type f -name '*.conf'
2023-05-03T08:19:10.756394036Z ++ FILES='/etc/nginx/conf.d/include/proxy.conf
2023-05-03T08:19:10.756497412Z /etc/nginx/conf.d/include/block-exploits.conf
2023-05-03T08:19:10.756513871Z /etc/nginx/conf.d/include/ip_ranges.conf
2023-05-03T08:19:10.756527204Z /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-05-03T08:19:10.756540746Z /etc/nginx/conf.d/include/force-ssl.conf
2023-05-03T08:19:10.756553704Z /etc/nginx/conf.d/include/assets.conf
2023-05-03T08:19:10.756555412Z Enabling IPV6 in hosts in: /etc/nginx/conf.d
2023-05-03T08:19:10.756567829Z /etc/nginx/conf.d/include/ssl-ciphers.conf
2023-05-03T08:19:10.756617038Z /etc/nginx/conf.d/include/resolvers.conf
2023-05-03T08:19:10.756631996Z /etc/nginx/conf.d/default.conf
2023-05-03T08:19:10.756645788Z /etc/nginx/conf.d/production.conf'
2023-05-03T08:19:10.756658705Z ++ SED_REGEX=
2023-05-03T08:19:10.756672372Z ++ '[' '' == true ']'
2023-05-03T08:19:10.756685205Z ++ '[' '' == on ']'
2023-05-03T08:19:10.756698122Z ++ '[' '' == 1 ']'
2023-05-03T08:19:10.756714455Z ++ '[' '' == yes ']'
2023-05-03T08:19:10.756785456Z ++ echo 'Enabling IPV6 in hosts in: /etc/nginx/conf.d'
2023-05-03T08:19:10.756799914Z ++ SED_REGEX='s/^(\s*)#listen \[::\]/\1listen [::]/g'
2023-05-03T08:19:10.757162500Z ++ for FILE in $FILES
2023-05-03T08:19:10.757252917Z ++ echo '- /etc/nginx/conf.d/include/proxy.conf'
2023-05-03T08:19:10.757269209Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/proxy.conf
2023-05-03T08:19:10.757162500Z - /etc/nginx/conf.d/include/proxy.conf
2023-05-03T08:19:10.775925423Z ++ for FILE in $FILES
2023-05-03T08:19:10.776049965Z ++ echo '- /etc/nginx/conf.d/include/block-exploits.conf'
2023-05-03T08:19:10.775925423Z - /etc/nginx/conf.d/include/block-exploits.conf
2023-05-03T08:19:10.776066924Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/block-exploits.conf
2023-05-03T08:19:10.804135995Z - /etc/nginx/conf.d/include/ip_ranges.conf
2023-05-03T08:19:10.804135995Z ++ for FILE in $FILES
2023-05-03T08:19:10.804266746Z ++ echo '- /etc/nginx/conf.d/include/ip_ranges.conf'
2023-05-03T08:19:10.804281204Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/ip_ranges.conf
2023-05-03T08:19:10.821258448Z - /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-05-03T08:19:10.821258448Z ++ for FILE in $FILES
2023-05-03T08:19:10.821390324Z ++ echo '- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf'
2023-05-03T08:19:10.821533158Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-05-03T08:19:10.839243073Z ++ for FILE in $FILES
2023-05-03T08:19:10.839361699Z ++ echo '- /etc/nginx/conf.d/include/force-ssl.conf'
2023-05-03T08:19:10.839377324Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/force-ssl.conf
2023-05-03T08:19:10.839243032Z - /etc/nginx/conf.d/include/force-ssl.conf
2023-05-03T08:19:10.855182643Z - /etc/nginx/conf.d/include/assets.conf
2023-05-03T08:19:10.855199477Z ++ for FILE in $FILES
2023-05-03T08:19:10.855327561Z ++ echo '- /etc/nginx/conf.d/include/assets.conf'
2023-05-03T08:19:10.855342894Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/assets.conf
2023-05-03T08:19:10.861154643Z - /etc/nginx/conf.d/include/ssl-ciphers.conf
2023-05-03T08:19:10.861154435Z ++ for FILE in $FILES
2023-05-03T08:19:10.861294561Z ++ echo '- /etc/nginx/conf.d/include/ssl-ciphers.conf'
2023-05-03T08:19:10.861309811Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/ssl-ciphers.conf
2023-05-03T08:19:10.866794432Z - /etc/nginx/conf.d/include/resolvers.conf
2023-05-03T08:19:10.866794391Z ++ for FILE in $FILES
2023-05-03T08:19:10.866939350Z ++ echo '- /etc/nginx/conf.d/include/resolvers.conf'
2023-05-03T08:19:10.867006142Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/resolvers.conf
2023-05-03T08:19:10.872003761Z ++ for FILE in $FILES
2023-05-03T08:19:10.872140762Z ++ echo '- /etc/nginx/conf.d/default.conf'
2023-05-03T08:19:10.872027219Z - /etc/nginx/conf.d/default.conf
2023-05-03T08:19:10.872157720Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/default.conf
2023-05-03T08:19:10.878074720Z - /etc/nginx/conf.d/production.conf
2023-05-03T08:19:10.878074595Z ++ for FILE in $FILES
2023-05-03T08:19:10.878199804Z ++ echo '- /etc/nginx/conf.d/production.conf'
2023-05-03T08:19:10.878214846Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/production.conf
2023-05-03T08:19:10.884042553Z ++ chown -R 0:0 /etc/nginx/conf.d
2023-05-03T08:19:10.888009206Z ++ process_folder /data/nginx
2023-05-03T08:19:10.889115130Z +++ find /data/nginx -type f -name '*.conf'
2023-05-03T08:19:10.893517494Z ++ FILES='/data/nginx/proxy_host/7.conf
2023-05-03T08:19:10.893614620Z /data/nginx/proxy_host/14.conf
2023-05-03T08:19:10.893630787Z /data/nginx/proxy_host/4.conf
2023-05-03T08:19:10.893643995Z /data/nginx/proxy_host/6.conf
2023-05-03T08:19:10.893656537Z /data/nginx/proxy_host/21.conf
2023-05-03T08:19:10.893668912Z /data/nginx/proxy_host/20.conf
2023-05-03T08:19:10.893681620Z /data/nginx/proxy_host/1.conf
2023-05-03T08:19:10.893693954Z /data/nginx/proxy_host/19.conf
2023-05-03T08:19:10.893706537Z /data/nginx/proxy_host/3.conf
2023-05-03T08:19:10.893719037Z /data/nginx/proxy_host/15.conf
2023-05-03T08:19:10.893731787Z /data/nginx/proxy_host/5.conf
2023-05-03T08:19:10.893744412Z /data/nginx/proxy_host/12.conf
2023-05-03T08:19:10.893756871Z /data/nginx/proxy_host/9.conf
2023-05-03T08:19:10.893769288Z /data/nginx/proxy_host/2.conf
2023-05-03T08:19:10.893781746Z /data/nginx/proxy_host/8.conf
2023-05-03T08:19:10.893794204Z /data/nginx/proxy_host/24.conf
2023-05-03T08:19:10.893806830Z /data/nginx/proxy_host/11.conf
2023-05-03T08:19:10.893819038Z /data/nginx/proxy_host/13.conf
2023-05-03T08:19:10.893831538Z /data/nginx/proxy_host/16.conf
2023-05-03T08:19:10.893844080Z /data/nginx/proxy_host/22.conf
2023-05-03T08:19:10.893856705Z /data/nginx/proxy_host/23.conf
2023-05-03T08:19:10.893869163Z /data/nginx/proxy_host/18.conf
2023-05-03T08:19:10.893881538Z /data/nginx/proxy_host/10.conf
2023-05-03T08:19:10.893893913Z /data/nginx/proxy_host/17.conf'
2023-05-03T08:19:10.893906330Z ++ SED_REGEX=
2023-05-03T08:19:10.893918664Z ++ '[' '' == true ']'
2023-05-03T08:19:10.894253291Z ++ '[' '' == on ']'
2023-05-03T08:19:10.894343417Z ++ '[' '' == 1 ']'
2023-05-03T08:19:10.894292541Z Enabling IPV6 in hosts in: /data/nginx
2023-05-03T08:19:10.894360750Z ++ '[' '' == yes ']'
2023-05-03T08:19:10.894411709Z ++ echo 'Enabling IPV6 in hosts in: /data/nginx'
2023-05-03T08:19:10.894427084Z ++ SED_REGEX='s/^(\s*)#listen \[::\]/\1listen [::]/g'
2023-05-03T08:19:10.894441042Z ++ for FILE in $FILES
2023-05-03T08:19:10.894453709Z ++ echo '- /data/nginx/proxy_host/7.conf'
2023-05-03T08:19:10.894387167Z - /data/nginx/proxy_host/7.conf
2023-05-03T08:19:10.894476626Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/7.conf
2023-05-03T08:19:10.901479591Z ++ for FILE in $FILES
2023-05-03T08:19:10.901601926Z ++ echo '- /data/nginx/proxy_host/14.conf'
2023-05-03T08:19:10.901619134Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/14.conf
2023-05-03T08:19:10.901488175Z - /data/nginx/proxy_host/14.conf
2023-05-03T08:19:10.907162714Z - /data/nginx/proxy_host/4.conf
2023-05-03T08:19:10.907162756Z ++ for FILE in $FILES
2023-05-03T08:19:10.907288882Z ++ echo '- /data/nginx/proxy_host/4.conf'
2023-05-03T08:19:10.907302715Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/4.conf
2023-05-03T08:19:10.913179006Z - /data/nginx/proxy_host/6.conf
2023-05-03T08:19:10.913213215Z ++ for FILE in $FILES
2023-05-03T08:19:10.913306841Z ++ echo '- /data/nginx/proxy_host/6.conf'
2023-05-03T08:19:10.913321341Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/6.conf
2023-05-03T08:19:10.918697253Z - /data/nginx/proxy_host/21.conf
2023-05-03T08:19:10.918697253Z ++ for FILE in $FILES
2023-05-03T08:19:10.918822713Z ++ echo '- /data/nginx/proxy_host/21.conf'
2023-05-03T08:19:10.918837671Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/21.conf
2023-05-03T08:19:10.924256626Z ++ for FILE in $FILES
2023-05-03T08:19:10.924397752Z ++ echo '- /data/nginx/proxy_host/20.conf'
2023-05-03T08:19:10.924414960Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/20.conf
2023-05-03T08:19:10.924303543Z - /data/nginx/proxy_host/20.conf
2023-05-03T08:19:10.929784706Z ++ for FILE in $FILES
2023-05-03T08:19:10.929896290Z ++ echo '- /data/nginx/proxy_host/1.conf'
2023-05-03T08:19:10.929956832Z - /data/nginx/proxy_host/1.conf
2023-05-03T08:19:10.929976249Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/1.conf
2023-05-03T08:19:10.935385620Z ++ for FILE in $FILES
2023-05-03T08:19:10.935506579Z ++ echo '- /data/nginx/proxy_host/19.conf'
2023-05-03T08:19:10.935522288Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/19.conf
2023-05-03T08:19:10.935424912Z - /data/nginx/proxy_host/19.conf
2023-05-03T08:19:10.941033118Z - /data/nginx/proxy_host/3.conf
2023-05-03T08:19:10.941032868Z ++ for FILE in $FILES
2023-05-03T08:19:10.941172160Z ++ echo '- /data/nginx/proxy_host/3.conf'
2023-05-03T08:19:10.941187452Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/3.conf
2023-05-03T08:19:10.946481739Z ++ for FILE in $FILES
2023-05-03T08:19:10.946587907Z ++ echo '- /data/nginx/proxy_host/15.conf'
2023-05-03T08:19:10.946604323Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/15.conf
2023-05-03T08:19:10.946651490Z - /data/nginx/proxy_host/15.conf
2023-05-03T08:19:10.952928201Z ++ for FILE in $FILES
2023-05-03T08:19:10.953040202Z ++ echo '- /data/nginx/proxy_host/5.conf'
2023-05-03T08:19:10.953055993Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/5.conf
2023-05-03T08:19:10.952933242Z - /data/nginx/proxy_host/5.conf
2023-05-03T08:19:10.958911493Z - /data/nginx/proxy_host/12.conf
2023-05-03T08:19:10.958911534Z ++ for FILE in $FILES
2023-05-03T08:19:10.959045369Z ++ echo '- /data/nginx/proxy_host/12.conf'
2023-05-03T08:19:10.959060494Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/12.conf
2023-05-03T08:19:10.965234662Z ++ for FILE in $FILES
2023-05-03T08:19:10.965366163Z ++ echo '- /data/nginx/proxy_host/9.conf'
2023-05-03T08:19:10.965382121Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/9.conf
2023-05-03T08:19:10.965243203Z - /data/nginx/proxy_host/9.conf
2023-05-03T08:19:10.971045744Z - /data/nginx/proxy_host/2.conf
2023-05-03T08:19:10.971045786Z ++ for FILE in $FILES
2023-05-03T08:19:10.971180120Z ++ echo '- /data/nginx/proxy_host/2.conf'
2023-05-03T08:19:10.971195620Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/2.conf
2023-05-03T08:19:10.977624290Z - /data/nginx/proxy_host/8.conf
2023-05-03T08:19:10.977624290Z ++ for FILE in $FILES
2023-05-03T08:19:10.977747957Z ++ echo '- /data/nginx/proxy_host/8.conf'
2023-05-03T08:19:10.977762333Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/8.conf
2023-05-03T08:19:10.983755249Z - /data/nginx/proxy_host/24.conf
2023-05-03T08:19:10.983755249Z ++ for FILE in $FILES
2023-05-03T08:19:10.983876959Z ++ echo '- /data/nginx/proxy_host/24.conf'
2023-05-03T08:19:10.983891250Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/24.conf
2023-05-03T08:19:10.989399122Z ++ for FILE in $FILES
2023-05-03T08:19:10.989502623Z ++ echo '- /data/nginx/proxy_host/11.conf'
2023-05-03T08:19:10.989587124Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/11.conf
2023-05-03T08:19:10.989399164Z - /data/nginx/proxy_host/11.conf
2023-05-03T08:19:10.994887619Z - /data/nginx/proxy_host/13.conf
2023-05-03T08:19:10.994887577Z ++ for FILE in $FILES
2023-05-03T08:19:10.995018370Z ++ echo '- /data/nginx/proxy_host/13.conf'
2023-05-03T08:19:10.995033453Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/13.conf
2023-05-03T08:19:11.001196330Z - /data/nginx/proxy_host/16.conf
2023-05-03T08:19:11.001360372Z ++ for FILE in $FILES
2023-05-03T08:19:11.001392123Z ++ echo '- /data/nginx/proxy_host/16.conf'
2023-05-03T08:19:11.001408206Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/16.conf
2023-05-03T08:19:11.007431790Z - /data/nginx/proxy_host/22.conf
2023-05-03T08:19:11.007575916Z ++ for FILE in $FILES
2023-05-03T08:19:11.007600291Z ++ echo '- /data/nginx/proxy_host/22.conf'
2023-05-03T08:19:11.007614583Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/22.conf
2023-05-03T08:19:11.014105711Z - /data/nginx/proxy_host/23.conf
2023-05-03T08:19:11.014255796Z ++ for FILE in $FILES
2023-05-03T08:19:11.014277254Z ++ echo '- /data/nginx/proxy_host/23.conf'
2023-05-03T08:19:11.014291713Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/23.conf
2023-05-03T08:19:11.020660424Z - /data/nginx/proxy_host/18.conf
2023-05-03T08:19:11.020804467Z ++ for FILE in $FILES
2023-05-03T08:19:11.020910759Z ++ echo '- /data/nginx/proxy_host/18.conf'
2023-05-03T08:19:11.020936217Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/18.conf
2023-05-03T08:19:11.027006302Z - /data/nginx/proxy_host/10.conf
2023-05-03T08:19:11.027151219Z ++ for FILE in $FILES
2023-05-03T08:19:11.027173636Z ++ echo '- /data/nginx/proxy_host/10.conf'
2023-05-03T08:19:11.027187678Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/10.conf
2023-05-03T08:19:11.034454854Z ++ for FILE in $FILES
2023-05-03T08:19:11.034916773Z ++ echo '- /data/nginx/proxy_host/17.conf'
2023-05-03T08:19:11.035213567Z - /data/nginx/proxy_host/17.conf
2023-05-03T08:19:11.035574945Z ++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/17.conf
2023-05-03T08:19:11.042531035Z ++ chown -R 0:0 /data/nginx
2023-05-03T08:19:11.047272151Z + . /etc/s6-overlay/s6-rc.d/prepare/60-secrets.sh
2023-05-03T08:19:11.052088310Z ++ set -e
2023-05-03T08:19:11.052640814Z ++ set -x
2023-05-03T08:19:11.052732315Z ++ log_info 'Docker secrets ...'
2023-05-03T08:19:11.052752315Z ++ echo -e '\E[1;34m❯ \E[1;36mDocker secrets ...\E[0m'
2023-05-03T08:19:11.052935358Z ❯ Docker secrets ...
2023-05-03T08:19:11.057221638Z +++ find /var/run/s6/container_environment/
2023-05-03T08:19:11.057355389Z +++ grep '__FILE$'
2023-05-03T08:19:11.071027276Z + . /etc/s6-overlay/s6-rc.d/prepare/90-banner.sh
2023-05-03T08:19:11.071883073Z ++ set -e
2023-05-03T08:19:11.072136492Z ++ echo '
2023-05-03T08:19:11.072218159Z -------------------------------------
2023-05-03T08:19:11.072234742Z _ _ ____ __ __
2023-05-03T08:19:11.072250659Z | \ | | _ \| \/ |
2023-05-03T08:19:11.072264243Z | \| | |_) | |\/| |
2023-05-03T08:19:11.072277743Z | |\ | __/| | | |
2023-05-03T08:19:11.072291285Z |_| \_|_| |_| |_|
2023-05-03T08:19:11.072304160Z -------------------------------------
2023-05-03T08:19:11.072317868Z User ID: 0
2023-05-03T08:19:11.072330743Z Group ID: 0
2023-05-03T08:19:11.072343035Z -------------------------------------
2023-05-03T08:19:11.072355660Z '
2023-05-03T08:19:11.072514328Z
2023-05-03T08:19:11.072541078Z -------------------------------------
2023-05-03T08:19:11.072579412Z _ _ ____ __ __
2023-05-03T08:19:11.072594412Z | \ | | _ \| \/ |
2023-05-03T08:19:11.072607870Z | \| | |_) | |\/| |
2023-05-03T08:19:11.072620620Z | |\ | __/| | | |
2023-05-03T08:19:11.072633162Z |_| \_|_| |_| |_|
2023-05-03T08:19:11.072660245Z -------------------------------------
2023-05-03T08:19:11.072673579Z User ID: 0
2023-05-03T08:19:11.072686329Z Group ID: 0
2023-05-03T08:19:11.072699037Z -------------------------------------
2023-05-03T08:19:11.072712412Z
2023-05-03T08:19:11.074723593Z s6-rc: info: service prepare successfully started
2023-05-03T08:19:11.075342181Z s6-rc: info: service nginx: starting
2023-05-03T08:19:11.076481314Z s6-rc: info: service frontend: starting
2023-05-03T08:19:11.078456161Z s6-rc: info: service backend: starting
2023-05-03T08:19:11.087500016Z s6-rc: info: service nginx successfully started
2023-05-03T08:19:11.096041159Z s6-rc: info: service frontend successfully started
2023-05-03T08:19:11.101210778Z ❯ Starting nginx ...
2023-05-03T08:19:11.106811901Z s6-rc: info: service backend successfully started
2023-05-03T08:19:11.113453322Z s6-rc: info: service legacy-services: starting
2023-05-03T08:19:11.118795859Z + . /bin/common.sh
2023-05-03T08:19:11.119571448Z ++ set -e
2023-05-03T08:19:11.120214078Z ++ CYAN='\E[1;36m'
2023-05-03T08:19:11.120905332Z ++ BLUE='\E[1;34m'
2023-05-03T08:19:11.121627546Z ++ YELLOW='\E[1;33m'
2023-05-03T08:19:11.122438843Z ++ RED='\E[1;31m'
2023-05-03T08:19:11.123029139Z ++ RESET='\E[0m'
2023-05-03T08:19:11.123788686Z ++ export CYAN BLUE YELLOW RED RESET
2023-05-03T08:19:11.124524816Z ++ PUID=0
2023-05-03T08:19:11.125140279Z ++ PGID=0
2023-05-03T08:19:11.125807658Z ++ [[ 0 -ne 0 ]]
2023-05-03T08:19:11.126353329Z ++ export PUID PGID
2023-05-03T08:19:11.127318919Z + cd /app
2023-05-03T08:19:11.128200300Z + log_info 'Starting backend ...'
2023-05-03T08:19:11.128778179Z + echo -e '\E[1;34m❯ \E[1;36mStarting backend ...\E[0m'
2023-05-03T08:19:11.129599810Z ❯ Starting backend ...
2023-05-03T08:19:11.130546316Z + '[' '' = true ']'
2023-05-03T08:19:11.132180036Z + :
2023-05-03T08:19:11.133212418Z + s6-setuidgid npmuser bash -c 'export HOME=/tmp/npmuserhome;node --abort_on_uncaught_exception --max_old_space_size=250 index.js'
2023-05-03T08:19:11.188379054Z s6-rc: info: service legacy-services successfully started
2023-05-03T08:19:14.621339866Z [5/3/2023] [8:19:14 AM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
2023-05-03T08:19:21.896527555Z [5/3/2023] [8:19:21 AM] [Migrate ] › ℹ info Current database version: none
2023-05-03T08:19:22.068220367Z [5/3/2023] [8:19:22 AM] [Setup ] › ℹ info Logrotate Timer initialized
2023-05-03T08:19:22.228422294Z [5/3/2023] [8:19:22 AM] [Setup ] › ⚠ warning Error: Command failed: logrotate /etc/logrotate.d/nginx-proxy-manager
2023-05-03T08:19:22.228538211Z error: skipping "/data/logs/fallback_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228559253Z error: skipping "/data/logs/proxy-host-10_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228575670Z error: skipping "/data/logs/proxy-host-11_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228592629Z error: skipping "/data/logs/proxy-host-12_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228609504Z error: skipping "/data/logs/proxy-host-13_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228689338Z error: skipping "/data/logs/proxy-host-14_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228706171Z error: skipping "/data/logs/proxy-host-15_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228729255Z error: skipping "/data/logs/proxy-host-16_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228745421Z error: skipping "/data/logs/proxy-host-17_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228761088Z error: skipping "/data/logs/proxy-host-18_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228776755Z error: skipping "/data/logs/proxy-host-19_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228792130Z error: skipping "/data/logs/proxy-host-1_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228807464Z error: skipping "/data/logs/proxy-host-20_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.228822505Z error: skipping "/data/logs/proxy-host-21_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229039424Z error: skipping "/data/logs/proxy-host-22_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229075299Z error: skipping "/data/logs/proxy-host-23_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229093341Z error: skipping "/data/logs/proxy-host-24_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229165258Z error: skipping "/data/logs/proxy-host-2_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229184008Z error: skipping "/data/logs/proxy-host-3_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229199717Z error: skipping "/data/logs/proxy-host-4_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229215342Z error: skipping "/data/logs/proxy-host-5_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229232300Z error: skipping "/data/logs/proxy-host-6_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229256967Z error: skipping "/data/logs/proxy-host-7_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229272301Z error: skipping "/data/logs/proxy-host-8_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229287551Z error: skipping "/data/logs/proxy-host-9_access.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229303051Z error: skipping "/data/logs/fallback_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229318426Z error: skipping "/data/logs/proxy-host-10_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229333801Z error: skipping "/data/logs/proxy-host-11_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229348885Z error: skipping "/data/logs/proxy-host-12_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229397552Z error: skipping "/data/logs/proxy-host-13_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229419802Z error: skipping "/data/logs/proxy-host-14_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229438260Z error: skipping "/data/logs/proxy-host-15_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229453802Z error: skipping "/data/logs/proxy-host-16_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229468886Z error: skipping "/data/logs/proxy-host-17_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229484844Z error: skipping "/data/logs/proxy-host-18_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229500636Z error: skipping "/data/logs/proxy-host-19_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229515803Z error: skipping "/data/logs/proxy-host-1_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229531178Z error: skipping "/data/logs/proxy-host-20_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229549011Z error: skipping "/data/logs/proxy-host-21_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229565011Z error: skipping "/data/logs/proxy-host-22_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229582387Z error: skipping "/data/logs/proxy-host-23_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229622554Z error: skipping "/data/logs/proxy-host-24_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229639095Z error: skipping "/data/logs/proxy-host-2_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229655095Z error: skipping "/data/logs/proxy-host-3_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229672054Z error: skipping "/data/logs/proxy-host-4_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229687096Z error: skipping "/data/logs/proxy-host-5_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229707388Z error: skipping "/data/logs/proxy-host-6_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229723513Z error: skipping "/data/logs/proxy-host-7_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229739055Z error: skipping "/data/logs/proxy-host-8_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229754388Z error: skipping "/data/logs/proxy-host-9_error.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
2023-05-03T08:19:22.229769221Z
2023-05-03T08:19:22.229781805Z at ChildProcess.exithandler (node:child_process:402:12)
2023-05-03T08:19:22.229794555Z at ChildProcess.emit (node:events:513:28)
2023-05-03T08:19:22.229807013Z at maybeClose (node:internal/child_process:1100:16)
2023-05-03T08:19:22.229819472Z at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)
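For reference, the logrotate error message itself names the fix: either tighten the permissions on /data/logs or tell logrotate which user/group to rotate as via the "su" directive. A minimal sketch of what that could look like (the path and the weekly/rotate values are illustrative assumptions, not taken from the image's actual config):

```
# /etc/logrotate.d/nginx-proxy-manager (sketch, not the shipped file)
# "su root root" tells logrotate to rotate as root:root, so it stops
# refusing a /data/logs directory that is group- or world-writable.
/data/logs/*.log {
    su root root
    weekly
    rotate 4
    missingok
    compress
}
```

Alternatively, `chmod 755 /data/logs` on the host volume should also satisfy logrotate's parent-directory check.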
2023-05-03T08:19:22.231501069Z [5/3/2023] [8:19:22 AM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
2023-05-03T08:19:22.233192458Z [5/3/2023] [8:19:22 AM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
2023-05-03T08:19:22.570887910Z [5/3/2023] [8:19:22 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
2023-05-03T08:19:22.690744093Z [5/3/2023] [8:19:22 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
2023-05-03T08:19:22.894402790Z [5/3/2023] [8:19:22 AM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
2023-05-03T08:19:22.898048528Z [5/3/2023] [8:19:22 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
2023-05-03T08:19:22.927075680Z [5/3/2023] [8:19:22 AM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
2023-05-03T08:19:22.943693815Z [5/3/2023] [8:19:22 AM] [Global ] › ℹ info Backend PID 152 listening on port 3000 ...
2023-05-03T08:19:28.520534389Z [5/3/2023] [8:19:28 AM] [Nginx ] › ℹ info Reloading Nginx
2023-05-03T08:19:29.041408456Z [5/3/2023] [8:19:29 AM] [SSL ] › ℹ info Renew Complete
Synology DSM 7
Compose snippet:
nginx_proxy_manager:
  image: jc21/nginx-proxy-manager:github-s6-verbose
  #image: jc21/nginx-proxy-manager:2.9.22
  #image: jc21/nginx-proxy-manager:latest
  container_name: nginx_proxy_manager
  profiles:
    - all
    - core
  network_mode: synobridge
  environment:
    - TZ=America/New_York
    - PUID=0
    - PGID=0
    # - S6_CMD_WAIT_FOR_SERVICES_MAXTIME=60000
  ports:
    - "8341:80"
    - "81:81"
    - "8766:443"
  volumes:
    - /volume1/docker/npm/config.json:/app/config/production.json
    - /volume1/docker/npm/data:/data
    - /volume1/docker/npm/letsencrypt:/etc/letsencrypt
  restart: unless-stopped
Logs (newest on top). I deleted the old Docker container and rebooted the system; Compose runs on startup. The container fails to fully start. No idea why the logs aren't verbose...
2023-05-03T10:55:29.733293858Z,stderr,/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
2023-05-03T10:55:29.656262895Z,stderr,s6-rc: warning: unable to start service prepare: command exited 111
2023-05-03T10:55:29.655703172Z,stderr,s6-sudoc: fatal: unable to get exit status from server: Operation timed out
2023-05-03T10:55:28.200725159Z,stderr,++ usermod -G 0 npmuser
2023-05-03T10:55:24.903663973Z,stdout,[1;34m❯ [1;36mConfiguring npmuser ...[0m
2023-05-03T10:55:24.903619144Z,stderr,++ useradd -o -u 0 -U -d /tmp/npmuserhome -s /bin/false npmuser
2023-05-03T10:55:24.903592244Z,stderr,id: 'npmuser': no such user
2023-05-03T10:55:24.903555880Z,stderr,++ id -u npmuser
2023-05-03T10:55:24.903519780Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
2023-05-03T10:55:24.903387467Z,stderr,++ log_info 'Configuring npmuser ...'
2023-05-03T10:55:24.789618966Z,stderr,s6-rc: info: service prepare: starting
2023-05-03T10:55:24.789487489Z,stderr,s6-rc: info: service legacy-cont-init successfully started
2023-05-03T10:55:24.766363588Z,stderr,s6-rc: info: service legacy-cont-init: starting
2023-05-03T10:55:24.766189247Z,stderr,s6-rc: info: service fix-attrs successfully started
2023-05-03T10:55:24.657996454Z,stderr,s6-rc: info: service fix-attrs: starting
2023-05-03T10:55:24.657519180Z,stderr,s6-rc: info: service s6rc-oneshot-runner successfully started
2023-05-03T10:55:24.652585294Z,stderr,s6-rc: info: service s6rc-oneshot-runner: starting
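The failure in the log above is the `prepare` oneshot timing out under s6 (`s6-sudoc: fatal: unable to get exit status from server: Operation timed out`, then `command exited 111`). One thing worth trying, already hinted at by the commented-out line in my compose snippet (this is a guess on my part, not a confirmed fix), is raising the s6 timeout so a slow NAS boot doesn't kill the oneshot:

```
environment:
  - TZ=America/New_York
  - PUID=0
  - PGID=0
  # Give slow-booting hosts more time before s6 gives up on oneshots
  - S6_CMD_WAIT_FOR_SERVICES_MAXTIME=60000
```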
Restarted the container manually and all is good. This definitely still seems like a system-boot timing issue.
2023-05-03T11:02:35.117275621Z,stdout,[5/3/2023] [7:02:35 AM] [SSL ] › ℹ info Renew Complete
2023-05-03T11:02:35.061748478Z,stdout,[5/3/2023] [7:02:35 AM] [Nginx ] › ℹ info Reloading Nginx
2023-05-03T11:02:33.491577733Z,stdout,[5/3/2023] [7:02:33 AM] [Global ] › ℹ info Backend PID 124 listening on port 3000 ...
2023-05-03T11:02:33.487561131Z,stdout,[5/3/2023] [7:02:33 AM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
2023-05-03T11:02:33.482296379Z,stdout,[5/3/2023] [7:02:33 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
2023-05-03T11:02:33.481841217Z,stdout,[5/3/2023] [7:02:33 AM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
2023-05-03T11:02:33.375416663Z,stdout,[5/3/2023] [7:02:33 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
2023-05-03T11:02:33.268041428Z,stdout,[5/3/2023] [7:02:33 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
2023-05-03T11:02:33.076261815Z,stdout,[5/3/2023] [7:02:33 AM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
2023-05-03T11:02:33.076225215Z,stdout,[5/3/2023] [7:02:33 AM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
2023-05-03T11:02:33.076081702Z,stdout,[5/3/2023] [7:02:33 AM] [Setup ] › ℹ info Logrotate completed.
2023-05-03T11:02:32.762710650Z,stdout,[5/3/2023] [7:02:32 AM] [Setup ] › ℹ info Logrotate Timer initialized
2023-05-03T11:02:32.762226104Z,stdout,[5/3/2023] [7:02:32 AM] [Setup ] › ℹ info Added Certbot plugins certbot-dns-namecheap~=1.0.0
2023-05-03T11:02:23.665880851Z,stdout,[5/3/2023] [7:02:23 AM] [Migrate ] › ℹ info Current database version: none
2023-05-03T11:02:19.604041718Z,stdout,[5/3/2023] [7:02:19 AM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
2023-05-03T11:02:18.812638920Z,stderr,nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /data/nginx/proxy_host/3.conf:52
2023-05-03T11:02:18.812464210Z,stderr,nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /data/nginx/proxy_host/3.conf:51
2023-05-03T11:02:18.555060631Z,stderr,s6-rc: info: service legacy-services successfully started
2023-05-03T11:02:18.530945162Z,stdout,[1;34m❯ [1;36mStarting backend ...[0m
2023-05-03T11:02:18.531345351Z,stderr,+ s6-setuidgid npmuser bash -c 'export HOME=/tmp/npmuserhome;node --abort_on_uncaught_exception --max_old_space_size=250 index.js'
2023-05-03T11:02:18.531322680Z,stderr,+ :
2023-05-03T11:02:18.531298919Z,stderr,+ '[' '' = true ']'
2023-05-03T11:02:18.531270121Z,stderr,+ echo -e '\E[1;34m❯ \E[1;36mStarting backend ...\E[0m'
2023-05-03T11:02:18.531242395Z,stderr,+ log_info 'Starting backend ...'
2023-05-03T11:02:18.531214735Z,stderr,+ cd /app
2023-05-03T11:02:18.531177603Z,stderr,++ export PUID PGID
2023-05-03T11:02:18.531109099Z,stderr,++ [[ 0 -ne 0 ]]
2023-05-03T11:02:18.531041683Z,stderr,++ PGID=0
2023-05-03T11:02:18.530971781Z,stderr,++ PUID=0
2023-05-03T11:02:18.530893520Z,stderr,++ export CYAN BLUE YELLOW RED RESET
2023-05-03T11:02:18.530824509Z,stderr,++ RESET='\E[0m'
2023-05-03T11:02:18.530794178Z,stderr,++ RED='\E[1;31m'
2023-05-03T11:02:18.530723690Z,stderr,++ YELLOW='\E[1;33m'
2023-05-03T11:02:18.530686197Z,stderr,++ BLUE='\E[1;34m'
2023-05-03T11:02:18.530614647Z,stderr,++ CYAN='\E[1;36m'
2023-05-03T11:02:18.530584754Z,stderr,++ set -e
2023-05-03T11:02:18.530491158Z,stderr,+ . /bin/common.sh
2023-05-03T11:02:18.528998122Z,stderr,s6-rc: info: service legacy-services: starting
2023-05-03T11:02:18.528428677Z,stderr,s6-rc: info: service backend successfully started
2023-05-03T11:02:18.526679580Z,stdout,[1;34m❯ [1;36mStarting nginx ...[0m
2023-05-03T11:02:18.525765468Z,stderr,s6-rc: info: service frontend successfully started
2023-05-03T11:02:18.524886762Z,stderr,s6-rc: info: service nginx successfully started
2023-05-03T11:02:18.523642057Z,stderr,s6-rc: info: service backend: starting
2023-05-03T11:02:18.523397195Z,stderr,s6-rc: info: service frontend: starting
2023-05-03T11:02:18.523241831Z,stdout,
2023-05-03T11:02:18.523214280Z,stdout,-------------------------------------
2023-05-03T11:02:18.523173604Z,stderr,s6-rc: info: service nginx: starting
2023-05-03T11:02:18.523139491Z,stderr,s6-rc: info: service prepare successfully started
2023-05-03T11:02:18.523103051Z,stderr,'
2023-05-03T11:02:18.523081001Z,stderr,-------------------------------------
2023-05-03T11:02:18.523056211Z,stderr,Group ID: 0
2023-05-03T11:02:18.523033080Z,stderr,User ID: 0
2023-05-03T11:02:18.523003348Z,stderr,-------------------------------------
2023-05-03T11:02:18.522980437Z,stderr,|_| \_|_| |_| |_|
2023-05-03T11:02:18.522949689Z,stderr,| |\ | __/| | | |
2023-05-03T11:02:18.522927130Z,stderr,| \| | |_) | |\/| |
2023-05-03T11:02:18.522903928Z,stderr,| \ | | _ \| \/ |
2023-05-03T11:02:18.522880764Z,stderr, _ _ ____ __ __
2023-05-03T11:02:18.522853601Z,stderr,-------------------------------------
2023-05-03T11:02:18.522830243Z,stderr,++ echo '
2023-05-03T11:02:18.522374018Z,stderr,++ set -e
2023-05-03T11:02:18.522750811Z,stdout,Group ID: 0
2023-05-03T11:02:18.522700915Z,stdout,User ID: 0
2023-05-03T11:02:18.522674635Z,stdout,-------------------------------------
2023-05-03T11:02:18.522593876Z,stdout,|_| \_|_| |_| |_|
2023-05-03T11:02:18.522568699Z,stdout,| |\ | __/| | | |
2023-05-03T11:02:18.522517450Z,stdout,| \| | |_) | |\/| |
2023-05-03T11:02:18.522491079Z,stdout,| \ | | _ \| \/ |
2023-05-03T11:02:18.522436274Z,stdout, _ _ ____ __ __
2023-05-03T11:02:18.522407578Z,stdout,-------------------------------------
2023-05-03T11:02:18.522356778Z,stdout,
2023-05-03T11:02:18.522202338Z,stderr,+ . /etc/s6-overlay/s6-rc.d/prepare/90-banner.sh
2023-05-03T11:02:18.513286012Z,stderr,+++ grep '__FILE$'
2023-05-03T11:02:18.513153363Z,stderr,+++ find /var/run/s6/container_environment/
2023-05-03T11:02:18.512545648Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mDocker secrets ...\E[0m'
2023-05-03T11:02:18.512532457Z,stdout,[1;34m❯ [1;36mDocker secrets ...[0m
2023-05-03T11:02:18.512495570Z,stderr,++ log_info 'Docker secrets ...'
2023-05-03T11:02:18.512375083Z,stderr,++ set -x
2023-05-03T11:02:18.512335947Z,stderr,++ set -e
2023-05-03T11:02:18.511946398Z,stderr,+ . /etc/s6-overlay/s6-rc.d/prepare/60-secrets.sh
2023-05-03T11:02:18.510832686Z,stderr,++ chown -R 0:0 /data/nginx
2023-05-03T11:02:18.508892893Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/5.conf
2023-05-03T11:02:18.508865086Z,stderr,++ echo '- /data/nginx/proxy_host/5.conf'
2023-05-03T11:02:18.508727518Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.508718399Z,stdout,- /data/nginx/proxy_host/5.conf
2023-05-03T11:02:18.506735864Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/3.conf
2023-05-03T11:02:18.506708796Z,stderr,++ echo '- /data/nginx/proxy_host/3.conf'
2023-05-03T11:02:18.506676476Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.506526034Z,stdout,- /data/nginx/proxy_host/3.conf
2023-05-03T11:02:18.495164508Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/4.conf
2023-05-03T11:02:18.495052302Z,stderr,++ echo '- /data/nginx/proxy_host/4.conf'
2023-05-03T11:02:18.494975466Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.494962350Z,stdout,- /data/nginx/proxy_host/4.conf
2023-05-03T11:02:18.494379571Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/1.conf
2023-05-03T11:02:18.494356552Z,stderr,++ echo '- /data/nginx/proxy_host/1.conf'
2023-05-03T11:02:18.494333868Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.494305908Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /data/nginx/proxy_host/2.conf
2023-05-03T11:02:18.494282689Z,stderr,++ echo '- /data/nginx/proxy_host/2.conf'
2023-05-03T11:02:18.494259643Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.494236055Z,stderr,++ SED_REGEX='s/^(\s*)#listen \[::\]/\1listen [::]/g'
2023-05-03T11:02:18.494211568Z,stderr,++ echo 'Enabling IPV6 in hosts in: /data/nginx'
2023-05-03T11:02:18.494189227Z,stderr,++ '[' '' == yes ']'
2023-05-03T11:02:18.494163759Z,stderr,++ '[' '' == 1 ']'
2023-05-03T11:02:18.494142379Z,stderr,++ '[' '' == on ']'
2023-05-03T11:02:18.494120504Z,stderr,++ '[' '' == true ']'
2023-05-03T11:02:18.494098820Z,stderr,++ SED_REGEX=
2023-05-03T11:02:18.494075283Z,stderr,/data/nginx/proxy_host/5.conf'
2023-05-03T11:02:18.494046180Z,stderr,/data/nginx/proxy_host/3.conf
2023-05-03T11:02:18.494021428Z,stderr,/data/nginx/proxy_host/4.conf
2023-05-03T11:02:18.493987720Z,stderr,/data/nginx/proxy_host/1.conf
2023-05-03T11:02:18.493961522Z,stderr,++ FILES='/data/nginx/proxy_host/2.conf
2023-05-03T11:02:18.493928644Z,stderr,+++ find /data/nginx -type f -name '*.conf'
2023-05-03T11:02:18.493905837Z,stderr,++ process_folder /data/nginx
2023-05-03T11:02:18.493881435Z,stderr,++ chown -R 0:0 /etc/nginx/conf.d
2023-05-03T11:02:18.493855364Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/production.conf
2023-05-03T11:02:18.493826413Z,stderr,++ echo '- /etc/nginx/conf.d/production.conf'
2023-05-03T11:02:18.493802480Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493776061Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/resolvers.conf
2023-05-03T11:02:18.493748611Z,stderr,++ echo '- /etc/nginx/conf.d/include/resolvers.conf'
2023-05-03T11:02:18.493725561Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493699010Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/ssl-ciphers.conf
2023-05-03T11:02:18.493674445Z,stderr,++ echo '- /etc/nginx/conf.d/include/ssl-ciphers.conf'
2023-05-03T11:02:18.493645629Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493618873Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/proxy.conf
2023-05-03T11:02:18.493592980Z,stderr,++ echo '- /etc/nginx/conf.d/include/proxy.conf'
2023-05-03T11:02:18.493569863Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493542208Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-05-03T11:02:18.493516377Z,stderr,++ echo '- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf'
2023-05-03T11:02:18.493488974Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493462809Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/ip_ranges.conf
2023-05-03T11:02:18.493438861Z,stderr,++ echo '- /etc/nginx/conf.d/include/ip_ranges.conf'
2023-05-03T11:02:18.493415630Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493388538Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/force-ssl.conf
2023-05-03T11:02:18.493363424Z,stderr,++ echo '- /etc/nginx/conf.d/include/force-ssl.conf'
2023-05-03T11:02:18.493334206Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.493304875Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/block-exploits.conf
2023-05-03T11:02:18.493039879Z,stderr,++ echo '- /etc/nginx/conf.d/include/block-exploits.conf'
2023-05-03T11:02:18.493253321Z,stdout,- /data/nginx/proxy_host/1.conf
2023-05-03T11:02:18.493226566Z,stdout,- /data/nginx/proxy_host/2.conf
2023-05-03T11:02:18.493193915Z,stdout,Enabling IPV6 in hosts in: /data/nginx
2023-05-03T11:02:18.493170310Z,stdout,- /etc/nginx/conf.d/production.conf
2023-05-03T11:02:18.493145263Z,stdout,- /etc/nginx/conf.d/include/resolvers.conf
2023-05-03T11:02:18.493116324Z,stdout,- /etc/nginx/conf.d/include/ssl-ciphers.conf
2023-05-03T11:02:18.493092413Z,stdout,- /etc/nginx/conf.d/include/proxy.conf
2023-05-03T11:02:18.493064746Z,stdout,- /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-05-03T11:02:18.491962391Z,stdout,- /etc/nginx/conf.d/include/ip_ranges.conf
2023-05-03T11:02:18.492996422Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.492969380Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/include/assets.conf
2023-05-03T11:02:18.492944102Z,stderr,++ echo '- /etc/nginx/conf.d/include/assets.conf'
2023-05-03T11:02:18.492915284Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.492881844Z,stderr,++ sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /etc/nginx/conf.d/default.conf
2023-05-03T11:02:18.492853918Z,stderr,++ echo '- /etc/nginx/conf.d/default.conf'
2023-05-03T11:02:18.492825060Z,stderr,++ for FILE in $FILES
2023-05-03T11:02:18.492799014Z,stderr,++ SED_REGEX='s/^(\s*)#listen \[::\]/\1listen [::]/g'
2023-05-03T11:02:18.492770419Z,stderr,++ echo 'Enabling IPV6 in hosts in: /etc/nginx/conf.d'
2023-05-03T11:02:18.492746608Z,stderr,++ '[' '' == yes ']'
2023-05-03T11:02:18.492724447Z,stderr,++ '[' '' == 1 ']'
2023-05-03T11:02:18.492701036Z,stderr,++ '[' '' == on ']'
2023-05-03T11:02:18.492670993Z,stderr,++ '[' '' == true ']'
2023-05-03T11:02:18.492648497Z,stderr,++ SED_REGEX=
2023-05-03T11:02:18.492622877Z,stderr,/etc/nginx/conf.d/production.conf'
2023-05-03T11:02:18.492593864Z,stderr,/etc/nginx/conf.d/include/resolvers.conf
2023-05-03T11:02:18.492568566Z,stderr,/etc/nginx/conf.d/include/ssl-ciphers.conf
2023-05-03T11:02:18.492542135Z,stderr,/etc/nginx/conf.d/include/proxy.conf
2023-05-03T11:02:18.492498692Z,stderr,/etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-05-03T11:02:18.492473367Z,stderr,/etc/nginx/conf.d/include/ip_ranges.conf
2023-05-03T11:02:18.492443722Z,stderr,/etc/nginx/conf.d/include/force-ssl.conf
2023-05-03T11:02:18.492413316Z,stderr,/etc/nginx/conf.d/include/block-exploits.conf
2023-05-03T11:02:18.492384044Z,stderr,/etc/nginx/conf.d/include/assets.conf
2023-05-03T11:02:18.492356388Z,stderr,++ FILES='/etc/nginx/conf.d/default.conf
2023-05-03T11:02:18.492276408Z,stderr,+++ find /etc/nginx/conf.d -type f -name '*.conf'
2023-05-03T11:02:18.492244750Z,stderr,++ process_folder /etc/nginx/conf.d
2023-05-03T11:02:18.492219464Z,stderr,++ DISABLE_IPV6=
2023-05-03T11:02:18.492187083Z,stderr,+++ tr '[:upper:]' '[:lower:]'
2023-05-03T11:02:18.492081527Z,stderr,+++ echo ''
2023-05-03T11:02:18.492052523Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mIPv6 ...\E[0m'
2023-05-03T11:02:18.492019965Z,stderr,++ log_info 'IPv6 ...'
2023-05-03T11:02:18.491785520Z,stderr,++ set -x
2023-05-03T11:02:18.491932084Z,stdout,- /etc/nginx/conf.d/include/force-ssl.conf
2023-05-03T11:02:18.491904211Z,stdout,- /etc/nginx/conf.d/include/block-exploits.conf
2023-05-03T11:02:18.491872902Z,stdout,- /etc/nginx/conf.d/include/assets.conf
2023-05-03T11:02:18.491847977Z,stdout,- /etc/nginx/conf.d/default.conf
2023-05-03T11:02:18.491816321Z,stdout,Enabling IPV6 in hosts in: /etc/nginx/conf.d
2023-05-03T11:02:17.748784928Z,stdout,[1;34m❯ [1;36mIPv6 ...[0m
2023-05-03T11:02:18.491741728Z,stderr,++ set -e
2023-05-03T11:02:18.491669274Z,stderr,+ . /etc/s6-overlay/s6-rc.d/prepare/50-ipv6.sh
2023-05-03T11:02:17.612940047Z,stderr,++ echo resolver '127.0.0.11 valid=10s;'
2023-05-03T11:02:17.572852353Z,stderr,+++ awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf
2023-05-03T11:02:17.572783563Z,stderr,++ '[' '' == yes ']'
2023-05-03T11:02:17.572736334Z,stderr,++ '[' '' == 1 ']'
2023-05-03T11:02:17.572698653Z,stderr,++ '[' '' == on ']'
2023-05-03T11:02:17.572530921Z,stderr,++ '[' '' == true ']'
2023-05-03T11:02:17.572445140Z,stderr,++ DISABLE_IPV6=
2023-05-03T11:02:17.571047970Z,stderr,+++ tr '[:upper:]' '[:lower:]'
2023-05-03T11:02:17.570974591Z,stderr,+++ echo ''
2023-05-03T11:02:17.570575355Z,stdout,[1;34m❯ [1;36mDynamic resolvers ...[0m
2023-05-03T11:02:17.570622785Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mDynamic resolvers ...\E[0m'
2023-05-03T11:02:17.570536149Z,stderr,++ log_info 'Dynamic resolvers ...'
2023-05-03T11:02:17.570498651Z,stderr,++ set -x
2023-05-03T11:02:17.570294905Z,stderr,++ set -e
2023-05-03T11:02:17.570129142Z,stderr,+ . /etc/s6-overlay/s6-rc.d/prepare/40-dynamic.sh
2023-05-03T11:02:17.568502898Z,stderr,++ chown -R 0:0 /etc/nginx/nginx /etc/nginx/nginx.conf /etc/nginx/conf.d
2023-05-03T11:02:17.549990042Z,stderr,++ chown -R 0:0 /data /etc/letsencrypt /run/nginx /tmp/nginx /var/cache/nginx /var/lib/logrotate /var/lib/nginx /var/log/nginx
2023-05-03T11:02:17.548993849Z,stderr,++ chown root /tmp/nginx
2023-05-03T11:02:17.548965863Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mSetting ownership ...\E[0m'
2023-05-03T11:02:17.548926526Z,stdout,[1;34m❯ [1;36mSetting ownership ...[0m
2023-05-03T11:02:17.548883515Z,stderr,++ log_info 'Setting ownership ...'
2023-05-03T11:02:17.548853149Z,stderr,++ set -x
2023-05-03T11:02:17.548820206Z,stderr,++ set -e
2023-05-03T11:02:17.548738898Z,stderr,+ . /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
2023-05-03T11:02:17.547920836Z,stderr,++ chmod 644 /etc/logrotate.d/nginx-proxy-manager
2023-05-03T11:02:17.547115362Z,stderr,++ chmod -R 777 /var/cache/nginx
2023-05-03T11:02:17.545761947Z,stderr,++ chmod 777 /var/log/nginx/error.log
2023-05-03T11:02:17.526262734Z,stderr,++ touch /var/log/nginx/error.log
2023-05-03T11:02:17.493346631Z,stdout,[1;34m❯ [1;36mChecking paths ...[0m
2023-05-03T11:02:17.493265380Z,stderr,++ mkdir -p /data/nginx /data/custom_ssl /data/logs /data/access /data/nginx/default_host /data/nginx/default_www /data/nginx/proxy_host /data/nginx/redirection_host /data/nginx/stream /data/nginx/dead_host /data/nginx/temp /data/letsencrypt-acme-challenge /run/nginx /tmp/nginx/body /var/log/nginx /var/lib/nginx/cache/public /var/lib/nginx/cache/private /var/cache/nginx/proxy_temp
2023-05-03T11:02:17.493239219Z,stderr,++ '[' '!' -d /etc/letsencrypt ']'
2023-05-03T11:02:17.493194588Z,stderr,++ '[' '!' -d /data ']'
2023-05-03T11:02:17.493156708Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mChecking paths ...\E[0m'
2023-05-03T11:02:17.492996458Z,stderr,++ log_info 'Checking paths ...'
2023-05-03T11:02:17.492968523Z,stderr,++ set -x
2023-05-03T11:02:17.492837922Z,stderr,++ set -e
2023-05-03T11:02:17.477411709Z,stderr,+ . /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
2023-05-03T11:02:17.470723641Z,stderr,++ chown -R 0:0 /tmp/npmuserhome
2023-05-03T11:02:17.433321543Z,stderr,++ mkdir -p /tmp/npmuserhome
2023-05-03T11:02:16.792436235Z,stderr,++ groupmod -o -g 0 npmuser
2023-05-03T11:02:16.789516316Z,stderr,++ usermod -G 0 npmuser
2023-05-03T11:02:16.789312508Z,stderr,usermod: no changes
2023-05-03T11:02:16.787892967Z,stderr,++ usermod -u 0 npmuser
2023-05-03T11:02:16.787756764Z,stdout,0
2023-05-03T11:02:16.786864590Z,stderr,++ id -u npmuser
2023-05-03T11:02:16.786781777Z,stderr,++ echo -e '\E[1;34m❯ \E[1;36mConfiguring npmuser ...\E[0m'
2023-05-03T11:02:16.786745700Z,stderr,++ log_info 'Configuring npmuser ...'
2023-05-03T11:02:16.786574032Z,stdout,[1;34m❯ [1;36mConfiguring npmuser ...[0m
2023-05-03T11:02:16.782329882Z,stderr,s6-rc: info: service prepare: starting
2023-05-03T11:02:16.782236231Z,stderr,s6-rc: info: service legacy-cont-init successfully started
2023-05-03T11:02:16.779440928Z,stderr,s6-rc: info: service legacy-cont-init: starting
2023-05-03T11:02:16.779342805Z,stderr,s6-rc: info: service fix-attrs successfully started
2023-05-03T11:02:16.776350946Z,stderr,s6-rc: info: service fix-attrs: starting
2023-05-03T11:02:16.776257791Z,stderr,s6-rc: info: service s6rc-oneshot-runner successfully started
2023-05-03T11:02:16.773918801Z,stderr,s6-rc: info: service s6rc-oneshot-runner: starting
2023-05-03T11:01:58.780694234Z,stderr,s6-rc: info: service s6rc-oneshot-runner successfully stopped
2023-05-03T11:01:58.778922960Z,stderr,s6-rc: info: service s6rc-oneshot-runner: stopping
2023-05-03T11:01:58.778846984Z,stderr,s6-rc: info: service fix-attrs successfully stopped
2023-05-03T11:01:58.777877135Z,stderr,s6-rc: info: service fix-attrs: stopping
2023-05-03T11:01:58.777745345Z,stderr,s6-rc: info: service legacy-cont-init successfully stopped
2023-05-03T11:01:58.773658521Z,stderr,s6-rc: info: service legacy-cont-init: stopping
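A note on reading the trace above: the `+`/`++` prefixes come from `set -x`, which the prepare scripts enable. Each command is echoed to stderr with one extra `+` per level of indirection (a sourced script or a command substitution). A minimal illustration, unrelated to NPM itself:

```shell
# set -x prints each command to stderr with one '+' per nesting level;
# command substitution adds a level, as seen throughout the NPM trace.
bash -c 'set -x; v=$(echo hi); echo "$v"' 2>/tmp/trace.log
cat /tmp/trace.log
# /tmp/trace.log contains: "++ echo hi", "+ v=hi", "+ echo hi"
```

This is why, for example, the `50-ipv6.sh` lines appear at `++` depth: they run from a script sourced by the `prepare` service.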
If I delete the container again and run my docker compose up manually instead of on-boot, NPM starts up fine, similar to what I see in the second set of logs above.
EDIT: I also tried adding a 5-minute sleep before bringing up NPM on boot, thinking it may be a race condition with some process it needs, but I see the same problem.
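As a side note on the `50-ipv6.sh` step visible in the logs: when IPv6 is not disabled, it un-comments `#listen [::]` directives in every nginx `*.conf` it finds. A minimal reproduction of that substitution (the sed command is taken verbatim from the trace; the temp file path is made up for illustration):

```shell
# Reproduce the IPv6-enable substitution from the startup trace.
printf '    #listen [::]:443 ssl;\n' > /tmp/site.conf
sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' /tmp/site.conf
cat /tmp/site.conf   # -> "    listen [::]:443 ssl;"
```

The capture group preserves the original indentation while dropping only the `#`.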
Checklist

- Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image?

Describe the bug

The :latest and 2.10.0 images fail to start, either with an existing configuration or with a clean install.

Nginx Proxy Manager Version

2.10.0

To Reproduce

Steps to reproduce the behavior:

Expected behavior

The container should start.

Screenshots

Operating System

Rpi

Additional context