Closed: @indapublic closed this issue 9 months ago.
Hi. This does not look like a container-related issue but rather a host OS related issue. Is it okay if I convert this issue to a discussion?
@buchdag No problem, of course
Switched to 0.8.0 (which has been running for two years on my previous server):
nginx-proxy-le | [Wed Sep 15 11:39:36 UTC 2021] Getting domain auth token for each domain
nginx-proxy-le | [Wed Sep 15 11:39:38 UTC 2021] Getting webroot for domain='subdomain.mydomain.com'
nginx-proxy-le | [Wed Sep 15 11:39:38 UTC 2021] Verifying: subdomain.mydomain.com
nginx-proxy_1 | WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
nginx-proxy_1 | is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
nginx-proxy_1 | forego | starting dockergen.1 on port 5000
nginx-proxy_1 | forego | starting nginx.1 on port 5100
nginx-proxy_1 | dockergen.1 | 2021/09/15 11:39:22 Generated '/etc/nginx/conf.d/default.conf' from 5 containers
nginx-proxy_1 | dockergen.1 | 2021/09/15 11:39:22 Running 'nginx -s reload'
nginx-proxy_1 | dockergen.1 | 2021/09/15 11:39:23 Watching docker events
nginx-proxy_1 | dockergen.1 | 2021/09/15 11:39:23 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1 | 2021/09/15 11:39:32 [notice] 89#89: signal process started
nginx-proxy_1 | Generating DSA parameters, 4096 bit long prime
nginx-proxy_1 | dhparam generation complete, reloading nginx
Well, if going back to 0.8.0 fixes the problem, I'd rather keep this as an issue. Do you experience the issue with 0.9.0 as well?
Same issue here. I'll try going back to 0.8.0.
UPDATE: 0.9.1 is working, 0.9.2 and 0.9.3 are not.
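If it helps anyone bisecting tags like this, a quick shell sketch (the port and socket mount are the usual nginx-proxy defaults, adjust to your setup):

for tag in 0.9.1 0.9.2 0.9.3; do
  echo "--- nginxproxy/nginx-proxy:$tag ---"
  docker run -d --name nginx-proxy-test -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    nginxproxy/nginx-proxy:$tag
  sleep 10
  docker logs nginx-proxy-test 2>&1 | tail -n 5  # look for unexpected worker exits
  docker rm -f nginx-proxy-test
done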
Same issue, pulled from latest.
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
Docker version 20.10.6, build 370c289
0.9.1 throws an error on restart in relation to forego.
> Well, if going back to 0.8.0 fixes the problem, I'd rather keep this as an issue. Do you experience the issue with 0.9.0 as well?
Latest working version is 0.9.1 for me.
@halan, @etopian do you have more information I could work with? Host OS, Docker version? @etopian, container logs?
Are you all using the debian or alpine version of nginx-proxy?
The only change I see between https://github.com/nginx-proxy/nginx-proxy/compare/0.9.1...0.9.2 that could be relevant to this issue is the upgrade from nginx 1.19.10 to 1.21.0.
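To double-check that, the nginx version bundled in a given tag can be printed directly (a sketch; this bypasses the image entrypoint and assumes the nginx binary is on PATH, which it should be since the image is based on the official nginx image):

docker run --rm --entrypoint "" nginxproxy/nginx-proxy:0.9.1 nginx -v
docker run --rm --entrypoint "" nginxproxy/nginx-proxy:0.9.2 nginx -v
docker run --rm --entrypoint "" nginxproxy/nginx-proxy:0.9.2-alpine nginx -v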
Using the Debian version, when you stop the container, v0.9.1 outputs:
SIGQUIT: quit
PC=0x46bb01 m=0 sigcode=0
goroutine 0 [idle]:
runtime.futex(0x695550, 0x80, 0x0, 0x0, 0xffffffff00000000, 0x39, 0x8, 0x7f08975e65bb, 0x7fff3b49d1f8, 0x40bf1f, ...)
/usr/local/go/src/runtime/sys_linux_amd64.s:579 +0x21
runtime.futexsleep(0x695550, 0x0, 0xffffffffffffffff)
/usr/local/go/src/runtime/os_linux.go:44 +0x46
runtime.notesleep(0x695550)
/usr/local/go/src/runtime/lock_futex.go:159 +0x9f
runtime.mPark()
/usr/local/go/src/runtime/proc.go:1340 +0x39
runtime.stopm()
/usr/local/go/src/runtime/proc.go:2301 +0x92
runtime.findrunnable(0xc000025800, 0x0)
/usr/local/go/src/runtime/proc.go:2960 +0x72e
runtime.schedule()
/usr/local/go/src/runtime/proc.go:3169 +0x2d7
runtime.park_m(0xc00022cc00)
/usr/local/go/src/runtime/proc.go:3318 +0x9d
runtime.mcall(0x0)
/usr/local/go/src/runtime/asm_amd64.s:327 +0x5b
goroutine 1 [chan receive]:
main.runStart(0x690bc0, 0xc00018e170, 0x0, 0x0)
/go/forego/start.go:331 +0x518
main.main()
/go/forego/main.go:33 +0x26b
goroutine 33 [chan receive]:
main.(*Forego).monitorInterrupt(0xc00021c540)
/go/forego/start.go:157 +0x125
created by main.runStart
/go/forego/start.go:286 +0x1a7
goroutine 34 [IO wait]:
internal/poll.runtime_pollWait(0x7f08975eaf18, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000200258, 0x72, 0x1001, 0x1000, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000200240, 0xc0000dc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5
os.(*File).read(...)
/usr/local/go/src/os/file_posix.go:31
os.(*File).Read(0xc00020e040, 0xc0000dc000, 0x1000, 0x1000, 0x400, 0x55d9e0, 0x1)
/usr/local/go/src/os/file.go:117 +0x77
bufio.(*Reader).Read(0xc00024e728, 0xc0000de000, 0x400, 0x400, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:227 +0x222
main.(*OutletFactory).LineReader(0xc000214120, 0xc0002141c0, 0xc0002141b0, 0xb, 0x0, 0x5c0108, 0xc00020e040, 0x760000c00020bd00)
/go/forego/outlet.go:45 +0x2fb
created by main.(*Forego).startProcess
/go/forego/start.go:212 +0x437
goroutine 35 [IO wait]:
internal/poll.runtime_pollWait(0x7f08975ead48, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000200318, 0x72, 0x1001, 0x1000, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000200300, 0xc0000dd000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5
os.(*File).read(...)
/usr/local/go/src/os/file_posix.go:31
os.(*File).Read(0xc00020e050, 0xc0000dd000, 0x1000, 0x1000, 0x400, 0x55d9e0, 0x1)
/usr/local/go/src/os/file.go:117 +0x77
bufio.(*Reader).Read(0xc000185f28, 0xc000118800, 0x400, 0x400, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:227 +0x222
main.(*OutletFactory).LineReader(0xc000214120, 0xc0002141c0, 0xc0002141b0, 0xb, 0x0, 0x5c0108, 0xc00020e050, 0x760000c00020bd01)
/go/forego/outlet.go:45 +0x2fb
created by main.(*Forego).startProcess
/go/forego/start.go:213 +0x4d7
goroutine 20 [syscall]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:168 +0xa5
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
/usr/local/go/src/os/signal/signal.go:151 +0x45
goroutine 36 [semacquire]:
sync.runtime_Semacquire(0xc0002141c8)
/usr/local/go/src/runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0002141c0)
/usr/local/go/src/sync/waitgroup.go:130 +0x65
main.(*Forego).startProcess.func1(0xc00021c540, 0xc000254000, 0xc0002141c0, 0xc000202690)
/go/forego/start.go:230 +0x7d
created by main.(*Forego).startProcess
/go/forego/start.go:227 +0x805
goroutine 37 [select]:
main.(*Forego).startProcess.func2(0xc00021c540, 0xc000254000, 0x0, 0x0, 0xc000238000, 0x9, 0xc00023800b, 0x5a, 0xc000202480, 0xc000214120, ...)
/go/forego/start.go:238 +0xe5
created by main.(*Forego).startProcess
/go/forego/start.go:235 +0x8e5
goroutine 38 [IO wait]:
internal/poll.runtime_pollWait(0x7f08975eac60, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000200498, 0x72, 0x1001, 0x1000, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000200480, 0xc0000e0000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5
os.(*File).read(...)
/usr/local/go/src/os/file_posix.go:31
os.(*File).Read(0xc00020e068, 0xc0000e0000, 0x1000, 0x1000, 0x400, 0x55d9e0, 0x1)
/usr/local/go/src/os/file.go:117 +0x77
bufio.(*Reader).Read(0xc000250728, 0xc0000de800, 0x400, 0x400, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:227 +0x222
main.(*OutletFactory).LineReader(0xc000214120, 0xc0002142c4, 0xc000214208, 0x7, 0x1, 0x5c0108, 0xc00020e068, 0x0)
/go/forego/outlet.go:45 +0x2fb
created by main.(*Forego).startProcess
/go/forego/start.go:212 +0x437
goroutine 39 [IO wait]:
internal/poll.runtime_pollWait(0x7f08975eab78, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000200558, 0x72, 0x1001, 0x1000, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000200540, 0xc0000e1000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5
os.(*File).read(...)
/usr/local/go/src/os/file_posix.go:31
os.(*File).Read(0xc00020e078, 0xc0000e1000, 0x1000, 0x1000, 0x400, 0x55d9e0, 0x1)
/usr/local/go/src/os/file.go:117 +0x77
bufio.(*Reader).Read(0xc000250f28, 0xc0000dec00, 0x400, 0x400, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:227 +0x222
main.(*OutletFactory).LineReader(0xc000214120, 0xc0002142c4, 0xc000214208, 0x7, 0x1, 0x5c0108, 0xc00020e078, 0x1)
/go/forego/outlet.go:45 +0x2fb
created by main.(*Forego).startProcess
/go/forego/start.go:213 +0x4d7
goroutine 40 [semacquire]:
sync.runtime_Semacquire(0xc0002142c4)
/usr/local/go/src/runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0002142c4)
/usr/local/go/src/sync/waitgroup.go:130 +0x65
main.(*Forego).startProcess.func1(0xc00021c540, 0xc000254060, 0xc0002142c4, 0xc000202840)
/go/forego/start.go:230 +0x7d
created by main.(*Forego).startProcess
/go/forego/start.go:227 +0x805
goroutine 41 [select]:
main.(*Forego).startProcess.func2(0xc00021c540, 0xc000254060, 0x1, 0x0, 0xc000214110, 0x5, 0xc000214117, 0x5, 0xc000202480, 0xc000214120, ...)
/go/forego/start.go:238 +0xe5
created by main.(*Forego).startProcess
/go/forego/start.go:235 +0x8e5
rax 0xca
rbx 0x695400
rcx 0x46bb03
rdx 0x0
rdi 0x695550
rsi 0x80
rbp 0x7fff3b49d1c0
rsp 0x7fff3b49d178
r8 0x0
r9 0x0
r10 0x0
r11 0x286
r12 0x0
r13 0x1
r14 0x5bb142
r15 0x0
rip 0x46bb01
rflags 0x286
cs 0x33
fs 0x0
gs 0x0
@etopian I might be wrong, but I don't think that's the same issue, or linked to the one that @indapublic experienced.
I confirm this is an issue with the base nginx image:
[nduchon@host ~]$ docker run --rm -it nginx:1.21.3 bash
root@f88797fb24d5:/# nginx
2021/10/21 06:38:58 [notice] 8#8: using the "epoll" event method
2021/10/21 06:38:58 [notice] 8#8: nginx/1.21.3
2021/10/21 06:38:58 [notice] 8#8: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/10/21 06:38:58 [notice] 8#8: OS: Linux 5.10.47-linuxkit
2021/10/21 06:38:58 [notice] 8#8: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/10/21 06:38:58 [notice] 9#9: start worker processes
2021/10/21 06:38:58 [notice] 9#9: start worker process 10
2021/10/21 06:38:58 [notice] 9#9: start worker process 11
2021/10/21 06:38:58 [notice] 9#9: start worker process 12
2021/10/21 06:38:58 [notice] 9#9: start worker process 13
root@f88797fb24d5:/#
Edit: well, that's actually not an issue. I think this getrlimit(RLIMIT_NOFILE) log line is actually totally normal; flagging this message as out of date.
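In other words, the [notice] lines are informational. If you want to scan container logs for actual problems only, filtering by nginx's higher severity levels works (a sketch; the container name nginx-proxy is an assumption):

docker logs nginx-proxy 2>&1 | grep -E '\[(warn|error|crit|alert|emerg)\]'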
I have the same(?) issue (macOS 12.1, Docker 4.3.1, Compose 1.29.2). Even with the example configuration, the process shuts itself down immediately (seemingly without any error):
nginx_1 | Info: running nginx-proxy version 0.10.0-21-g3670d39
nginx_1 | Setting up DH Parameters..
nginx_1 | forego | starting dockergen.1 on port 5000
nginx_1 | forego | starting nginx.1 on port 5100
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: using the "epoll" event method
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: nginx/1.21.6
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: OS: Linux 5.10.76-linuxkit
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: start worker processes
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: start worker process 23
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: start worker process 24
nginx_1 | dockergen.1 | 2022/02/21 12:02:20 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
nginx_1 | dockergen.1 | 2022/02/21 12:02:20 Running 'nginx -s reload'
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: signal 1 (SIGHUP) received from 26, reconfiguring
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: reconfiguring
nginx_1 | dockergen.1 | 2022/02/21 12:02:20 Watching docker events
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: using the "epoll" event method
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: start worker processes
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: start worker process 29
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: start worker process 30
nginx_1 | dockergen.1 | 2022/02/21 12:02:20 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 23#23: gracefully shutting down
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 24#24: gracefully shutting down
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 24#24: exiting
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 23#23: exiting
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 24#24: exit
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 23#23: exit
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: signal 17 (SIGCHLD) received from 24
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: worker process 24 exited with code 0
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: signal 29 (SIGIO) received
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: signal 17 (SIGCHLD) received from 23
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: worker process 23 exited with code 0
nginx_1 | nginx.1 | 2022/02/21 12:02:20 [notice] 17#17: signal 29 (SIGIO) received
Switching to nginxproxy/nginx-proxy:0.9.1-alpine (suggested above as the last working version) does not result in this behaviour. I haven't tried any versions in between 0.9.1 and 0.10.0, but could do that if it would be helpful.
I think I also have this issue. The following docker-compose file from the documentation doesn't work on either Mac or Linux. Using the 0.9.1-alpine tag does seem to work.
Interestingly, the problem is intermittent: sometimes docker-compose up starts the service without the workers exiting, and sometimes it doesn't. I don't see any pattern in when it does or doesn't happen.
Compose file:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
Mac output (macOS 12.3):
will@macbook:~/Downloads/foo$ docker --version
Docker version 20.10.13, build a224086
will@macbook:~/Downloads/foo$ docker-compose --version
Docker Compose version v2.3.3
will@macbook:~/Downloads/foo$ docker-compose up --remove-orphans
[+] Running 18/18
⠿ nginx-proxy Pulled 8.6s
⠿ 32252aec0777 Pull complete 6.1s
⠿ 8b8326e14b35 Pull complete 6.9s
⠿ b94b67132f03 Pull complete 6.9s
⠿ d921102ee329 Pull complete 6.9s
⠿ a8d44b6787b7 Pull complete 7.0s
⠿ 189a8d657745 Pull complete 7.0s
⠿ 69976cfdfd6a Pull complete 7.1s
⠿ 4b1292a0de53 Pull complete 7.2s
⠿ a28c63d3827c Pull complete 7.2s
⠿ 286dd3b9bec3 Pull complete 7.4s
⠿ 934a93d4bedc Pull complete 7.4s
⠿ a338ed02477e Pull complete 7.5s
⠿ 4f4fb700ef54 Pull complete 7.5s
⠿ whoami Pulled 1.8s
⠿ 605ce1bd3f31 Pull complete 0.7s
⠿ 6f87ebbce1a4 Pull complete 0.8s
⠿ 42dfabe11397 Pull complete 0.9s
[+] Running 3/3
⠿ Network foo_default Created 0.0s
⠿ Container foo-nginx-proxy-1 Created 0.2s
⠿ Container foo-whoami-1 Created 0.2s
Attaching to foo-nginx-proxy-1, foo-whoami-1
foo-nginx-proxy-1 | Info: running nginx-proxy version 1.0.0-8-g0442ed9
foo-nginx-proxy-1 | Setting up DH Parameters..
foo-nginx-proxy-1 | forego | starting dockergen.1 on port 5000
foo-nginx-proxy-1 | forego | starting nginx.1 on port 5100
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: using the "epoll" event method
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: nginx/1.21.6
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: OS: Linux 5.10.104-linuxkit
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: getrlimit(RLIMIT_NOFILE): 1048576:1048576
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker processes
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 27
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 28
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 29
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 30
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 31
foo-whoami-1 | Listening on :8000
foo-nginx-proxy-1 | dockergen.1 | 2022/03/27 22:05:33 Template error: open /etc/nginx/certs: no such file or directory
foo-nginx-proxy-1 | dockergen.1 | 2022/03/27 22:05:33 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
foo-nginx-proxy-1 | dockergen.1 | 2022/03/27 22:05:33 Running 'nginx -s reload'
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 1 (SIGHUP) received from 33, reconfiguring
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: reconfiguring
foo-nginx-proxy-1 | dockergen.1 | 2022/03/27 22:05:33 Watching docker events
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: using the "epoll" event method
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker processes
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 37
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 38
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 39
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 40
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: start worker process 41
foo-nginx-proxy-1 | dockergen.1 | 2022/03/27 22:05:33 Template error: open /etc/nginx/certs: no such file or directory
foo-nginx-proxy-1 | dockergen.1 | 2022/03/27 22:05:33 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 27#27: gracefully shutting down
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 28#28: gracefully shutting down
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 29#29: gracefully shutting down
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 28#28: exiting
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 27#27: exiting
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 29#29: exiting
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 30#30: gracefully shutting down
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 28#28: exit
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 29#29: exit
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 30#30: exiting
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 27#27: exit
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 30#30: exit
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 31#31: gracefully shutting down
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 31#31: exiting
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 31#31: exit
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 17 (SIGCHLD) received from 30
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: worker process 30 exited with code 0
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: worker process 31 exited with code 0
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 29 (SIGIO) received
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 17 (SIGCHLD) received from 31
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: worker process 28 exited with code 0
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: worker process 29 exited with code 0
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 29 (SIGIO) received
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 17 (SIGCHLD) received from 27
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: worker process 27 exited with code 0
foo-nginx-proxy-1 | nginx.1 | 2022/03/27 22:05:33 [notice] 22#22: signal 29 (SIGIO) received
^CGracefully stopping... (press Ctrl+C again to force)
[+] Running 2/2
⠿ Container foo-whoami-1 Stopped 0.2s
⠿ Container foo-nginx-proxy-1 Stopped 0.2s
canceled
Linux output:
ubuntu@ip-172-31-3-75:~/foo$ uname -a
Linux ip-172-31-3-75 5.13.0-1019-aws #21~20.04.1-Ubuntu SMP Wed Mar 16 11:54:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ip-172-31-3-75:~/foo$ docker --version
Docker version 20.10.14, build a224086
ubuntu@ip-172-31-3-75:~/foo$ docker-compose --version
docker-compose version 1.29.2, build unknown
ubuntu@ip-172-31-3-75:~/foo$ docker-compose up --remove-orphans
Starting foo_whoami_1 ... done
Starting foo_nginx-proxy_1 ... done
Attaching to foo_whoami_1, foo_nginx-proxy_1
whoami_1 | Listening on :8000
nginx-proxy_1 | Info: running nginx-proxy version 1.0.0-8-g0442ed9
nginx-proxy_1 | Warning: A custom dhparam.pem file was provided. Best practice is to use standardized RFC7919 DHE groups instead.
nginx-proxy_1 | forego | starting dockergen.1 on port 5000
nginx-proxy_1 | forego | starting nginx.1 on port 5100
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: using the "epoll" event method
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: nginx/1.21.6
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: OS: Linux 5.13.0-1019-aws
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: start worker processes
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: start worker process 21
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: start worker process 22
nginx-proxy_1 | dockergen.1 | 2022/03/27 22:02:52 Template error: open /etc/nginx/certs: no such file or directory
nginx-proxy_1 | dockergen.1 | 2022/03/27 22:02:52 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
nginx-proxy_1 | dockergen.1 | 2022/03/27 22:02:52 Running 'nginx -s reload'
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: signal 1 (SIGHUP) received from 25, reconfiguring
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: reconfiguring
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: using the "epoll" event method
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: start worker processes
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: start worker process 27
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: start worker process 28
nginx-proxy_1 | dockergen.1 | 2022/03/27 22:02:52 Watching docker events
nginx-proxy_1 | dockergen.1 | 2022/03/27 22:02:52 Template error: open /etc/nginx/certs: no such file or directory
nginx-proxy_1 | dockergen.1 | 2022/03/27 22:02:52 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 21#21: gracefully shutting down
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 22#22: gracefully shutting down
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 21#21: exiting
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 22#22: exiting
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 21#21: exit
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 22#22: exit
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: signal 17 (SIGCHLD) received from 21
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: worker process 21 exited with code 0
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: worker process 22 exited with code 0
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: signal 29 (SIGIO) received
nginx-proxy_1 | nginx.1 | 2022/03/27 22:02:52 [notice] 16#16: signal 17 (SIGCHLD) received from 22
^CGracefully stopping... (press Ctrl+C again to force)
Stopping foo_whoami_1 ... done
Stopping foo_nginx-proxy_1 ... done
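Since the failure is intermittent, a loop like this can cycle the stack and check whether the proxy still answers after startup (a sketch; the service name and VIRTUAL_HOST come from the compose file above):

for i in $(seq 1 10); do
  docker-compose up -d
  sleep 5
  # 200 means the proxy is still serving the whoami backend
  curl -s -o /dev/null -w "run $i: HTTP %{http_code}\n" -H "Host: whoami.local" http://localhost/
  docker-compose down
done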
I have exactly the same issue reported by https://github.com/nginx-proxy/nginx-proxy/issues/1780#issuecomment-1080029015
As other comments mentioned before, 0.9.1 was the latest version that worked for me.
$ docker --version
Docker version 20.10.17, build 100c70180f
$ docker-compose --version
docker-compose version 1.29.2, build unknown
$ uname -a
Linux alarmpi 5.10.83-1-rpi-legacy-ARCH #1 SMP Tue Dec 7 15:22:30 UTC 2021 armv7l GNU/Linux
Same here. My docker-compose.yml:
version: '2'
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:0.9.1-alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
This solved the error line, but it's still not working properly... but it's progress.
I have exactly the same issue, using the latest Docker version.
Also seeing the same errors on the latest docker, docker-compose and linux kernel, running with nginx-proxy v1.3:
➜ system docker --version
Docker version 23.0.6, build ef23cbc431
➜ system docker-compose --version
Docker Compose version 2.18.0
➜ system uname -a
Linux arch-brix 6.3.2-arch1-1
Still have the same issue, even after downgrading.
Unfortunately this issue is still impossible to reproduce in a reliable way (to me it seems to be host dependent).
I still haven't been able to reproduce it on a variety of host and container versions.
I have two Raspberry Pis (model 2, with 1 GB of RAM) and I have this problem on both of them.
Found this same issue; had to revert to using version 0.8.0. Unable to understand the cause, as I couldn't pinpoint the change: everything seemed to be working fine, and today it stopped working with the latest version, 1.3.1.
Docker version 20.10.17, build 100c701
Docker Compose version v2.17.2
Found this issue too; downgraded to version 0.9.1.
Hi @indapublic, did you fix the above issue? I have been facing the same issue for the last week.
Hey @kapil26021994, this was some time ago, but I just downgraded to 0.9.1 here. I'm not using the nginx-proxy package anymore, so I can't say anything about recent versions.
@kapil26021994 could you try nginxproxy/nginx-proxy:1780 or nginxproxy/nginx-proxy:1780-alpine?
I tried to set worker_rlimit_nofile to 2 x worker_connections in /etc/nginx/nginx.conf.
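For anyone who wants to try the same tweak, a minimal sketch (the container name nginx-proxy is an assumption, and 2048 assumes the stock worker_connections of 1024; note the edit is lost when the container is recreated):

# worker_rlimit_nofile must live in the main context, so prepend it
docker exec nginx-proxy sed -i '1i worker_rlimit_nofile 2048;' /etc/nginx/nginx.conf
docker exec nginx-proxy nginx -t         # validate the edited config
docker exec nginx-proxy nginx -s reload  # apply without restarting the container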
I'd like to clarify something for those who end up here thinking they're experiencing the same issue:

- getrlimit(RLIMIT_NOFILE): xxxx:xxxx is a normal message from nginx startup, not an error (logged at the NGX_LOG_NOTICE level, as opposed to NGX_LOG_ALERT).
- signal 1 (SIGHUP) received from XX is a normal message from nginx-proxy operation, not an error. It's actually docker-gen sending SIGHUP to the nginx process after the nginx configuration has been re-rendered, so nginx can load the new config.

Hi @buchdag, do you mind also building 1780 for linux/arm/v7?
I'd like to test it on my Pi v2. Happy to build it myself if you point me to the right Dockerfile.
Thanks!
Hi @buchdag, thanks for the update. Can you please confirm one more thing? I got the logs below in Datadog. Can you confirm whether they are errors or not? My Harness CD pipeline failed because of them, so I'd like confirmation:
2024/01/28 11:39:16 [notice] 1#1: start worker process 29
2024/01/28 11:39:16 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65535:65535
2024/01/28 11:39:16 [notice] 1#1: built by gcc 12.2.1 20220924 (Alpine 12.2.1_git20220924-r10)
2024/01/28 11:39:16 [notice] 1#1: nginx/1.25.3
2024/01/28 11:39:16 [notice] 1#1: using the "epoll" event method
/docker-entrypoint.sh: Configuration complete; ready for start up
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
2024/01/28 11:37:41 [notice] 1#1: worker process 29 exited with code 0
2024/01/28 11:37:41 [notice] 1#1: signal 17 (SIGCHLD) received from 29
2024/01/28 11:37:41 [notice] 1#1: signal 29 (SIGIO) received
2024/01/28 11:37:41 [notice] 1#1: worker process 30 exited with code 0
2024/01/28 11:37:41 [notice] 1#1: signal 17 (SIGCHLD) received from 30
2024/01/28 11:37:41 [notice] 30#30: exit
2024/01/28 11:37:41 [notice] 29#29: exiting
2024/01/28 11:37:41 [notice] 30#30: exiting
2024/01/28 11:37:41 [notice] 1#1: signal 15 (SIGTERM) received, exiting
2024/01/28 11:37:41 [notice] 29#29: exit
@kapil26021994 I can't say much from this log output other than nginx is receiving SIGTERM at the end and exiting as it should.
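If you want to double-check on your side that the container exited cleanly rather than crashing, the recorded exit code can be inspected (a sketch; substitute your container name):

docker inspect --format '{{.State.ExitCode}}' nginx-proxy  # 0 means a clean exit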
@danifr I've pushed a new build of 1780 with linux/arm/v7.
Thanks a lot. I went for 1780-alpine; so far so good, but I will leave it running for a few days. I will report again in one week or so.
One week later, the nginx-proxy container is still running on my Raspberry Pi 2 (1 GB of RAM). During these 7 days there were no errors nor service interruptions, so we can call this a success. Thanks a lot @buchdag
@danifr thanks for the feedback, I'll merge the change to main this week. I'm not yet certain we've identified and fixed the same issue that others here were experiencing though.
If anyone still experiences this issue (the nginx process shutting itself down immediately without an obvious error on versions >= 0.9.2, but not on version 0.9.1) now that #2387 has been merged, please chime in.
Thanks for your great repository. I'm using it on many servers and it works perfectly. But I hit an issue today on AWS and can't find a solution. So I really hope to find help here.
docker-compose (used for many servers, only last one with issue):
docker-compose logs: