gramps-project / gramps-web

Online Genealogy System
https://www.grampsweb.org
GNU Affero General Public License v3.0

"503 Service Temporarily Unavailable" after creating DigitalOcean droplet #109

Closed (fordprefect480 closed this issue 2 years ago)

fordprefect480 commented 2 years ago

I created a Gramps Web droplet (Version 1.0 / OS Ubuntu 20.04), waited for the droplet to be created, then waited an additional 5 minutes before SSHing into the machine. I then completed the 1-click app setup. Once that completed, I opened a browser expecting to see the Gramps Web GUI-based setup process; instead, I received a 503:

[screenshot: browser showing "503 Service Temporarily Unavailable"]

I've followed the instructions here, and even though I haven't configured my domain to point to the droplet's public IP yet, I would still have expected a happy webpage at this point. Am I missing a step that is obvious to devs who have completed the dev environment setup but isn't mentioned in the DigitalOcean instructions?

Disclaimer: I am a dev, but I'm quite new to Docker and to configuring Linux distros. I'll do my best to get you whatever you need to help figure out what's going on :)

SSH session:

```
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Mon Sep 26 04:07:23 UTC 2022

  System load:              0.84
  Usage of /:               9.2% of 24.05GB
  Memory usage:             26%
  Swap usage:               0%
  Processes:                118
  Users logged in:          0
  IPv4 address for docker0: 172.17.0.1
  IPv4 address for eth0:    157.230.247.174
  IPv4 address for eth0:    10.15.0.5
  IPv6 address for eth0:    2400:6180:0:d0::fbf:5001
  IPv4 address for eth1:    10.104.0.2

0 updates can be applied immediately.

*** System restart required ***
Welcome to the Gramps Web DigitalOcean 1-click app setup!

Please enter the domain name you will use for Gramps Web:
<redacted>
Optionally, please enter the e-mail address that will be associated with your Let's Encrypt certificate:
<redacted>
Pulling grampsweb      ... done
Pulling proxy          ... done
Pulling acme-companion ... done
Creating network "grampsweb_proxy-tier" with the default driver
Creating network "grampsweb_default" with the default driver
Creating volume "grampsweb_acme" with default driver
Creating volume "grampsweb_certs" with default driver
Creating volume "grampsweb_conf" with default driver
Creating volume "grampsweb_dhparam" with default driver
Creating volume "grampsweb_vhost.d" with default driver
Creating volume "grampsweb_html" with default driver
Creating volume "grampsweb_gramps_users" with default driver
Creating volume "grampsweb_gramps_index" with default driver
Creating volume "grampsweb_gramps_thumb_cache" with default driver
Creating volume "grampsweb_gramps_secret" with default driver
Creating volume "grampsweb_gramps_db" with default driver
Creating grampsweb_grampsweb_1 ... done
Creating nginx-proxy           ... done
Creating nginx-proxy-acme      ... done
root@gramps-web:~#
```

I then restarted the droplet. Still received the 503.

Attempted to confirm that things are indeed running correctly:

```
root@gramps-web:/opt/grampsweb# docker-compose up -d
grampsweb_grampsweb_1 is up-to-date
nginx-proxy is up-to-date
nginx-proxy-acme is up-to-date
root@gramps-web:/opt/grampsweb# docker container ls
CONTAINER ID   IMAGE                                     COMMAND                  CREATED         STATUS         PORTS                                                                      NAMES
98e4e6f5c30d   nginxproxy/acme-companion                 "/bin/bash /app/entr…"   4 minutes ago   Up 4 minutes                                                                              nginx-proxy-acme
b45416aa8126   ghcr.io/gramps-project/grampsweb:latest   "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   5000/tcp                                                                   grampsweb_grampsweb_1
f9964c046275   nginxproxy/nginx-proxy                    "/app/docker-entrypo…"   4 minutes ago   Up 4 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   nginx-proxy
```
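
For what it's worth, from skimming the nginx-proxy docs it looks like the proxy builds its config from the other containers' environment variables (VIRTUAL_HOST and friends), so I'd expect the grampsweb container to end up with variables roughly like the sketch below, however the compose file actually injects them. This is just my guess at the shape, with placeholder values; it is not the actual 1-click compose file.

```yaml
# Hypothetical sketch -- illustrative values only, not the real 1-click compose file
services:
  grampsweb:
    image: ghcr.io/gramps-project/grampsweb:latest
    environment:
      VIRTUAL_HOST: familytree.example.com      # hostname nginx-proxy should route to this container
      VIRTUAL_PORT: 5000                        # port gunicorn listens on inside the container
      LETSENCRYPT_HOST: familytree.example.com  # tells acme-companion which certificate to request
      LETSENCRYPT_EMAIL: admin@example.com
```
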
DavidMStraub commented 2 years ago

Hi,

I haven't tried using it without a domain name, as the setup uses acme-companion to fetch a Let's Encrypt certificate, and I'm not sure it even works without one.

Things to try:

fordprefect480 commented 2 years ago

Resources seem OK. 70-80% memory usage is possibly a bit high.

[screenshot: droplet resource graphs showing roughly 70-80% memory usage]
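
For anyone following along, the same numbers can also be checked from the shell with standard tools (nothing Gramps-specific):

```sh
# Quick resource sanity checks on the droplet
free -h                    # overall memory and swap usage
df -h /                    # disk usage on the root filesystem
docker stats --no-stream   # one-off snapshot of per-container CPU/memory usage
```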

grampsweb logs - errors ahoy!

```
root@gramps-web:/opt/grampsweb# docker-compose logs grampsweb
Attaching to grampsweb_grampsweb_1
grampsweb_1 | Unable to init server: Could not connect: Connection refused
grampsweb_1 | Unable to init server: Could not connect: Connection refused
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
grampsweb_1 | warnings.warn(
grampsweb_1 | INFO:gramps_webapi:Rebuilding search index ...
grampsweb_1 | INFO:gramps_webapi:Done building search index.
grampsweb_1 | INFO [alembic.runtime.migration] Context impl SQLiteImpl.
grampsweb_1 | INFO [alembic.runtime.migration] Will assume non-transactional DDL.
grampsweb_1 | INFO [alembic.runtime.migration] Running upgrade -> c89728e71264, empty message
grampsweb_1 | INFO [alembic.runtime.migration] Running upgrade c89728e71264 -> e5e738d09fa7, Added configuration table
grampsweb_1 | [2022-09-26 04:09:41 +0000] [16] [INFO] Starting gunicorn 20.1.0
grampsweb_1 | [2022-09-26 04:09:41 +0000] [16] [INFO] Listening at: http://0.0.0.0:5000 (16)
grampsweb_1 | [2022-09-26 04:09:41 +0000] [16] [INFO] Using worker: sync
grampsweb_1 | [2022-09-26 04:09:41 +0000] [17] [INFO] Booting worker with pid: 17
grampsweb_1 | [2022-09-26 04:09:42 +0000] [19] [INFO] Booting worker with pid: 19
grampsweb_1 | [2022-09-26 04:09:42 +0000] [18] [INFO] Booting worker with pid: 18
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
grampsweb_1 | warnings.warn(
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
grampsweb_1 | warnings.warn(
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
grampsweb_1 | warnings.warn(
grampsweb_1 | INFO [alembic.runtime.migration] Context impl SQLiteImpl.
grampsweb_1 | INFO [alembic.runtime.migration] Will assume non-transactional DDL.
grampsweb_1 | [2022-09-26 04:28:52 +0000] [10] [INFO] Starting gunicorn 20.1.0
grampsweb_1 | [2022-09-26 04:28:52 +0000] [10] [INFO] Listening at: http://0.0.0.0:5000 (10)
grampsweb_1 | [2022-09-26 04:28:52 +0000] [10] [INFO] Using worker: sync
grampsweb_1 | [2022-09-26 04:28:52 +0000] [11] [INFO] Booting worker with pid: 11
grampsweb_1 | [2022-09-26 04:28:52 +0000] [12] [INFO] Booting worker with pid: 12
grampsweb_1 | [2022-09-26 04:28:52 +0000] [13] [INFO] Booting worker with pid: 13
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
grampsweb_1 | warnings.warn(
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
grampsweb_1 | warnings.warn(
grampsweb_1 | /usr/local/lib/python3.9/dist-packages/flask_limiter/extension.py:317: UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
```

proxy logs ``` root@gramps-web:/opt/grampsweb# docker-compose logs proxy Attaching to nginx-proxy nginx-proxy | Info: running nginx-proxy version 1.0.1-6-gc4ad18f nginx-proxy | Setting up DH Parameters.. nginx-proxy | forego | starting dockergen.1 on port 5000 nginx-proxy | forego | starting nginx.1 on port 5100 nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: nginx/1.21.6 nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: OS: Linux 5.4.0-124-generic nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: getrlimit(RLIMIT_NOFILE): 1048576:1048576 nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:09:32 [notice] 17#17: start worker process 22 nginx-proxy | dockergen.1 | 2022/09/26 04:09:33 Generated '/etc/nginx/conf.d/default.conf' from 3 containers nginx-proxy | dockergen.1 | 2022/09/26 04:09:33 Running 'nginx -s reload' nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: signal 1 (SIGHUP) received from 24, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: reconfiguring nginx-proxy | dockergen.1 | 2022/09/26 04:09:33 Watching docker events nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: start worker process 28 nginx-proxy | dockergen.1 | 2022/09/26 04:09:33 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload' nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 22#22: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 22#22: exiting nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 22#22: exit nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: signal 17 (SIGCHLD) received from 22 nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: worker process 22 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:09:33 [notice] 17#17: signal 29 (SIGIO) received nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: signal 1 (SIGHUP) received from 39, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: start worker process 40 nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 28#28: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 28#28: exiting nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 28#28: exit nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: signal 17 (SIGCHLD) received from 28 nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: worker process 28 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:09:36 [notice] 17#17: signal 29 (SIGIO) received nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: signal 1 (SIGHUP) received from 51, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 
17#17: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: start worker process 52 nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 40#40: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 40#40: exiting nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 40#40: exit nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: signal 17 (SIGCHLD) received from 40 nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: worker process 40 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:09:37 [notice] 17#17: signal 29 (SIGIO) received nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: signal 1 (SIGHUP) received from 62, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: start worker process 63 nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 52#52: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 52#52: exiting nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 52#52: exit nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: signal 17 (SIGCHLD) received from 52 nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: worker process 52 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:09:42 [notice] 17#17: signal 29 (SIGIO) received nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:09:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:10:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | 157.230.247.174 61.245.157.86 - - [26/Sep/2022:04:11:09 +0000] "GET / HTTP/1.1" 503 190 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0" "-" nginx-proxy | nginx.1 | 157.230.247.174 61.245.157.86 - - [26/Sep/2022:04:11:10 +0000] "GET /favicon.ico HTTP/1.1" 503 190 "http://157.230.247.174/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:11:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:12:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:13:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:14:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:15:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:16:51 +0000] "GET 
https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:17:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:18:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:19:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:20:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:21:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:22:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:23:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:24:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:25:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:26:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:27:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | SIGQUIT: quit nginx-proxy | PC=0x463aa1 m=0 sigcode=0 nginx-proxy | nginx-proxy | goroutine 0 [idle]: nginx-proxy | runtime.futex() nginx-proxy | /usr/local/go/src/runtime/sys_linux_amd64.s:552 +0x21 nginx-proxy | runtime.futexsleep(0x1a9?, 0x676ec0?, 0xc00001c000?) nginx-proxy | /usr/local/go/src/runtime/os_linux.go:66 +0x36 nginx-proxy | runtime.notesleep(0x677008) nginx-proxy | /usr/local/go/src/runtime/lock_futex.go:159 +0x87 nginx-proxy | runtime.mPark() nginx-proxy | /usr/local/go/src/runtime/proc.go:1449 +0x25 nginx-proxy | runtime.stoplockedm() nginx-proxy | /usr/local/go/src/runtime/proc.go:2422 +0x65 nginx-proxy | runtime.schedule() nginx-proxy | /usr/local/go/src/runtime/proc.go:3119 +0x3d nginx-proxy | runtime.park_m(0xc0000d0000?) 
nginx-proxy | /usr/local/go/src/runtime/proc.go:3336 +0x14d nginx-proxy | runtime.mcall() nginx-proxy | /usr/local/go/src/runtime/asm_amd64.s:425 +0x43 nginx-proxy | nginx-proxy | goroutine 1 [chan receive]: nginx-proxy | main.runStart(0x672548?, {0xc0000101a0, 0x0, 0xc00005e000?}) nginx-proxy | /go/forego/start.go:331 +0x33f nginx-proxy | main.main() nginx-proxy | /go/forego/main.go:33 +0x279 nginx-proxy | nginx-proxy | goroutine 7 [chan receive]: nginx-proxy | main.(*Forego).monitorInterrupt(0xc0000ac5b0) nginx-proxy | /go/forego/start.go:157 +0xe7 nginx-proxy | created by main.runStart nginx-proxy | /go/forego/start.go:286 +0x16d nginx-proxy | nginx-proxy | goroutine 8 [IO wait]: nginx-proxy | internal/poll.runtime_pollWait(0x7f79126f47a8, 0x72) nginx-proxy | /usr/local/go/src/runtime/netpoll.go:302 +0x89 nginx-proxy | internal/poll.(*pollDesc).wait(0xc0000524e0?, 0xc0000da000?, 0x1) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 nginx-proxy | internal/poll.(*pollDesc).waitRead(...) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:88 nginx-proxy | internal/poll.(*FD).Read(0xc0000524e0, {0xc0000da000, 0x1000, 0x1000}) nginx-proxy | /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a nginx-proxy | os.(*File).read(...) nginx-proxy | /usr/local/go/src/os/file_posix.go:31 nginx-proxy | os.(*File).Read(0xc00000e080, {0xc0000da000?, 0x400?, 0x53d300?}) nginx-proxy | /usr/local/go/src/os/file.go:119 +0x5e nginx-proxy | bufio.(*Reader).Read(0xc000030f20, {0xc0000dc000, 0x400, 0x0?}) nginx-proxy | /usr/local/go/src/bufio/bufio.go:236 +0x1b4 nginx-proxy | main.(*OutletFactory).LineReader(0x0?, 0x0?, {0xc000018210, 0xb}, 0x0?, {0x5998e8?, 0xc00000e080?}, 0x0?) nginx-proxy | /go/forego/outlet.go:45 +0x2b5 nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:212 +0x398 nginx-proxy | nginx-proxy | goroutine 9 [IO wait]: nginx-proxy | internal/poll.runtime_pollWait(0x7f79126f45c8, 0x72) nginx-proxy | /usr/local/go/src/runtime/netpoll.go:302 +0x89 nginx-proxy | internal/poll.(*pollDesc).wait(0xc0000525a0?, 0xc0000db000?, 0x1) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 nginx-proxy | internal/poll.(*pollDesc).waitRead(...) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:88 nginx-proxy | internal/poll.(*FD).Read(0xc0000525a0, {0xc0000db000, 0x1000, 0x1000}) nginx-proxy | /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a nginx-proxy | os.(*File).read(...) nginx-proxy | /usr/local/go/src/os/file_posix.go:31 nginx-proxy | os.(*File).Read(0xc00000e090, {0xc0000db000?, 0x400?, 0x53d300?}) nginx-proxy | /usr/local/go/src/os/file.go:119 +0x5e nginx-proxy | bufio.(*Reader).Read(0xc000065f20, {0xc0000e3800, 0x400, 0xc0000e600a?}) nginx-proxy | /usr/local/go/src/bufio/bufio.go:236 +0x1b4 nginx-proxy | main.(*OutletFactory).LineReader(0x0?, 0x0?, {0xc000018210, 0xb}, 0x0?, {0x5998e8?, 0xc00000e090?}, 0x0?) nginx-proxy | /go/forego/outlet.go:45 +0x2b5 nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:213 +0x48c nginx-proxy | nginx-proxy | goroutine 10 [semacquire]: nginx-proxy | sync.runtime_Semacquire(0x0?) nginx-proxy | /usr/local/go/src/runtime/sema.go:56 +0x25 nginx-proxy | sync.(*WaitGroup).Wait(0x0?) 
nginx-proxy | /usr/local/go/src/sync/waitgroup.go:136 +0x52 nginx-proxy | main.(*Forego).startProcess.func1() nginx-proxy | /go/forego/start.go:230 +0x8f nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:227 +0x745 nginx-proxy | nginx-proxy | goroutine 11 [select]: nginx-proxy | main.(*Forego).startProcess.func2() nginx-proxy | /go/forego/start.go:238 +0x173 nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:235 +0x8b2 nginx-proxy | nginx-proxy | goroutine 12 [IO wait]: nginx-proxy | internal/poll.runtime_pollWait(0x7f79126f44d8, 0x72) nginx-proxy | /usr/local/go/src/runtime/netpoll.go:302 +0x89 nginx-proxy | internal/poll.(*pollDesc).wait(0xc000052720?, 0xc0000de000?, 0x1) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 nginx-proxy | internal/poll.(*pollDesc).waitRead(...) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:88 nginx-proxy | internal/poll.(*FD).Read(0xc000052720, {0xc0000de000, 0x1000, 0x1000}) nginx-proxy | /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a nginx-proxy | os.(*File).read(...) nginx-proxy | /usr/local/go/src/os/file_posix.go:31 nginx-proxy | os.(*File).Read(0xc00000e0a8, {0xc0000de000?, 0x400?, 0x53d300?}) nginx-proxy | /usr/local/go/src/os/file.go:119 +0x5e nginx-proxy | bufio.(*Reader).Read(0xc000066f20, {0xc0000f6000, 0x400, 0xc0000eef0a?}) nginx-proxy | /usr/local/go/src/bufio/bufio.go:236 +0x1b4 nginx-proxy | main.(*OutletFactory).LineReader(0x0?, 0x0?, {0xc000018268, 0x7}, 0x0?, {0x5998e8?, 0xc00000e0a8?}, 0x0?) nginx-proxy | /go/forego/outlet.go:45 +0x2b5 nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:212 +0x398 nginx-proxy | nginx-proxy | goroutine 13 [IO wait]: nginx-proxy | internal/poll.runtime_pollWait(0x7f79126f43e8, 0x72) nginx-proxy | /usr/local/go/src/runtime/netpoll.go:302 +0x89 nginx-proxy | internal/poll.(*pollDesc).wait(0xc0000527e0?, 0xc0000df000?, 0x1) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 nginx-proxy | internal/poll.(*pollDesc).waitRead(...) nginx-proxy | /usr/local/go/src/internal/poll/fd_poll_runtime.go:88 nginx-proxy | internal/poll.(*FD).Read(0xc0000527e0, {0xc0000df000, 0x1000, 0x1000}) nginx-proxy | /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a nginx-proxy | os.(*File).read(...) nginx-proxy | /usr/local/go/src/os/file_posix.go:31 nginx-proxy | os.(*File).Read(0xc00000e0b8, {0xc0000df000?, 0x400?, 0x53d300?}) nginx-proxy | /usr/local/go/src/os/file.go:119 +0x5e nginx-proxy | bufio.(*Reader).Read(0xc000064f20, {0xc0000ecc00, 0x400, 0xc00008280a?}) nginx-proxy | /usr/local/go/src/bufio/bufio.go:236 +0x1b4 nginx-proxy | main.(*OutletFactory).LineReader(0x0?, 0x0?, {0xc000018268, 0x7}, 0x0?, {0x5998e8?, 0xc00000e0b8?}, 0x0?) nginx-proxy | /go/forego/outlet.go:45 +0x2b5 nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:213 +0x48c nginx-proxy | nginx-proxy | goroutine 14 [semacquire]: nginx-proxy | sync.runtime_Semacquire(0x0?) nginx-proxy | /usr/local/go/src/runtime/sema.go:56 +0x25 nginx-proxy | sync.(*WaitGroup).Wait(0x0?) 
nginx-proxy | /usr/local/go/src/sync/waitgroup.go:136 +0x52 nginx-proxy | main.(*Forego).startProcess.func1() nginx-proxy | /go/forego/start.go:230 +0x8f nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:227 +0x745 nginx-proxy | nginx-proxy | goroutine 15 [select]: nginx-proxy | main.(*Forego).startProcess.func2() nginx-proxy | /go/forego/start.go:238 +0x173 nginx-proxy | created by main.(*Forego).startProcess nginx-proxy | /go/forego/start.go:235 +0x8b2 nginx-proxy | nginx-proxy | goroutine 17 [syscall]: nginx-proxy | os/signal.signal_recv() nginx-proxy | /usr/local/go/src/runtime/sigqueue.go:151 +0x2f nginx-proxy | os/signal.loop() nginx-proxy | /usr/local/go/src/os/signal/signal_unix.go:23 +0x19 nginx-proxy | created by os/signal.Notify.func1.1 nginx-proxy | /usr/local/go/src/os/signal/signal.go:151 +0x2a nginx-proxy | nginx-proxy | rax 0xca nginx-proxy | rbx 0x0 nginx-proxy | rcx 0x463aa3 nginx-proxy | rdx 0x0 nginx-proxy | rdi 0x677008 nginx-proxy | rsi 0x80 nginx-proxy | rbp 0x7ffd1d77b520 nginx-proxy | rsp 0x7ffd1d77b4d8 nginx-proxy | r8 0x0 nginx-proxy | r9 0x0 nginx-proxy | r10 0x0 nginx-proxy | r11 0x286 nginx-proxy | r12 0x43c140 nginx-proxy | r13 0x0 nginx-proxy | r14 0x676920 nginx-proxy | r15 0x7f7912850963 nginx-proxy | rip 0x463aa1 nginx-proxy | rflags 0x286 nginx-proxy | cs 0x33 nginx-proxy | fs 0x0 nginx-proxy | gs 0x0 nginx-proxy | Info: running nginx-proxy version 1.0.1-6-gc4ad18f nginx-proxy | Warning: A custom dhparam.pem file was provided. Best practice is to use standardized RFC7919 DHE groups instead. nginx-proxy | forego | starting dockergen.1 on port 5000 nginx-proxy | forego | starting nginx.1 on port 5100 nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: nginx/1.21.6 nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: OS: Linux 5.4.0-126-generic nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: getrlimit(RLIMIT_NOFILE): 1048576:1048576 nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: start worker process 20 nginx-proxy | dockergen.1 | 2022/09/26 04:28:47 Generated '/etc/nginx/conf.d/default.conf' from 3 containers nginx-proxy | dockergen.1 | 2022/09/26 04:28:47 Running 'nginx -s reload' nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: signal 1 (SIGHUP) received from 22, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: reconfiguring nginx-proxy | dockergen.1 | 2022/09/26 04:28:47 Watching docker events nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: start worker process 24 nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 20#20: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 20#20: exiting nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 20#20: exit nginx-proxy | dockergen.1 | 2022/09/26 04:28:47 Contents of /etc/nginx/conf.d/default.conf did not change. 
Skipping notification 'nginx -s reload' nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: signal 17 (SIGCHLD) received from 20 nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: worker process 20 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:28:47 [notice] 15#15: signal 29 (SIGIO) received nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: signal 1 (SIGHUP) received from 34, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: start worker process 35 nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 24#24: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 24#24: exiting nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 24#24: exit nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: signal 17 (SIGCHLD) received from 24 nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: worker process 24 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:28:50 [notice] 15#15: signal 29 (SIGIO) received nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:28:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: signal 1 (SIGHUP) received from 45, reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: reconfiguring nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: using the "epoll" event method nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: start worker processes nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: start worker process 46 nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 35#35: gracefully shutting down nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 35#35: exiting nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 35#35: exit nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: signal 17 (SIGCHLD) received from 35 nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: worker process 35 exited with code 0 nginx-proxy | nginx.1 | 2022/09/26 04:28:51 [notice] 15#15: signal 29 (SIGIO) received nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:29:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | 157.230.247.174 40.77.167.96 - - [26/Sep/2022:04:29:52 +0000] "GET /robots.txt HTTP/2.0" 503 190 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-" nginx-proxy | nginx.1 | 157.230.247.174 40.77.167.96 - - [26/Sep/2022:04:29:52 +0000] "GET /robots.txt HTTP/2.0" 503 190 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-" nginx-proxy | nginx.1 | 157.230.247.174 207.46.13.4 - - [26/Sep/2022:04:29:55 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/103.0.5060.134 Safari/537.36" "-" nginx-proxy | nginx.1 | 157.230.247.174 61.245.157.86 - - [26/Sep/2022:04:30:29 +0000] "GET / HTTP/1.1" 503 190 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0" "-" nginx-proxy | nginx.1 | 157.230.247.174 61.245.157.86 - - 
[26/Sep/2022:04:30:30 +0000] "GET / HTTP/1.1" 503 190 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:30:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" nginx-proxy | nginx.1 | vpngate.worklifebeyond.com 157.230.41.250 - - [26/Sep/2022:04:31:51 +0000] "GET https://vpngate.worklifebeyond.com HTTP/1.0" 503 190 "-" "check_http/v2.2 (monitoring-plugins 2.2)" "-" ... ```
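
I guess the config that dockergen generated could also be inspected directly; the log above says it was written to /etc/nginx/conf.d/default.conf inside the proxy container, so presumably something like:

```sh
# Dump the nginx config that docker-gen generated inside the nginx-proxy container
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf
```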

Hmm, there's nothing at /opt/grampsweb/letsencrypt.env; is that definitely the correct path?

```
root@gramps-web:/opt/grampsweb# cat /opt/grampsweb/letsencrypt.env
cat: /opt/grampsweb/letsencrypt.env: No such file or directory
root@gramps-web:/opt/grampsweb# ls
docker-compose.yml  firstlogin.py  firstlogin.sh  media  nginx_proxy.conf
```
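
firstlogin.sh is sitting right there in the directory, so I'm guessing re-running it regenerates the env file; something along these lines (assuming it's the same script that ran on first login):

```sh
cd /opt/grampsweb
bash firstlogin.sh   # re-runs the 1-click prompts for domain name and Let's Encrypt e-mail
```
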
fordprefect480 commented 2 years ago

I ran firstlogin.sh to regenerate the letsencrypt.env file and here's what's in it:

```
root@gramps-web:/opt/grampsweb# cat letsencrypt.env
VIRTUAL_HOST=familytree.owen.nz
LETSENCRYPT_HOST=familytree.owen.nz
LETSENCRYPT_EMAIL=owen.symes@gmail.com
```
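
Assuming docker-compose passes these through to the grampsweb container (and given that env vars are only applied when a container is created, not on a plain restart), I guess a quick way to check whether they actually took effect would be something like:

```sh
# Check whether the proxy-related variables are visible inside the running container
docker exec grampsweb_grampsweb_1 env | grep -E 'VIRTUAL_HOST|LETSENCRYPT'
```
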
DavidMStraub commented 2 years ago

Hi, the memory usage is indeed a bit high. You could try a larger droplet and/or setting the GUNICORN_NUM_WORKERS environment variable in /opt/grampsweb/docker-compose.yml to 2.
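
Roughly like this, i.e. a sketch of the kind of entry to look for under the grampsweb service (the exact layout of the shipped file may differ), followed by docker-compose up -d so the container is recreated with the new value:

```yaml
# Sketch: limit gunicorn to 2 workers to reduce memory usage
services:
  grampsweb:
    environment:
      GUNICORN_NUM_WORKERS: 2
```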

Have you configured your subdomain correctly, i.e. added an A record pointing to your DO IP address?
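
You can check it from the droplet itself with dig (or nslookup); for example, something like this should print the droplet's public IP once the record has propagated:

```sh
dig +short familytree.owen.nz A
```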

It would be great if we could get this to work and update the documentation if we find something that doesn't work the way it's currently documented.

DavidMStraub commented 2 years ago

> I ran firstlogin.sh to regenerate the letsencrypt.env file and here's what's in it:

Have you restarted the containers after generating the file? (Or, even easier, restarted the droplet.)
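
A plain docker-compose restart probably isn't enough, since the env file is only read when a container is created; something like this (assuming the service is named grampsweb) should force a recreate:

```sh
cd /opt/grampsweb
docker-compose up -d --force-recreate grampsweb
```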

fordprefect480 commented 2 years ago

OK here's what I've done:

Will wait a few hours for the DNS change to propagate.

Were there no clues in the logs in my previous comment? I thought the "Unable to init server: Could not connect: Connection refused" was a bit suspicious, but I don't know much about nginx.

fordprefect480 commented 2 years ago

Oh wow, I'm in:

[screenshot: Gramps Web loading successfully in the browser]

So I can get to it only via my familytree.owen.nz address; browsing directly to https://157.230.247.174/ still doesn't seem to work.
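
I'm guessing that's because nginx-proxy routes purely on the requested hostname, so hitting the bare IP falls through to its default 503 backend. If so, something like this (untested) should reach the app even by IP, by pinning the hostname to the droplet's address for a single request:

```sh
# --resolve maps the hostname to the given IP just for this request (sets SNI and Host correctly)
curl --resolve familytree.owen.nz:443:157.230.247.174 https://familytree.owen.nz/
```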

Trouble is, I'm not sure which of the above fixed it. Perhaps it was the reboot after generating the letsencrypt.env file?

DavidMStraub commented 2 years ago

Great!

I just created a fresh droplet and I'm also in, but I hit the issue that I SSHed in too early: the first-run script was already shown, but docker-compose was not installed yet, so it failed to start and I had to run docker-compose up -d manually. So the installation script needs some tuning.
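
For anyone else who hits that: the manual recovery is simply to wait until docker-compose is available on the droplet and then bring the stack up from the compose directory, roughly:

```sh
cd /opt/grampsweb
docker-compose up -d   # starts grampsweb, nginx-proxy and acme-companion
```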

Please let me know if there are further issues.

fordprefect480 commented 2 years ago

I encountered the same issue the first time I created my droplet. I blew it away and created a new one, this time following your explicit instructions to wait a few minutes before SSHing in, and it worked.

Not sure if we got any learnings out of this, but I'll start filling out my tree now and keep you posted if I encounter any issues. Perhaps I'll even brush up on Lit and send some PRs your way.

Cheers mate