sourcegraph / sourcegraph-public-snapshot


Docker Compose v3.27.0 frontend intermittently fails health check at startup #21783

Closed: DaedalusG closed this issue 3 years ago

DaedalusG commented 3 years ago

#### Steps to reproduce:

  1. Clone the Sourcegraph Docker Compose deployment repository (`deploy-sourcegraph-docker`).
  2. Create and save the following script, which repeatedly brings Sourcegraph up and tears it down:

     ```bash
     #!/usr/bin/env bash
     cd "$(dirname "${BASH_SOURCE[0]}")"
     set -euxo pipefail

     finish() {
       echo "exiting..."
       exit 0
     }
     trap finish SIGINT

     catch_failure() {
       docker ps
       docker-compose logs
       finish
     }

     cd docker-compose

     while true; do
       docker-compose up -d || catch_failure
       docker-compose down
     done
     ```

  3. Run the script from the directory in which you saved it.

#### Expected behavior:
Sourcegraph should start and stop indefinitely without failure.

#### Actual behavior:
Occasionally, Sourcegraph fails to start and the health check reports the following error:

```
ERROR: for sourcegraph-frontend-0  Container "2bfbffb8a4dc" is unhealthy.
ERROR: Encountered errors while bringing up the project.
```

This is likely the result of an overly strict health check.
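
For context, Compose marks a container unhealthy once its `healthcheck` command has failed `retries` consecutive probes at the configured `interval`; if that window is shorter than the frontend's worst-case boot time, a startup that would have succeeded moments later still trips the check. A minimal sketch of the relevant stanza, with illustrative values only (the actual definition lives in the deployment's `docker-compose.yaml`):

```yaml
# Sketch of a Compose healthcheck stanza. Values and the probe
# command are illustrative, not the ones shipped with 3.27.
sourcegraph-frontend-0:
  healthcheck:
    test: ["CMD-SHELL", "wget -q -O /dev/null http://127.0.0.1:3080/healthz || exit 1"]
    interval: 5s       # probe every 5 seconds
    timeout: 10s       # each probe may take up to 10 seconds
    retries: 5         # 5 consecutive failures => "unhealthy"
    start_period: 60s  # grace window before failures count (file format 2.3+/3.4+)
```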

Below is the output from one failing run of the script:
```bash
+ docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`

Creating network "docker-compose_sourcegraph" with the default driver
Creating jaeger                        ... done
Creating syntect-server                ... done
Creating sourcegraph-frontend-internal ... done
Creating precise-code-intel-worker     ... done
Creating gitserver-0                   ... done
Creating redis-store                   ... done
Creating codeintel-db                  ... done
Creating query-runner                  ... done
Creating caddy                         ... done
Creating pgsql                         ... done
Creating github-proxy                  ... done
Creating cadvisor                      ... done
Creating searcher-0                    ... done
Creating repo-updater                  ... done
Creating grafana                       ... done
Creating zoekt-webserver-0             ... done
Creating symbols-0                     ... done
Creating zoekt-indexserver-0           ... done
Creating codeinsights-db               ... done
Creating redis-cache                   ... done
Creating minio                         ... done
Creating prometheus                    ... done

ERROR: for sourcegraph-frontend-0  Container "2bfbffb8a4dc" is unhealthy.
ERROR: Encountered errors while bringing up the project.
+ catch_failure
+ docker ps
CONTAINER ID   IMAGE                                          COMMAND                  CREATED          STATUS                             PORTS                                                                                                                                              NAMES
eceb621bc4be   sourcegraph/prometheus:3.27.0                  "/bin/prom-wrapper"      39 seconds ago   Up 7 seconds                       0.0.0.0:9090->9090/tcp                                                                                                                             prometheus
21409cd05658   sourcegraph/redis-cache:3.27.0                 "/sbin/tini -- redis…"   39 seconds ago   Up 12 seconds                      6379/tcp                                                                                                                                           redis-cache
b506dbbb3cb2   sourcegraph/search-indexer:3.27.0              "/sbin/tini -- zoekt…"   39 seconds ago   Up 13 seconds                                                                                                                                                                         zoekt-indexserver-0
41d93b645b91   sourcegraph/codeinsights-db:3.27.0             "docker-entrypoint.s…"   39 seconds ago   Up 13 seconds                      5432/tcp                                                                                                                                           codeinsights-db
cd60abb6dfcc   sourcegraph/minio:3.27.0                       "/usr/bin/docker-ent…"   39 seconds ago   Up 12 seconds (healthy)            9000/tcp                                                                                                                                           minio
53e383acc73f   sourcegraph/grafana:3.27.0                     "/entry.sh"              39 seconds ago   Up 4 seconds                       3000/tcp, 0.0.0.0:3370->3370/tcp                                                                                                                   grafana
90345f822ef1   sourcegraph/cadvisor:3.27.0                    "/usr/bin/cadvisor -…"   39 seconds ago   Up 13 seconds (health: starting)   8080/tcp                                                                                                                                           cadvisor
96cdafc1a72d   sourcegraph/codeintel-db:3.27.0                "/postgres.sh"           39 seconds ago   Up 13 seconds (healthy)            5432/tcp                                                                                                                                           codeintel-db
eebc4be70cac   sourcegraph/github-proxy:3.27.0                "/sbin/tini -- /usr/…"   39 seconds ago   Up 13 seconds                                                                                                                                                                         github-proxy
49946e85d597   caddy/caddy:2.0.0-alpine                       "caddy run --config …"   39 seconds ago   Up 1 second                        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 2019/tcp                                                                                                 caddy
bfa10b5617ff   sourcegraph/symbols:3.27.0                     "/sbin/tini -- /usr/…"   39 seconds ago   Up 14 seconds (healthy)            3184/tcp                                                                                                                                           symbols-0
b2f45a338beb   sourcegraph/indexed-searcher:3.27.0            "/sbin/tini -- /bin/…"   39 seconds ago   Up 12 seconds (healthy)                                                                                                                                                               zoekt-webserver-0
be1c6138491e   sourcegraph/repo-updater:3.27.0                "/sbin/tini -- /usr/…"   39 seconds ago   Up 14 seconds                                                                                                                                                                         repo-updater
cc1b32c6e6df   sourcegraph/searcher:3.27.0                    "/sbin/tini -- /usr/…"   39 seconds ago   Up 12 seconds (healthy)                                                                                                                                                               searcher-0
f2c72110f65a   sourcegraph/postgres-12.6:3.27.0               "/postgres.sh"           39 seconds ago   Up 12 seconds (healthy)            5432/tcp                                                                                                                                           pgsql
9beac3085511   sourcegraph/gitserver:3.27.0                   "/sbin/tini -- /usr/…"   39 seconds ago   Up 15 seconds                                                                                                                                                                         gitserver-0
13cf8cc61a2a   sourcegraph/redis-store:3.27.0                 "/sbin/tini -- redis…"   39 seconds ago   Up 16 seconds                      6379/tcp                                                                                                                                           redis-store
796ee20e599e   sourcegraph/query-runner:3.27.0                "/sbin/tini -- /usr/…"   39 seconds ago   Up 17 seconds                                                                                                                                                                         query-runner
718c5d81f66e   sourcegraph/precise-code-intel-worker:3.27.0   "/sbin/tini -- /usr/…"   39 seconds ago   Up 15 seconds (healthy)            3188/tcp                                                                                                                                           precise-code-intel-worker
d3022b73e410   sourcegraph/syntax-highlighter:3.27.0          "sh -c '/http-server…"   39 seconds ago   Up 15 seconds (healthy)            9238/tcp                                                                                                                                           syntect-server
2bfbffb8a4dc   sourcegraph/frontend:3.27.0                    "/sbin/tini -- /usr/…"   39 seconds ago   Up 8 seconds (healthy)                                                                                                                                                                sourcegraph-frontend-internal
c160b18ac07e   sourcegraph/jaeger-all-in-one:3.27.0           "/go/bin/all-in-one-…"   39 seconds ago   Up Less than a second              5775/udp, 0.0.0.0:5778->5778/tcp, 0.0.0.0:6831-6832->6831-6832/tcp, 0.0.0.0:14250->14250/tcp, 6831-6832/udp, 0.0.0.0:16686->16686/tcp, 14268/tcp   jaeger
+ docker-compose logs
Attaching to prometheus, redis-cache, zoekt-indexserver-0, codeinsights-db, minio, grafana, cadvisor, codeintel-db, github-proxy, caddy, symbols-0, zoekt-webserver-0, repo-updater, searcher-0, pgsql, gitserver-0, redis-store, query-runner, precise-code-intel-worker, syntect-server, sourcegraph-frontend-internal, jaeger
cadvisor                         | W0604 17:25:17.464738       1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor                         | W0604 17:25:17.466870       1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor                         | W0604 17:25:17.471999       1 manager.go:288] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
codeinsights-db                  | 
codeinsights-db                  | PostgreSQL Database directory appears to contain a database; Skipping initialization
codeinsights-db                  | 
codeinsights-db                  | 2021-06-04 17:25:12.905 UTC [1] LOG:  starting PostgreSQL 12.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.3.0) 9.3.0, 64-bit
codeinsights-db                  | 2021-06-04 17:25:12.905 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
codeinsights-db                  | 2021-06-04 17:25:12.905 UTC [1] LOG:  listening on IPv6 address "::", port 5432
codeinsights-db                  | 2021-06-04 17:25:12.907 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
codeinsights-db                  | 2021-06-04 17:25:12.928 UTC [21] LOG:  database system was shut down at 2021-06-04 17:24:28 UTC
codeinsights-db                  | 2021-06-04 17:25:12.932 UTC [1] LOG:  database system is ready to accept connections
codeinsights-db                  | 2021-06-04 17:25:12.933 UTC [27] LOG:  TimescaleDB background worker launcher connected to shared catalogs
codeintel-db                     | + '[' '!' -d /conf ']'
codeintel-db                     | + exit 0
codeintel-db                     | 2021-06-04 17:25:12.774 UTC [1] LOG:  starting PostgreSQL 12.6 (Debian 12.6-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
codeintel-db                     | 2021-06-04 17:25:12.774 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
codeintel-db                     | 2021-06-04 17:25:12.774 UTC [1] LOG:  listening on IPv6 address "::", port 5432
codeintel-db                     | 2021-06-04 17:25:12.776 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
codeintel-db                     | 2021-06-04 17:25:12.797 UTC [9] LOG:  database system was shut down at 2021-06-04 17:24:28 UTC
codeintel-db                     | 2021-06-04 17:25:12.803 UTC [1] LOG:  database system is ready to accept connections
caddy                            | {"level":"info","ts":1622827523.9821403,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy                            | {"level":"info","ts":1622827523.9835606,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
caddy                            | {"level":"info","ts":1622827523.9837923,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
caddy                            | 2021/06/04 17:25:23 [INFO][cache:0xc000639400] Started certificate maintenance routine
caddy                            | {"level":"info","ts":1622827523.9841385,"logger":"tls","msg":"cleaned up storage units"}
caddy                            | {"level":"info","ts":1622827523.9843764,"msg":"autosaved config","file":"/caddy-storage/config/caddy/autosave.json"}
caddy                            | {"level":"info","ts":1622827523.9843967,"msg":"serving initial configuration"}
jaeger                           | 2021/06/04 17:25:25 maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS
jaeger                           | {"level":"info","ts":1622827525.8322973,"caller":"flags/service.go:115","msg":"Mounting metrics handler on admin server","route":"/metrics"}
jaeger                           | {"level":"info","ts":1622827525.8327086,"caller":"flags/admin.go:115","msg":"Mounting health check on admin server","route":"/"}
jaeger                           | {"level":"info","ts":1622827525.8328166,"caller":"flags/admin.go:121","msg":"Starting admin HTTP server","http-port":14269}
jaeger                           | {"level":"info","ts":1622827525.8328335,"caller":"flags/admin.go:107","msg":"Admin server started","http-port":14269,"health-status":"unavailable"}
jaeger                           | {"level":"info","ts":1622827525.83637,"caller":"memory/factory.go:56","msg":"Memory storage initialized","configuration":{"MaxTraces":20000}}
jaeger                           | {"level":"info","ts":1622827525.8368444,"caller":"static/strategy_store.go:78","msg":"No sampling strategies provided, using defaults"}
jaeger                           | {"level":"info","ts":1622827525.846472,"caller":"server/thrift.go:72","msg":"Starting jaeger-collector TChannel server","port":14267}
jaeger                           | {"level":"warn","ts":1622827525.8465135,"caller":"server/thrift.go:73","msg":"TChannel has been deprecated and will be removed in a future release"}
jaeger                           | {"level":"info","ts":1622827525.8465512,"caller":"server/grpc.go:78","msg":"Starting jaeger-collector gRPC server","grpc-port":14250}
jaeger                           | {"level":"info","ts":1622827525.8465586,"caller":"server/http.go:46","msg":"Starting jaeger-collector HTTP server","http-host-port":":14268"}
jaeger                           | {"level":"info","ts":1622827525.8467062,"caller":"grpc/builder.go:66","msg":"Agent requested insecure grpc connection to collector(s)"}
jaeger                           | {"level":"info","ts":1622827525.846943,"caller":"grpc@v1.27.1/clientconn.go:106","msg":"parsed scheme: \"\"","system":"grpc","grpc_log":true}
jaeger                           | {"level":"info","ts":1622827525.8472676,"caller":"grpc@v1.27.1/clientconn.go:106","msg":"scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
jaeger                           | {"level":"info","ts":1622827525.847289,"caller":"passthrough/passthrough.go:48","msg":"ccResolverWrapper: sending update to cc: {[{127.0.0.1:14250  <nil> 0 <nil>}] <nil> <nil>}","system":"grpc","grpc_log":true}
jaeger                           | {"level":"info","ts":1622827525.8474507,"caller":"grpc@v1.27.1/clientconn.go:948","msg":"ClientConn switching balancer to \"round_robin\"","system":"grpc","grpc_log":true}
jaeger                           | {"level":"info","ts":1622827525.8479323,"caller":"all-in-one/main.go:203","msg":"Starting agent"}
jaeger                           | {"level":"info","ts":1622827525.8479908,"caller":"querysvc/query_service.go:133","msg":"Archive storage not created","reason":"archive storage not supported"}
jaeger                           | {"level":"info","ts":1622827525.8480005,"caller":"all-in-one/main.go:232","msg":"Archive storage not initialized"}
jaeger                           | {"level":"info","ts":1622827525.848353,"caller":"app/server.go:109","msg":"Query server started","port":16686}
jaeger                           | {"level":"info","ts":1622827525.8483918,"caller":"healthcheck/handler.go:128","msg":"Health Check state change","status":"ready"}
jaeger                           | {"level":"info","ts":1622827525.8484123,"caller":"app/server.go:146","msg":"Starting CMUX server","port":16686}
jaeger                           | {"level":"info","ts":1622827525.84891,"caller":"app/agent.go:69","msg":"Starting jaeger-agent HTTP server","http-port":5778}
jaeger                           | {"level":"info","ts":1622827525.8489993,"caller":"app/server.go:123","msg":"Starting HTTP server","port":16686}
jaeger                           | {"level":"info","ts":1622827525.8490415,"caller":"app/server.go:136","msg":"Starting GRPC server","port":16686}
jaeger                           | {"level":"info","ts":1622827525.8506296,"caller":"base/balancer.go:196","msg":"roundrobinPicker: newPicker called with info: {map[0xc00023e9d0:{{127.0.0.1:14250  <nil> 0 <nil>}}]}","system":"grpc","grpc_log":true}
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Starting Grafana" logger=server version=7.3.5 commit=11f305f88a branch=HEAD compiled=2020-12-09T16:02:57+0000
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config loaded from" logger=settings file=/sg_config_grafana/grafana.ini
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/sg_config_grafana/provisioning"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/sg_config_grafana/provisioning"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Path Provisioning" logger=settings path=/sg_config_grafana/provisioning
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="App mode production" logger=settings
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Starting DB migrations" logger=migrator
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Starting plugin search" logger=plugins
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="Registering plugin" logger=plugins id=input
grafana                          | t=2021-06-04T17:25:21+0000 lvl=eror msg="Failed to read plugin provisioning files from directory" logger=provisioning.plugins path=/sg_config_grafana/provisioning/plugins error="open /sg_config_grafana/provisioning/plugins: no such file or directory"
grafana                          | t=2021-06-04T17:25:21+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3370 protocol=http subUrl=/-/debug/grafana socket=
minio                            | 
minio                            |  You are running an older version of MinIO released 1 month ago 
minio                            |  Update: Run `mc admin update` 
minio                            | 
minio                            | 
minio                            | Endpoint: http://172.23.0.19:9000  http://127.0.0.1:9000 
minio                            | 
minio                            | Browser Access:
minio                            |    http://172.23.0.19:9000  http://127.0.0.1:9000
minio                            | 
minio                            | Object API (Amazon S3 compatible):
minio                            |    Go:         https://docs.min.io/docs/golang-client-quickstart-guide
minio                            |    Java:       https://docs.min.io/docs/java-client-quickstart-guide
minio                            |    Python:     https://docs.min.io/docs/python-client-quickstart-guide
minio                            |    JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
minio                            |    .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
minio                            | IAM initialization complete
pgsql                            | + '[' '!' -d /conf ']'
pgsql                            | + exit 0
pgsql                            | 2021-06-04 17:25:13.621 UTC [1] LOG:  starting PostgreSQL 12.6 (Debian 12.6-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
pgsql                            | 2021-06-04 17:25:13.621 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
pgsql                            | 2021-06-04 17:25:13.621 UTC [1] LOG:  listening on IPv6 address "::", port 5432
pgsql                            | 2021-06-04 17:25:13.623 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
pgsql                            | 2021-06-04 17:25:13.633 UTC [9] LOG:  database system was shut down at 2021-06-04 17:24:28 UTC
pgsql                            | 2021-06-04 17:25:13.639 UTC [1] LOG:  database system is ready to accept connections
redis-store                      | 8:C 04 Jun 2021 17:25:09.459 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-store                      | 8:C 04 Jun 2021 17:25:09.459 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=8, just started
redis-store                      | 8:C 04 Jun 2021 17:25:09.459 # Configuration loaded
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 * Running mode=standalone, port=6379.
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 # Server initialized
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 * Reading RDB preamble from AOF file...
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 * Reading the remaining AOF tail...
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 * DB loaded from append only file: 0.000 seconds
redis-store                      | 8:M 04 Jun 2021 17:25:09.461 * Ready to accept connections
redis-store                      | 8:M 04 Jun 2021 17:25:17.581 * Background append only file rewriting started by pid 12
redis-store                      | 8:M 04 Jun 2021 17:25:17.608 * AOF rewrite child asks to stop sending diffs.
redis-store                      | 12:C 04 Jun 2021 17:25:17.608 * Parent agreed to stop sending diffs. Finalizing AOF...
redis-store                      | 12:C 04 Jun 2021 17:25:17.608 * Concatenating 0.00 MB of AOF diff received from parent.
redis-store                      | 12:C 04 Jun 2021 17:25:17.608 * SYNC append only file rewrite performed
redis-store                      | 12:C 04 Jun 2021 17:25:17.609 * AOF rewrite: 0 MB of memory used by copy-on-write
redis-store                      | 8:M 04 Jun 2021 17:25:17.683 * Background AOF rewrite terminated with success
redis-store                      | 8:M 04 Jun 2021 17:25:17.683 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
redis-store                      | 8:M 04 Jun 2021 17:25:17.683 * Background AOF rewrite finished successfully
repo-updater                     | t=2021-06-04T17:25:24+0000 lvl=eror msg=source.list-repos error="1 error occurred:\n\t* Get \"http://localhost:3434/v1/list-repos\": dial tcp 127.0.0.1:3434: connect: connection refused\n\n"
repo-updater                     | t=2021-06-04T17:25:24+0000 lvl=warn msg="Marked record as errored" name=repo_sync_worker id=3776 err="fetching from code host src serve-git: 1 error occurred:\n\t* Get \"http://localhost:3434/v1/list-repos\": dial tcp 127.0.0.1:3434: connect: connection refused\n\n"
redis-cache                      | 8:C 04 Jun 2021 17:25:13.469 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-cache                      | 8:C 04 Jun 2021 17:25:13.469 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=8, just started
redis-cache                      | 8:C 04 Jun 2021 17:25:13.469 # Configuration loaded
redis-cache                      | 8:M 04 Jun 2021 17:25:13.470 * Running mode=standalone, port=6379.
redis-cache                      | 8:M 04 Jun 2021 17:25:13.470 # Server initialized
redis-cache                      | 8:M 04 Jun 2021 17:25:13.470 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis-cache                      | 8:M 04 Jun 2021 17:25:13.470 * DB loaded from disk: 0.000 seconds
redis-cache                      | 8:M 04 Jun 2021 17:25:13.470 * Ready to accept connections
redis-cache                      | 8:M 04 Jun 2021 17:25:17.582 * Background append only file rewriting started by pid 12
redis-cache                      | 8:M 04 Jun 2021 17:25:17.609 * AOF rewrite child asks to stop sending diffs.
redis-cache                      | 12:C 04 Jun 2021 17:25:17.609 * Parent agreed to stop sending diffs. Finalizing AOF...
redis-cache                      | 12:C 04 Jun 2021 17:25:17.609 * Concatenating 0.00 MB of AOF diff received from parent.
redis-cache                      | 12:C 04 Jun 2021 17:25:17.609 * SYNC append only file rewrite performed
redis-cache                      | 12:C 04 Jun 2021 17:25:17.609 * AOF rewrite: 0 MB of memory used by copy-on-write
redis-cache                      | 8:M 04 Jun 2021 17:25:17.681 * Background AOF rewrite terminated with success
redis-cache                      | 8:M 04 Jun 2021 17:25:17.681 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
redis-cache                      | 8:M 04 Jun 2021 17:25:17.681 * Background AOF rewrite finished successfully
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="waiting for alertmanager" cmd=prom-wrapper
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="running: [/prometheus.sh --web.listen-address=0.0.0.0:9092]" cmd=prom-wrapper
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="running: [/alertmanager.sh --config.file=/sg_config_prometheus/alertmanager.yml --web.route-prefix=/alertmanager --cluster.listen-address=]" cmd=prom-wrapper
prometheus                       | level=info ts=2021-06-04T17:25:18.498Z caller=main.go:216 msg="Starting Alertmanager" version="(version=0.21.0, branch=HEAD, revision=4c6c03ebfe21009c546e4d1e9b92c371d67c021d)"
prometheus                       | level=info ts=2021-06-04T17:25:18.498Z caller=main.go:217 build_context="(go=go1.14.4, user=root@dee35927357f, date=20200617-08:54:02)"
prometheus                       | level=info ts=2021-06-04T17:25:18.522Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/sg_config_prometheus/alertmanager.yml
prometheus                       | level=info ts=2021-06-04T17:25:18.523Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/sg_config_prometheus/alertmanager.yml
prometheus                       | level=info ts=2021-06-04T17:25:18.525Z caller=main.go:485 msg=Listening address=:9093
prometheus                       | level=info ts=2021-06-04T17:25:18.541Z caller=main.go:322 msg="No time or size retention was set so using the default time retention" duration=15d
prometheus                       | level=info ts=2021-06-04T17:25:18.542Z caller=main.go:360 msg="Starting Prometheus" version="(version=2.23.0, branch=HEAD, revision=26d89b4b0776fe4cd5a3656dfa520f119a375273)"
prometheus                       | level=info ts=2021-06-04T17:25:18.542Z caller=main.go:365 build_context="(go=go1.15.5, user=root@37609b3a0a21, date=20201126-10:56:17)"
prometheus                       | level=info ts=2021-06-04T17:25:18.542Z caller=main.go:366 host_details="(Linux 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 eceb621bc4be (none))"
prometheus                       | level=info ts=2021-06-04T17:25:18.542Z caller=main.go:367 fd_limits="(soft=1048576, hard=1048576)"
prometheus                       | level=info ts=2021-06-04T17:25:18.542Z caller=main.go:368 vm_limits="(soft=unlimited, hard=unlimited)"
prometheus                       | level=info ts=2021-06-04T17:25:18.544Z caller=web.go:528 component=web msg="Start listening for connections" address=0.0.0.0:9092
prometheus                       | level=info ts=2021-06-04T17:25:18.544Z caller=main.go:722 msg="Starting TSDB ..."
prometheus                       | level=info ts=2021-06-04T17:25:18.545Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621490400000 maxt=1621555200000 ulid=01F66E969DWAZQD8PNF746DCZ0
prometheus                       | level=info ts=2021-06-04T17:25:18.545Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621555200000 maxt=1621620000000 ulid=01F68BMY1AGSWV8WBP7B3625N9
prometheus                       | level=info ts=2021-06-04T17:25:18.545Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621620000000 maxt=1621684800000 ulid=01F6A9EC64EP0AA0EHSGK32231
prometheus                       | level=info ts=2021-06-04T17:25:18.545Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621684800000 maxt=1621749600000 ulid=01F6C90848GX5TXFY02JV3EY87
prometheus                       | level=info ts=2021-06-04T17:25:18.545Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621749600000 maxt=1621814400000 ulid=01F6E77Z6SVFYHR97PSE763WZ0
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621814400000 maxt=1621879200000 ulid=01F6G2VD11GKCZ7WFWWN9FCJPR
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621879200000 maxt=1621944000000 ulid=01F6J0PJVN26DPYJ2BVS87K85E
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1621944000000 maxt=1622008800000 ulid=01F6KZ80CMWQ0FGE6GYWAXCKZG
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622008800000 maxt=1622073600000 ulid=01F6P1VFY8TC5GZ8C0MD58V7GA
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622073600000 maxt=1622138400000 ulid=01F6QT1K5QMDPEDMWTYMV2VKTW
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622138400000 maxt=1622203200000 ulid=01F6SQTQ788VB4G23AF8W6R577
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622203200000 maxt=1622268000000 ulid=01F6VT2BGCY3N5RRVBX7WETFDZ
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622268000000 maxt=1622332800000 ulid=01F6XKDT7S07Z8JZ1FCSVDAQPP
prometheus                       | level=info ts=2021-06-04T17:25:18.546Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622649600000 maxt=1622656800000 ulid=01F771J353BJK87FAAEHDPNZYJ
prometheus                       | level=info ts=2021-06-04T17:25:18.547Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622332800000 maxt=1622361600000 ulid=01F778DHTH5BWTSGF398AENTN5
prometheus                       | level=info ts=2021-06-04T17:25:18.547Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622656800000 maxt=1622721600000 ulid=01F7966XX6DP1CKGBSB3THAG4D
prometheus                       | level=info ts=2021-06-04T17:25:18.547Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622721600000 maxt=1622786400000 ulid=01F7B8HR5FT87BJWNSPD2VF5VH
prometheus                       | level=info ts=2021-06-04T17:25:18.547Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622808000000 maxt=1622815200000 ulid=01F7BVJNBYPM4C72TJ6N4PJ444
prometheus                       | level=info ts=2021-06-04T17:25:18.547Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622786400000 maxt=1622808000000 ulid=01F7BVJNJKA2HHXZN83XMRVHP0
prometheus                       | level=info ts=2021-06-04T17:25:18.547Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1622815200000 maxt=1622822400000 ulid=01F7BZFM3H6NM28V0TPVWQWNVH
prometheus                       | level=info ts=2021-06-04T17:25:18.577Z caller=head.go:645 component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus                       | level=info ts=2021-06-04T17:25:18.577Z caller=head.go:659 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=162.608µs
prometheus                       | level=info ts=2021-06-04T17:25:18.577Z caller=head.go:665 component=tsdb msg="Replaying WAL, this may take a while"
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=dbug msg="detected alertmanager ready" cmd=prom-wrapper
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="initializing configuration" cmd=prom-wrapper
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="waiting for frontend" cmd=prom-wrapper logger=config-subscriber url=http://sourcegraph-frontend-internal:3090
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=dbug msg="serving endpoints and reverse proxy" cmd=prom-wrapper
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="detected frontend ready, loading initial configuration" cmd=prom-wrapper logger=config-subscriber
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=dbug msg="applying configuration diffs" cmd=prom-wrapper logger=config-subscriber diffs="[{Type:alerts change:0xa66ac0}]"
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=info msg="applying changes for \"alerts\" diff" cmd=prom-wrapper logger=config-subscriber
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=dbug msg="reloading with new configuration" cmd=prom-wrapper logger=config-subscriber
prometheus                       | level=info ts=2021-06-04T17:25:18.675Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/sg_config_prometheus/alertmanager.yml
prometheus                       | level=info ts=2021-06-04T17:25:18.675Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/sg_config_prometheus/alertmanager.yml
prometheus                       | level=warn ts=2021-06-04T17:25:18.676Z caller=head.go:632 component=tsdb msg="Unknown series references" count=24430
prometheus                       | level=info ts=2021-06-04T17:25:18.676Z caller=head.go:691 component=tsdb msg="WAL checkpoint loaded"
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=dbug msg="configuration diffs applied" cmd=prom-wrapper logger=config-subscriber diffs="[{Type:alerts change:0xa66ac0}]" problems=[]
prometheus                       | t=2021-06-04T17:25:18+0000 lvl=dbug msg="config update contained no relevant changes - ignoring" cmd=prom-wrapper logger=config-subscriber
prometheus                       | level=info ts=2021-06-04T17:25:18.683Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=329 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.684Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=330 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.691Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=331 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.692Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=332 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.693Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=333 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.700Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=334 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.706Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=335 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.707Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=336 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.707Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=337 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.708Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=338 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.708Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=339 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.708Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=340 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.713Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=341 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.714Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=342 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.714Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=343 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.715Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=344 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.715Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=345 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.716Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=346 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.716Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=347 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.717Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=348 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.717Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=349 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.718Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=350 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.730Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=351 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.731Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=352 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.732Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=353 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.747Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=354 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.753Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=355 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.754Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=356 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.755Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=357 maxSegment=357
prometheus                       | level=info ts=2021-06-04T17:25:18.755Z caller=head.go:722 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=120.326401ms wal_replay_duration=78.512373ms total_replay_duration=199.039911ms
prometheus                       | level=info ts=2021-06-04T17:25:18.772Z caller=main.go:742 fs_type=EXT4_SUPER_MAGIC
prometheus                       | level=info ts=2021-06-04T17:25:18.772Z caller=main.go:745 msg="TSDB started"
prometheus                       | level=info ts=2021-06-04T17:25:18.772Z caller=main.go:871 msg="Loading configuration file" filename=/sg_config_prometheus/prometheus.yml
prometheus                       | level=info ts=2021-06-04T17:25:18.919Z caller=main.go:902 msg="Completed loading of configuration file" filename=/sg_config_prometheus/prometheus.yml totalDuration=146.938395ms remote_storage=1.642µs web_handler=351ns query_engine=1.144µs scrape=244.873µs scrape_sd=196.401µs notify=36.123µs notify_sd=20.147µs rules=146.030594ms
prometheus                       | level=info ts=2021-06-04T17:25:18.919Z caller=main.go:694 msg="Server is ready to receive web requests."
sourcegraph-frontend-internal    | ERROR: failed to connect to frontend database: DB not available: failed to connect to `host=pgsql user=sg database=sg`: dial error (dial tcp 172.23.0.23:5432: connect: no route to host)
sourcegraph-frontend-internal    | Jaeger URL from env  http://jaeger:16686
sourcegraph-frontend-internal    | ✱ Sourcegraph is ready at: https://breaksourcegraph.com
syntect-server                   | 2021/06/04 17:25:10 worker command: env ROCKET_PORT={{.Port}} /syntect_server
syntect-server                   | 2021/06/04 17:25:10 worker 17: started on port 39717
syntect-server                   | 2021/06/04 17:25:10 worker 18: started on port 45933
syntect-server                   | 2021/06/04 17:25:10 worker 19: started on port 43627
syntect-server                   | 2021/06/04 17:25:10 worker 20: started on port 42677
syntect-server                   | 2021/06/04 17:25:10 worker 18: Configured for production.
syntect-server                   | 2021/06/04 17:25:10 worker 19: Configured for production.
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => address: 0.0.0.0
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => port: 43627
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => log: critical
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => workers: 8
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => secret key: provided
syntect-server                   | 2021/06/04 17:25:10 worker 17: Configured for production.
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => address: 0.0.0.0
syntect-server                   | 2021/06/04 17:25:10 worker 20: Configured for production.
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => address: 0.0.0.0
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => port: 42677
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => log: critical
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => workers: 8
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => secret key: provided
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => limits: forms = 32KiB, json* = 10MiB
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => keep-alive: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 20:     => tls: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => port: 45933
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => log: critical
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => workers: 8
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => secret key: provided
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => limits: forms = 32KiB, json* = 10MiB
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => keep-alive: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 18:     => tls: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => address: 0.0.0.0
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => port: 39717
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => log: critical
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => workers: 8
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => secret key: provided
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => limits: forms = 32KiB, json* = 10MiB
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => keep-alive: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 17:     => tls: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => limits: forms = 32KiB, json* = 10MiB
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => keep-alive: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 19:     => tls: disabled
syntect-server                   | 2021/06/04 17:25:10 worker 17: Rocket has launched from http://0.0.0.0:39717
syntect-server                   | 2021/06/04 17:25:10 worker 20: Rocket has launched from http://0.0.0.0:42677
syntect-server                   | 2021/06/04 17:25:10 worker 19: Rocket has launched from http://0.0.0.0:43627
syntect-server                   | 2021/06/04 17:25:10 worker 18: Rocket has launched from http://0.0.0.0:45933
syntect-server                   | 2021/06/04 17:25:15 request /health http://127.0.0.1:39717
syntect-server                   | 2021/06/04 17:25:20 request /health http://127.0.0.1:39717
syntect-server                   | 2021/06/04 17:25:25 request /health http://127.0.0.1:39717
zoekt-webserver-0                | 2021/06/04 17:25:13 loading 6 shards
zoekt-webserver-0                | 2021/06/04 17:25:13 listening on :6070
+ finish
+ echo exiting...
exiting...
+ exit 0
```

Full startup log attached: unhealthy_sg_frontend.txt

DaedalusG commented 3 years ago

From the errors logged, it appears that `sourcegraph-frontend-internal` attempts to connect to the database before the `pgsql` container has started:

```
sourcegraph-frontend-internal | ERROR: failed to connect to frontend database: DB not available: failed to connect to `host=pgsql user=sg database=sg`: dial error (dial tcp 172.23.0.23:5432: connect: no route to host)
```
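
If that race is the cause, Compose can express the dependency directly: the long `depends_on` syntax gates one service on another's health check. A hedged sketch, assuming the service names above and a 2.x compose file format (where `condition` is supported; the 3.x format dropped it, and the newer Compose Specification restored it):

```yaml
# Sketch only: make the frontend wait for pgsql's healthcheck to pass
# before starting, rather than retrying the connection after boot.
version: '2.4'
services:
  sourcegraph-frontend-0:
    depends_on:
      pgsql:
        condition: service_healthy
```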

daxmc99 commented 3 years ago

https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml#L147

This has been fixed in 3.28. If you cannot upgrade, you can use a `docker-compose.override.yml` in the meantime.
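
A hedged sketch of such an override, assuming the intent is to relax the frontend's health-check timing until you can upgrade (the values below are placeholders, not the exact ones from the 3.28 fix at the link above):

```yaml
# docker-compose.override.yml -- docker-compose merges this over
# docker-compose.yaml automatically when both files share a directory.
# Placeholder values; mirror the 3.28 definition from the linked file.
version: '2.4'
services:
  sourcegraph-frontend-0:
    healthcheck:
      interval: 5s
      timeout: 10s
      retries: 12          # tolerate more failed probes during boot
      start_period: 300s   # grace window before failures count
```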