When running VinDr Lab with Docker, I get a "502 Bad Gateway" error (nginx/1.25.1) when trying to access http://localhost:8080/auth.
Log from vinlab-nginx:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/06/15 11:37:44 [emerg] 1#1: host not found in upstream "vinlab-dashboard" in /etc/nginx/conf.d/default.conf:15
nginx: [emerg] host not found in upstream "vinlab-dashboard" in /etc/nginx/conf.d/default.conf:15
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/06/15 11:37:45 [emerg] 1#1: host not found in upstream "vinlab-dashboard" in /etc/nginx/conf.d/default.conf:15
nginx: [emerg] host not found in upstream "vinlab-dashboard" in /etc/nginx/conf.d/default.conf:15
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/06/15 11:37:46 [emerg] 1#1: host not found in upstream "vinlab-dashboard" in /etc/nginx/conf.d/default.conf:15
nginx: [emerg] host not found in upstream "vinlab-dashboard" in /etc/nginx/conf.d/default.conf:15
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/06/15 11:37:47 [notice] 1#1: using the "epoll" event method
2023/06/15 11:37:47 [notice] 1#1: nginx/1.25.1
2023/06/15 11:37:47 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2023/06/15 11:37:47 [notice] 1#1: OS: Linux 5.19.0-43-generic
2023/06/15 11:37:47 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/06/15 11:37:47 [notice] 1#1: start worker processes
2023/06/15 11:37:47 [notice] 1#1: start worker process 28
2023/06/15 11:37:47 [notice] 1#1: start worker process 29
2023/06/15 11:37:47 [notice] 1#1: start worker process 30
2023/06/15 11:37:47 [notice] 1#1: start worker process 31
2023/06/15 11:37:47 [notice] 1#1: start worker process 32
2023/06/15 11:37:47 [notice] 1#1: start worker process 33
2023/06/15 11:37:47 [notice] 1#1: start worker process 34
2023/06/15 11:37:47 [notice] 1#1: start worker process 35
2023/06/15 11:39:18 [error] 30#30: *2 connect() failed (113: No route to host) while connecting to upstream, client: 172.22.0.1, server: 0.0.0.0, request: "GET /auth HTTP/1.1", upstream: "http://172.22.0.3:9090/auth", host: "localhost:8080"
172.22.0.1 - - [15/Jun/2023:11:39:18 +0000] "GET /auth HTTP/1.1" 502 559 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36" "-"
172.22.0.1 - - [15/Jun/2023:11:39:18 +0000] "GET /dashboard HTTP/1.1" 200 2775 "http://localhost:8080/auth" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36" "-"
2023/06/15 11:40:12 [error] 32#32: *6 connect() failed (113: No route to host) while connecting to upstream, client: 172.22.0.1, server: 0.0.0.0, request: "GET /auth HTTP/1.1", upstream: "http://172.22.0.3:9090/auth", host: "localhost:8080"
172.22.0.1 - - [15/Jun/2023:11:40:12 +0000] "GET /auth HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0" "-"
172.22.0.1 - - [15/Jun/2023:11:40:12 +0000] "GET /favicon.ico HTTP/1.1" 301 169 "http://localhost:8080/auth" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0" "-"
172.22.0.1 - - [15/Jun/2023:11:40:12 +0000] "GET /dashboard HTTP/1.1" 200 2775 "http://localhost:8080/auth" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/114.0" "-"
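For what it's worth, the crash loop at the top of this log looks like nginx failing its startup DNS check because vinlab-dashboard was not resolvable yet. A sketch of a workaround I am considering (the listen port and the dashboard port 3000 are my assumptions, not taken from the stack's actual default.conf):

```nginx
server {
    listen 8080;

    # Docker's embedded DNS server. Resolving per request lets nginx start
    # even when the upstream container does not exist yet.
    resolver 127.0.0.11 valid=10s;

    location /dashboard {
        # Using a variable defers name resolution from startup to request time.
        set $upstream_dashboard vinlab-dashboard;   # container name from this stack
        proxy_pass http://$upstream_dashboard:3000; # port 3000 is a guess
    }
}
```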
Log from vinlab-keycloak:
Added 'admin' to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json', restart server to load user
-Djboss.http.port=9090 -Dkeycloak.profile.feature.upload_scripts=enabled
11:37:47,331 INFO [org.jboss.modules] (main) JBoss Modules version 1.10.1.Final
java.lang.IllegalStateException: WFLYSRV0126: Could not create server content directory: /opt/jboss/keycloak/standalone/data/content
at org.jboss.as.server@12.0.3.Final//org.jboss.as.server.ServerEnvironment.<init>(ServerEnvironment.java:482)
at org.jboss.as.server@12.0.3.Final//org.jboss.as.server.Main.determineEnvironment(Main.java:388)
at org.jboss.as.server@12.0.3.Final//org.jboss.as.server.Main.main(Main.java:96)
11:37:48,265 FATAL [org.jboss.as.server] (main) WFLYSRV0239: Aborting with exit code 1
at org.jboss.modules.Module.run(Module.java:352)
at org.jboss.modules.Module.run(Module.java:320)
at org.jboss.modules.Main.main(Main.java:617)
User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
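The 502 on /auth looks consistent with the Keycloak failure above: nginx proxies /auth to 172.22.0.3:9090, and Keycloak aborts with WFLYSRV0126 because it cannot create standalone/data/content, which in my experience usually means the mounted data directory is not writable by the container user. A rough check I ran on the host (the path and uid 1000 are assumptions about how the volume is mounted, not taken from the compose file):

```shell
# Hypothetical host path of the bind mount backing /opt/jboss/keycloak/standalone/data
dir=./keycloak-data
mkdir -p "$dir"

# Show numeric owner and permissions of the directory
ls -ldn "$dir"

# If it is not writable by the container user, chown it to that uid
# (1000 is a guess for the jboss user; verify with `docker exec vinlab-keycloak id -u`)
if [ -w "$dir" ]; then
  echo "directory is writable by the current user"
else
  echo "not writable; consider: sudo chown -R 1000:1000 $dir"
fi
```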
Log from vinlab-es:
INFO version[7.9.2], pid[7], build[default/docker/d34da0ea4a966c4e49417f2da2f244e3e97b4e6e/2020-09-23T00:45:33.626720Z], OS[Linux/5.19.0-43-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15/15+36] | type=server timestamp=2023-06-15T11:37:50,285Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010
INFO JVM home [/usr/share/elasticsearch/jdk] | type=server timestamp=2023-06-15T11:37:50,297Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010
INFO JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-4119709204181894045, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -XX:MaxDirectMemorySize=268435456, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true] | type=server timestamp=2023-06-15T11:37:50,305Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [aggs-matrix-stats] | type=server timestamp=2023-06-15T11:37:58,161Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [analysis-common] | type=server timestamp=2023-06-15T11:37:58,161Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [constant-keyword] | type=server timestamp=2023-06-15T11:37:58,161Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [flattened] | type=server timestamp=2023-06-15T11:37:58,162Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [frozen-indices] | type=server timestamp=2023-06-15T11:37:58,162Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [ingest-common] | type=server timestamp=2023-06-15T11:37:58,162Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [ingest-geoip] | type=server timestamp=2023-06-15T11:37:58,162Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [ingest-user-agent] | type=server timestamp=2023-06-15T11:37:58,163Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [kibana] | type=server timestamp=2023-06-15T11:37:58,163Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [lang-expression] | type=server timestamp=2023-06-15T11:37:58,163Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [lang-mustache] | type=server timestamp=2023-06-15T11:37:58,163Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [lang-painless] | type=server timestamp=2023-06-15T11:37:58,164Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [mapper-extras] | type=server timestamp=2023-06-15T11:37:58,164Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [parent-join] | type=server timestamp=2023-06-15T11:37:58,167Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [percolator] | type=server timestamp=2023-06-15T11:37:58,168Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [rank-eval] | type=server timestamp=2023-06-15T11:37:58,168Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [reindex] | type=server timestamp=2023-06-15T11:37:58,168Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [repository-url] | type=server timestamp=2023-06-15T11:37:58,168Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [search-business-rules] | type=server timestamp=2023-06-15T11:37:58,169Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [searchable-snapshots] | type=server timestamp=2023-06-15T11:37:58,169Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [spatial] | type=server timestamp=2023-06-15T11:37:58,169Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [tasks] | type=server timestamp=2023-06-15T11:37:58,169Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [transform] | type=server timestamp=2023-06-15T11:37:58,170Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [transport-netty4] | type=server timestamp=2023-06-15T11:37:58,170Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [vectors] | type=server timestamp=2023-06-15T11:37:58,170Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [wildcard] | type=server timestamp=2023-06-15T11:37:58,170Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-analytics] | type=server timestamp=2023-06-15T11:37:58,171Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-async] | type=server timestamp=2023-06-15T11:37:58,171Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-async-search] | type=server timestamp=2023-06-15T11:37:58,171Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-autoscaling] | type=server timestamp=2023-06-15T11:37:58,171Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-ccr] | type=server timestamp=2023-06-15T11:37:58,171Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-core] | type=server timestamp=2023-06-15T11:37:58,172Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-data-streams] | type=server timestamp=2023-06-15T11:37:58,172Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-deprecation] | type=server timestamp=2023-06-15T11:37:58,173Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-enrich] | type=server timestamp=2023-06-15T11:37:58,173Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-eql] | type=server timestamp=2023-06-15T11:37:58,173Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-graph] | type=server timestamp=2023-06-15T11:37:58,173Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-identity-provider] | type=server timestamp=2023-06-15T11:37:58,174Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-ilm] | type=server timestamp=2023-06-15T11:37:58,174Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-logstash] | type=server timestamp=2023-06-15T11:37:58,174Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-ml] | type=server timestamp=2023-06-15T11:37:58,174Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-monitoring] | type=server timestamp=2023-06-15T11:37:58,174Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-ql] | type=server timestamp=2023-06-15T11:37:58,175Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-rollup] | type=server timestamp=2023-06-15T11:37:58,175Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-security] | type=server timestamp=2023-06-15T11:37:58,175Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-sql] | type=server timestamp=2023-06-15T11:37:58,175Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-stack] | type=server timestamp=2023-06-15T11:37:58,176Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-voting-only-node] | type=server timestamp=2023-06-15T11:37:58,176Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO loaded module [x-pack-watcher] | type=server timestamp=2023-06-15T11:37:58,176Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO no plugins loaded | type=server timestamp=2023-06-15T11:37:58,184Z component=o.e.p.PluginsService cluster.name=docker-cluster node.name=65de48928010
INFO using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/nvme0n1p4_crypt)]], net usable_space [25.8gb], net total_space [232.9gb], types [ext4] | type=server timestamp=2023-06-15T11:37:58,277Z component=o.e.e.NodeEnvironment cluster.name=docker-cluster node.name=65de48928010
INFO heap size [512mb], compressed ordinary object pointers [true] | type=server timestamp=2023-06-15T11:37:58,277Z component=o.e.e.NodeEnvironment cluster.name=docker-cluster node.name=65de48928010
INFO node name [65de48928010], node ID [Ohjn9LONSQKRJPVSAs0mFQ], cluster name [docker-cluster] | type=server timestamp=2023-06-15T11:37:58,560Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010
INFO [controller/232] [Main.cc@114] controller (64 bit): Version 7.9.2 (Build 6a60f0cf2dd5a5) Copyright (c) 2020 Elasticsearch BV | type=server timestamp=2023-06-15T11:38:05,996Z component=o.e.x.m.p.l.CppLogMessageHandler cluster.name=docker-cluster node.name=65de48928010
{"type": "server", "timestamp": "2023-06-15T11:38:06,896Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "docker-cluster", "node.name": "65de48928010", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
INFO creating NettyAllocator with the following configs: [name=unpooled, factors={es.unsafe.use_unpooled_allocator=false, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=512mb}] | type=server timestamp=2023-06-15T11:38:08,576Z component=o.e.t.NettyAllocator cluster.name=docker-cluster node.name=65de48928010
INFO using discovery type [single-node] and seed hosts providers [settings] | type=server timestamp=2023-06-15T11:38:08,682Z component=o.e.d.DiscoveryModule cluster.name=docker-cluster node.name=65de48928010
WARN gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually | type=server timestamp=2023-06-15T11:38:09,531Z component=o.e.g.DanglingIndicesState cluster.name=docker-cluster node.name=65de48928010
INFO initialized | type=server timestamp=2023-06-15T11:38:10,515Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010
INFO starting ... | type=server timestamp=2023-06-15T11:38:10,517Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010
INFO publish_address {172.22.0.4:9300}, bound_addresses {0.0.0.0:9300} | type=server timestamp=2023-06-15T11:38:10,994Z component=o.e.t.TransportService cluster.name=docker-cluster node.name=65de48928010
WARN max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144] | type=server timestamp=2023-06-15T11:38:11,782Z component=o.e.b.BootstrapChecks cluster.name=docker-cluster node.name=65de48928010
INFO cluster UUID [a9OEPj46R5-Nm-d7aHf79Q] | type=server timestamp=2023-06-15T11:38:11,786Z component=o.e.c.c.Coordinator cluster.name=docker-cluster node.name=65de48928010
INFO elected-as-master ([1] nodes joined)[{65de48928010}{Ohjn9LONSQKRJPVSAs0mFQ}{wL9SdK5CRwGRZrPCRbXxmA}{172.22.0.4}{172.22.0.4:9300}{dilmrt}{ml.machine_memory=16081444864, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 4, version: 43, delta: master node changed {previous [], current [{65de48928010}{Ohjn9LONSQKRJPVSAs0mFQ}{wL9SdK5CRwGRZrPCRbXxmA}{172.22.0.4}{172.22.0.4:9300}{dilmrt}{ml.machine_memory=16081444864, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]} | type=server timestamp=2023-06-15T11:38:12,026Z component=o.e.c.s.MasterService cluster.name=docker-cluster node.name=65de48928010
INFO master node changed {previous [], current [{65de48928010}{Ohjn9LONSQKRJPVSAs0mFQ}{wL9SdK5CRwGRZrPCRbXxmA}{172.22.0.4}{172.22.0.4:9300}{dilmrt}{ml.machine_memory=16081444864, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 4, version: 43, reason: Publication{term=4, version=43} | type=server timestamp=2023-06-15T11:38:12,242Z component=o.e.c.s.ClusterApplierService cluster.name=docker-cluster node.name=65de48928010
INFO publish_address {172.22.0.4:9200}, bound_addresses {0.0.0.0:9200} | type=server timestamp=2023-06-15T11:38:12,453Z component=o.e.h.AbstractHttpServerTransport cluster.name=docker-cluster node.name=65de48928010 cluster.uuid=a9OEPj46R5-Nm-d7aHf79Q node.id=Ohjn9LONSQKRJPVSAs0mFQ
INFO started | type=server timestamp=2023-06-15T11:38:12,455Z component=o.e.n.Node cluster.name=docker-cluster node.name=65de48928010 cluster.uuid=a9OEPj46R5-Nm-d7aHf79Q node.id=Ohjn9LONSQKRJPVSAs0mFQ
INFO license [6b0f0d4e-d0f3-440d-83a0-2205fbc3cb41] mode [basic] - valid | type=server timestamp=2023-06-15T11:38:13,032Z component=o.e.l.LicenseService cluster.name=docker-cluster node.name=65de48928010 cluster.uuid=a9OEPj46R5-Nm-d7aHf79Q node.id=Ohjn9LONSQKRJPVSAs0mFQ
INFO Active license is now [BASIC]; Security is disabled | type=server timestamp=2023-06-15T11:38:13,035Z component=o.e.x.s.s.SecurityStatusChangeListener cluster.name=docker-cluster node.name=65de48928010 cluster.uuid=a9OEPj46R5-Nm-d7aHf79Q node.id=Ohjn9LONSQKRJPVSAs0mFQ
INFO recovered [0] indices into cluster_state | type=server timestamp=2023-06-15T11:38:13,050Z component=o.e.g.GatewayService cluster.name=docker-cluster node.name=65de48928010 cluster.uuid=a9OEPj46R5-Nm-d7aHf79Q node.id=Ohjn9LONSQKRJPVSAs0mFQ
INFO low disk watermark [85%] exceeded on [Ohjn9LONSQKRJPVSAs0mFQ][65de48928010][/usr/share/elasticsearch/data/nodes/0] free: 25.7gb[11%], replicas will not be assigned to this node | type=server timestamp=2023-06-15T11:38:42,402Z component=o.e.c.r.a.DiskThresholdMonitor cluster.name=docker-cluster node.name=65de48928010 cluster.uuid=a9OEPj46R5-Nm-d7aHf79Q node.id=Ohjn9LONSQKRJPVSAs0mFQ
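Separately, the bootstrap-check warning above (vm.max_map_count [65530] is too low) is about the Docker host's kernel, not the container; my understanding is that it is fixed with a host-level sysctl, e.g.:

```ini
# /etc/sysctl.conf on the Docker host (apply immediately with: sudo sysctl -p)
vm.max_map_count = 262144
```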
Log from vinlab-rqlite:
[rqlited] 2023/06/15 11:37:45 rqlited starting, version v5.10.2, commit 125ae547879fc5a5b2cbb672b8f5011c171e5907, branch master
[rqlited] 2023/06/15 11:37:45 go1.15, target architecture is amd64, operating system target is linux
[rqlited] 2023/06/15 11:37:45 launch command: rqlited -http-addr 0.0.0.0:4001 -raft-addr 0.0.0.0:4002 /rqlite/file/data
[rqlited] 2023/06/15 11:37:45 no preexisting node state detected in /rqlite/file/data, node may be bootstrapping
[rqlited] 2023/06/15 11:37:45 no join addresses set
[store] 2023/06/15 11:37:45 opening store with node ID 0.0.0.0:4002
[store] 2023/06/15 11:37:45 ensuring directory at /rqlite/file/data exists
[store] 2023/06/15 11:37:45 0 preexisting snapshots present
[store] 2023/06/15 11:37:45 first log index: 0, last log index: 0, last command log index: 0:
2023-06-15T11:37:45.419Z [INFO] raft: initial configuration: index=0 servers=[]
[store] 2023/06/15 11:37:45 executing new cluster bootstrap
2023-06-15T11:37:45.419Z [INFO] raft: entering follower state: follower="Node at [::]:4002 [Follower]" leader=
2023-06-15T11:37:46.743Z [WARN] raft: heartbeat timeout reached, starting election: last-leader=
2023-06-15T11:37:46.743Z [INFO] raft: entering candidate state: node="Node at [::]:4002 [Candidate]" term=2
2023-06-15T11:37:46.749Z [INFO] raft: election won: tally=1
2023-06-15T11:37:46.749Z [INFO] raft: entering leader state: leader="Node at [::]:4002 [Leader]"
[store] 2023/06/15 11:37:46 waiting for up to 2m0s for application of initial logs
[rqlited] 2023/06/15 11:37:46 node is ready
Log from vinlab-redis:
1:C 15 Jun 2023 11:37:44.973 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 15 Jun 2023 11:37:44.973 # Redis version=7.0.11, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 15 Jun 2023 11:37:44.973 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 15 Jun 2023 11:37:44.973 monotonic clock: POSIX clock_gettime
1:M 15 Jun 2023 11:37:44.974 Running mode=standalone, port=6379.
1:M 15 Jun 2023 11:37:44.974 # Server initialized
1:M 15 Jun 2023 11:37:44.974 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 15 Jun 2023 11:37:44.974 * Ready to accept connections
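Likewise, the Redis overcommit warning above points at the same host-level sysctl file; the warning message itself suggests the fix:

```ini
# /etc/sysctl.conf on the Docker host, as suggested by the Redis warning
vm.overcommit_memory = 1
```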
Log from vinlab-orthanc:
Generating random hostid in /etc/hostid: ee2a3639
Startup command: exec "Orthanc /tmp/orthanc.json"
W0615 11:37:45.560963 main.cpp:2034] Orthanc version: 1.12.0
W0615 11:37:45.561204 OrthancConfiguration.cpp:57] Reading the configuration from: "/tmp/orthanc.json"
W0615 11:37:45.611140 main.cpp:911] Loading plugin(s) from: /run/orthanc/plugins
E0615 11:37:45.611199 PluginsManager.cpp:234] Inexistent path to plugins: /run/orthanc/plugins
W0615 11:37:45.611207 main.cpp:911] Loading plugin(s) from: /usr/share/orthanc/plugins
W0615 11:37:45.632410 PluginsManager.cpp:261] Registering plugin 'gdcm' (version 1.5)
W0615 11:37:45.632671 PluginsManager.cpp:157] Orthanc will use GDCM to decode transfer syntax: 1.2.840.10008.1.2.4.90
W0615 11:37:45.632686 PluginsManager.cpp:157] Orthanc will use GDCM to decode transfer syntax: 1.2.840.10008.1.2.4.91
W0615 11:37:45.632691 PluginsManager.cpp:157] Orthanc will use GDCM to decode transfer syntax: 1.2.840.10008.1.2.4.92
W0615 11:37:45.632695 PluginsManager.cpp:157] Orthanc will use GDCM to decode transfer syntax: 1.2.840.10008.1.2.4.93
W0615 11:37:45.632702 PluginsManager.cpp:157] Throttling GDCM to 4 concurrent thread(s)
W0615 11:37:45.632718 PluginsManager.cpp:157] Version of GDCM: 3.0.10
W0615 11:37:45.633194 PluginsManager.cpp:261] Registering plugin 'orthanc-explorer-2' (version 0.9.3)
W0615 11:37:45.633465 PluginsManager.cpp:157] Root URI to the Orthanc-Explorer 2 application: /ui/
W0615 11:37:45.634000 PluginsManager.cpp:261] Registering plugin 'dicom-web' (version 1.13)
W0615 11:37:45.634149 PluginsManager.cpp:157] URI to the DICOMweb REST API: /dicom-web/
W0615 11:37:45.634448 PluginsManager.cpp:157] DICOMWeb PublicRoot: /dicom-web/
W0615 11:37:45.634458 PluginsManager.cpp:157] URI to the WADO-URI API: /wado
W0615 11:37:45.634482 OrthancInitialization.cpp:420] SQLite index directory: "/var/lib/orthanc/db"
W0615 11:37:45.634786 OrthancInitialization.cpp:519] Storage directory: "/var/lib/orthanc/db"
W0615 11:37:45.647339 HttpClient.cpp:1194] HTTPS will use the CA certificates from this file: /etc/ssl/certs/ca-certificates.crt
W0615 11:37:45.647990 LuaContext.cpp:94] Lua says: Lua toolbox installed
W0615 11:37:45.648224 LuaContext.cpp:94] Lua says: Lua toolbox installed
W0615 11:37:45.648482 ServerContext.cpp:515] Disk compression is disabled
W0615 11:37:45.648493 ServerIndex.cpp:381] No limit on the number of stored patients
W0615 11:37:45.648497 ServerIndex.cpp:401] No limit on the size of the storage area
W0615 11:37:45.648501 ServerIndex.cpp:420] Maximum Storage mode: Recycle
W0615 11:37:45.648976 JobsEngine.cpp:272] The jobs engine has started with 2 threads
W0615 11:37:45.649295 main.cpp:1317] DICOM server listening with AET ORTHANC on port: 4242
W0615 11:37:45.649319 HttpServer.cpp:2036] HTTP compression is enabled
W0615 11:37:45.649325 main.cpp:1048] ====> Remote access is enabled while user authentication is explicitly disabled, your setup is POSSIBLY INSECURE <====
W0615 11:37:45.649331 main.cpp:1172] Remote LUA script execution is disabled
W0615 11:37:45.649334 main.cpp:1184] REST API can not write to the file system.
W0615 11:37:45.651143 HttpServer.cpp:1794] HTTP server listening on port: 8042 (HTTPS encryption is disabled, remote access is allowed)
W0615 11:37:45.651186 main.cpp:923] Orthanc has started
Log from vinlab-idgen:
2023/06/15 11:37:45 [INFO] [main.go:63] API is running in [development] mode
Creating single instance now.
Cannot skip TLS
Log from vinlab-api:
2023/06/15 11:37:46 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:46 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:37:47 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:47 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:37:48 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:48 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:37:49 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:49 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:37:51 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:51 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:37:54 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:54 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:37:58 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:37:58 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:38:05 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:38:05 [INFO] [main.go:97] [http://vinlab-es:9200]
panic: Cannot connect to ES
goroutine 1 [running]:
main.main()
/opt/app/main.go:104 +0x20f6
2023/06/15 11:38:19 [INFO] [main.go:76] API is running in [production] mode
2023/06/15 11:38:19 [INFO] [main.go:97] [http://vinlab-es:9200]
2023/06/15 11:38:19 [ERROR] [main.main:/opt/app/main.go:134] 404 Not Found ERROR putting template es_template_annotation
2023/06/15 11:38:19 [ERROR] [main.main:/opt/app/main.go:135] 400 Bad Request ERROR putting template es_template_annotation
2023/06/15 11:38:19 [INFO] [main.go:145] vinlab-minio:9000
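The API's panic loop ("Cannot connect to ES") resolves itself once Elasticsearch finishes starting at 11:38, so that part may just be a startup race. A compose-level way to serialize this that I am considering (the service names are from this stack, but the healthcheck itself is my addition and assumes a recent docker compose that supports `condition: service_healthy`):

```yaml
  vinlab-es:
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 30
  vinlab-api:
    depends_on:
      vinlab-es:
        condition: service_healthy
```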
Log from vinlab-dashboard:
vindr-labeling-studylist@0.1.0 build:prod /app
set "GENERATE_SOURCEMAP=false" && craco build prod
Creating an optimized production build...
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Why you should do it regularly:
https://github.com/browserslist/browserslist#browsers-data-updating
Compiled successfully.
File sizes after gzip:
358.4 KB build/static/js/2.7245795b.chunk.js
41.76 KB build/static/css/2.6007f011.chunk.css
40.84 KB build/static/js/main.40ee429d.chunk.js
5.93 KB build/static/css/main.8f93c4f3.chunk.css
796 B build/static/js/runtime-main.8580f7f4.js
The project was built assuming it is hosted at /dashboard/.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
Find out more about deployment here:
https://cra.link/deployment