krono-i2 opened this issue 2 years ago
I suppose there is a better way to do it, but I have made it work this way:
[...]
VIRTUAL_HOST=www.example.com
VIRTUAL_PORT=80
LETSENCRYPT_EMAIL=example@example.com
LETSENCRYPT_HOST=www.example.com
[...]
[...]
  expose:
    - "${SHARELATEX_LISTEN_IP:-127.0.0.1}:${SHARELATEX_PORT:-80}:80"
[...]
  networks:
    - overleaf_default

networks:
  overleaf_default:
    external: true
docker network create overleaf_default
docker network connect overleaf_default nginx-proxy
bin/up -d
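For reference, a minimal sketch of the proxy side this assumes: an existing nginx-proxy plus its ACME companion, joined to the toolkit's overleaf_default network (here via the compose file, which is equivalent to the docker network connect command above). Image names and volume layout follow the nginx-proxy documentation; the service names are illustrative, not taken from this thread.

# docker-compose.yml of the separate reverse-proxy stack (a sketch, not the toolkit's file)
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets nginx-proxy watch container events
    networks:
      - overleaf_default

  acme-companion:
    image: nginxproxy/acme-companion
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy          # which proxy container to reload
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  certs:
  vhost:
  html:
  acme:

networks:
  overleaf_default:
    external: true   # created beforehand with: docker network create overleaf_default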
OK, let me explain what I've done following your approach. I added some variables to the config/variables.env file:
[...]
VIRTUAL_HOST=sub.domain.com
VIRTUAL_PORT=8090
VIRTUAL_PATH=/tex/
VIRTUAL_DEST=/
LETSENCRYPT_HOST=sub.domain.com
[...]
I modified the port in the config/overleaf.rc file:
[...]
SHARELATEX_LISTEN_IP=0.0.0.0
SHARELATEX_PORT=8090
[...]
I modified the lib/docker-compose.base.yml file as follows:
services:
  sharelatex:
    [...]
    networks:
      - overleaf_default
      - proxy_frontend

networks:
  proxy_frontend:
    external: true
  overleaf_default:
    external: true
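Since both networks are declared external here, docker compose will not create them; something along these lines (mirroring the commands from the first comment, and assuming the proxy container is named nginx-proxy) is needed before starting the toolkit:

# create the external networks referenced by the compose file
docker network create overleaf_default
docker network create proxy_frontend

# attach the existing reverse proxy to the shared frontend network
docker network connect proxy_frontend nginx-proxy

# then bring the toolkit up
bin/up -d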
It seems to run smoothly:
# bin/up
Starting redis ... done
Starting mongo ... done
Starting sharelatex ... done
Attaching to redis, mongo, sharelatex
mongo |
mongo | WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
mongo | see https://jira.mongodb.org/browse/SERVER-54407
mongo | see also https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610/2
mongo | see also https://github.com/docker-library/mongo/issues/485#issuecomment-891991814
mongo |
mongo | 2022-01-14T11:03:48.733+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongo | 2022-01-14T11:03:48.736+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=f1dd876c9cf7
mongo | 2022-01-14T11:03:48.736+0000 I CONTROL [initandlisten] db version v4.0.27
mongo | 2022-01-14T11:03:48.736+0000 I CONTROL [initandlisten] git version: d47b151b55f286546e7c7c98888ae0577856ca20
mongo | 2022-01-14T11:03:48.736+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] allocator: tcmalloc
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] modules: none
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] build environment:
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] distmod: ubuntu1604
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] distarch: x86_64
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] target_arch: x86_64
mongo | 2022-01-14T11:03:48.737+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
mongo | 2022-01-14T11:03:48.737+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongo | 2022-01-14T11:03:48.737+0000 I STORAGE [initandlisten]
mongo | 2022-01-14T11:03:48.737+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
mongo | 2022-01-14T11:03:48.737+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
mongo | 2022-01-14T11:03:48.737+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=5375M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
mongo | 2022-01-14T11:03:49.624+0000 I STORAGE [initandlisten] WiredTiger message [1642158229:624541][1:0x7f3a85926a80], txn-recover: Main recovery loop: starting at 4/341248 to 5/256
mongo | 2022-01-14T11:03:49.718+0000 I STORAGE [initandlisten] WiredTiger message [1642158229:718476][1:0x7f3a85926a80], txn-recover: Recovering log 4 through 5
redis | 1:C 14 Jan 2022 11:03:48.541 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 14 Jan 2022 11:03:48.541 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 14 Jan 2022 11:03:48.541 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
mongo | 2022-01-14T11:03:49.808+0000 I STORAGE [initandlisten] WiredTiger message [1642158229:808169][1:0x7f3a85926a80], txn-recover: Recovering log 5 through 5
mongo | 2022-01-14T11:03:49.855+0000 I STORAGE [initandlisten] WiredTiger message [1642158229:855372][1:0x7f3a85926a80], txn-recover: Set global recovery timestamp: 0
redis | 1:M 14 Jan 2022 11:03:48.542 * Running mode=standalone, port=6379.
redis | 1:M 14 Jan 2022 11:03:48.542 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis | 1:M 14 Jan 2022 11:03:48.542 # Server initialized
redis | 1:M 14 Jan 2022 11:03:48.542 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
mongo | 2022-01-14T11:03:50.110+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
sharelatex | *** Running /etc/my_init.d/00_make_sharelatex_data_dirs.sh...
redis | 1:M 14 Jan 2022 11:03:48.542 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
mongo | 2022-01-14T11:03:50.111+0000 I STORAGE [initandlisten] Starting to check the table logging settings for existing WiredTiger tables
redis | 1:M 14 Jan 2022 11:03:48.543 * DB loaded from disk: 0.000 seconds
redis | 1:M 14 Jan 2022 11:03:48.543 * Ready to accept connections
mongo | 2022-01-14T11:03:50.159+0000 I CONTROL [initandlisten]
mongo | 2022-01-14T11:03:50.159+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
mongo | 2022-01-14T11:03:50.159+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten]
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten]
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten]
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
mongo | 2022-01-14T11:03:50.160+0000 I CONTROL [initandlisten]
mongo | 2022-01-14T11:03:50.223+0000 I STORAGE [initandlisten] Finished adjusting the table logging settings for existing WiredTiger tables
mongo | 2022-01-14T11:03:50.225+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongo | 2022-01-14T11:03:50.228+0000 I NETWORK [initandlisten] waiting for connections on port 27017
mongo | 2022-01-14T11:03:58.356+0000 I NETWORK [listener] connection accepted from 127.0.0.1:41708 #1 (1 connection now open)
mongo | 2022-01-14T11:03:58.369+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:41708 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.27" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo | 2022-01-14T11:03:58.376+0000 I NETWORK [conn1] end connection 127.0.0.1:41708 (0 connections now open)
sharelatex | *** Running /etc/my_init.d/00_regen_sharelatex_secrets.sh...
sharelatex | *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
sharelatex | *** Running /etc/my_init.d/00_set_docker_host_ipaddress.sh...
sharelatex | *** Running /etc/my_init.d/01_nginx_config_template.sh...
sharelatex | Nginx: generating config file from template
sharelatex | Nginx: reloading config
sharelatex | * Reloading nginx configuration nginx
sharelatex | ...done.
sharelatex | *** Running /etc/my_init.d/10_delete_old_logs.sh...
sharelatex | *** Running /etc/my_init.d/10_syslog-ng.init...
sharelatex | Jan 14 11:04:00 3bd16ba856ea syslog-ng[58]: syslog-ng starting up; version='3.13.2'
sharelatex | *** Running /etc/my_init.d/98_check_db_access.sh...
sharelatex | Checking can connect to mongo and redis
sharelatex | Using default settings from /var/www/sharelatex/web/config/settings.defaults.js
sharelatex | Using settings from /etc/sharelatex/settings.js
mongo | 2022-01-14T11:04:01.452+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51798 #2 (1 connection now open)
mongo | 2022-01-14T11:04:01.459+0000 I NETWORK [conn2] received client metadata from 172.30.9.4:51798 conn2: { driver: { name: "nodejs", version: "3.6.2" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)", application: { name: "web" } }
sharelatex | Mongodb is up.
mongo | 2022-01-14T11:04:01.469+0000 I NETWORK [conn2] end connection 172.30.9.4:51798 (0 connections now open)
sharelatex | Using default settings from /var/www/sharelatex/web/config/settings.defaults.js
sharelatex | Using settings from /etc/sharelatex/settings.js
sharelatex | Redis is up.
sharelatex | All checks passed
sharelatex | *** Running /etc/my_init.d/99_run_web_migrations.sh...
sharelatex | Running migrations for server-ce
sharelatex |
sharelatex | > web-overleaf@0.1.4 migrations /var/www/sharelatex/web
sharelatex | > east "migrate" "-t" "server-ce"
sharelatex |
sharelatex | Using default settings from /var/www/sharelatex/web/config/settings.defaults.js
sharelatex | Using settings from /etc/sharelatex/settings.js
mongo | 2022-01-14T11:04:02.301+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51806 #3 (1 connection now open)
mongo | 2022-01-14T11:04:02.308+0000 I NETWORK [conn3] received client metadata from 172.30.9.4:51806 conn3: { driver: { name: "nodejs", version: "3.6.2" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)", application: { name: "web" } }
mongo | 2022-01-14T11:04:02.322+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51808 #4 (2 connections now open)
mongo | 2022-01-14T11:04:02.323+0000 I NETWORK [conn4] received client metadata from 172.30.9.4:51808 conn4: { driver: { name: "nodejs", version: "3.6.2" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)", application: { name: "web" } }
mongo | 2022-01-14T11:04:03.067+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51812 #5 (3 connections now open)
sharelatex | Nothing to migrate
mongo | 2022-01-14T11:04:03.094+0000 I NETWORK [conn5] end connection 172.30.9.4:51812 (2 connections now open)
mongo | 2022-01-14T11:04:03.094+0000 I NETWORK [conn4] end connection 172.30.9.4:51808 (1 connection now open)
mongo | 2022-01-14T11:04:03.094+0000 I NETWORK [conn3] end connection 172.30.9.4:51806 (0 connections now open)
sharelatex | Finished migrations
sharelatex | *** Booting runit daemon...
sharelatex | *** Runit started as PID 114
sharelatex | Jan 14 11:04:03 3bd16ba856ea cron[132]: (CRON) INFO (pidfile fd = 3)
sharelatex | Jan 14 11:04:03 3bd16ba856ea cron[132]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
mongo | 2022-01-14T11:04:04.323+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51834 #6 (1 connection now open)
mongo | 2022-01-14T11:04:04.331+0000 I NETWORK [conn6] received client metadata from 172.30.9.4:51834 conn6: { driver: { name: "nodejs", version: "3.6.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:04.354+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51836 #7 (2 connections now open)
mongo | 2022-01-14T11:04:04.363+0000 I NETWORK [conn7] received client metadata from 172.30.9.4:51836 conn7: { driver: { name: "nodejs", version: "3.6.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:04.398+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51838 #8 (3 connections now open)
mongo | 2022-01-14T11:04:04.405+0000 I NETWORK [conn8] received client metadata from 172.30.9.4:51838 conn8: { driver: { name: "nodejs", version: "3.6.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:04.408+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51840 #9 (4 connections now open)
mongo | 2022-01-14T11:04:04.415+0000 I NETWORK [conn9] received client metadata from 172.30.9.4:51840 conn9: { driver: { name: "nodejs", version: "3.6.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:04.464+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51842 #10 (5 connections now open)
mongo | 2022-01-14T11:04:04.474+0000 I NETWORK [conn10] received client metadata from 172.30.9.4:51842 conn10: { driver: { name: "nodejs", version: "3.6.0" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:04.579+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51844 #11 (6 connections now open)
mongo | 2022-01-14T11:04:04.589+0000 I NETWORK [conn11] received client metadata from 172.30.9.4:51844 conn11: { driver: { name: "nodejs", version: "3.6.6" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:04.863+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51884 #12 (7 connections now open)
mongo | 2022-01-14T11:04:04.872+0000 I NETWORK [conn12] received client metadata from 172.30.9.4:51884 conn12: { driver: { name: "nodejs", version: "3.6.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)" }
mongo | 2022-01-14T11:04:06.628+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51900 #13 (8 connections now open)
mongo | 2022-01-14T11:04:06.628+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51902 #14 (9 connections now open)
mongo | 2022-01-14T11:04:06.637+0000 I NETWORK [conn13] received client metadata from 172.30.9.4:51900 conn13: { driver: { name: "nodejs|Mongoose", version: "3.6.2" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)", version: "3.6.2|5.10.9", application: { name: "web" } }
mongo | 2022-01-14T11:04:06.638+0000 I NETWORK [conn14] received client metadata from 172.30.9.4:51902 conn14: { driver: { name: "nodejs", version: "3.6.2" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)", application: { name: "web" } }
mongo | 2022-01-14T11:04:06.661+0000 I NETWORK [listener] connection accepted from 172.30.9.4:51920 #15 (10 connections now open)
mongo | 2022-01-14T11:04:06.662+0000 I NETWORK [conn15] received client metadata from 172.30.9.4:51920 conn15: { driver: { name: "nodejs|Mongoose", version: "3.6.2" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.18.0-348.7.1.el8_5.x86_64" }, platform: "'Node.js v12.22.5, LE (unified)", version: "3.6.2|5.10.9", application: { name: "web" } }
mongo | 2022-01-14T11:04:08.624+0000 I NETWORK [listener] connection accepted from 127.0.0.1:41860 #16 (11 connections now open)
mongo | 2022-01-14T11:04:08.637+0000 I NETWORK [conn16] received client metadata from 127.0.0.1:41860 conn16: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.27" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo | 2022-01-14T11:04:08.644+0000 I NETWORK [conn16] end connection 127.0.0.1:41860 (10 connections now open)
mongo | 2022-01-14T11:04:19.210+0000 I NETWORK [listener] connection accepted from 127.0.0.1:41916 #17 (11 connections now open)
mongo | 2022-01-14T11:04:19.224+0000 I NETWORK [conn17] received client metadata from 127.0.0.1:41916 conn17: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.27" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
mongo | 2022-01-14T11:04:19.231+0000 I NETWORK [conn17] end connection 127.0.0.1:41916 (10 connections now open)
The problem is that when I try to load https://sub.domain.com/tex in the browser, I get a 502 Bad Gateway error.
What is wrong?
Thank you.
Ivan
Any update on this issue? I encountered the same 502 Bad Gateway error when I tried to reach the installation through an existing nginx-proxy.
Thanks.
No news...
Sorry, I have since switched to deploying this directly on Kubernetes with cert-manager and Traefik. In any case, I did not try the configuration with path-based routing... I would first try the changes I described initially (even with VIRTUAL_PORT=80), check whether it works that way, and once it is working, look into the changes needed for path-based routing.
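As a concrete starting point for that, a sketch of the config/variables.env additions for plain whole-subdomain routing through nginx-proxy (no VIRTUAL_PATH yet; hostname and email are placeholders):

# proxy the whole subdomain to the container's port 80
VIRTUAL_HOST=overleaf.example.com
VIRTUAL_PORT=80
LETSENCRYPT_HOST=overleaf.example.com
LETSENCRYPT_EMAIL=admin@example.com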
Having the same issue. Where is VIRTUAL_PATH even used? I searched both repos and don't see it referenced anywhere.
I've been trying to get Overleaf to run on a local server that hosts other websites, so I can't just route all traffic from /. I'm guessing VIRTUAL_PATH is the base path of the Overleaf server, e.g. localhost:80/$VIRTUAL_PATH/login?
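For what it's worth, VIRTUAL_PATH and VIRTUAL_DEST are read by nginx-proxy itself (path-based routing, mentioned in this thread as coming from nginx-proxy:dev), not by anything in the Overleaf repos, which is why searching those turns up nothing. A sketch of how they are typically set on a proxied container, as I understand the feature (values are illustrative):

# environment of the proxied container, consumed by nginx-proxy
VIRTUAL_HOST=sub.domain.com   # requests for this host...
VIRTUAL_PATH=/tex/            # ...under this path prefix...
VIRTUAL_DEST=/                # ...are forwarded to the container with the prefix rewritten to /

With that, nginx-proxy would map https://sub.domain.com/tex/login to /login on the container.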
Same issue
@Kr0n0 I am aiming for a docker-compose installation behind Traefik. You seem to be the first person I could find online who has done this with overleaf-toolkit. Is there any chance that you could bless me with your knowledge?
Also a quick question, since I am experimenting with authentik at the moment: SAML and OpenID Connect are only supported if I pay for Overleaf Pro, is that correct?
I switched from an old ShareLaTeX installation with a manually created docker-compose.yml and a Traefik 2.x config to the overleaf-toolkit, and this is how I got it running:
- config/overleaf.rc - unchanged
- config/variables.env - unchanged
- lib/docker-compose.base.yml - disabled the ports section:
# ports:
# - "${OVERLEAF_LISTEN_IP:-127.0.0.1}:${OVERLEAF_PORT:-80}:80"
and added
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.overleaf.rule=Host(`overleaf.myhost.net`)"
    - "traefik.http.services.overleaf.loadbalancer.server.port=80"
    - "traefik.docker.network=traefik_default"
  networks:
    - traefik_default
    - backend

networks:
  backend:
    internal: true
  traefik_default:
    external: true
(traefik_default is my external Traefik network.)
- lib/docker-compose.mongo.yml - added:

  networks:
    - backend

- lib/docker-compose.redis.yml - added:

  networks:
    - backend
For Traefik 3.x you might need to adjust the config. As I don't have time to switch to 3.x in the near future, that is all I can provide!
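For completeness, a rough sketch of the Traefik 2.x side these labels assume: docker provider enabled, containers not exposed by default, and Traefik attached to the same traefik_default network. TLS/ACME configuration is omitted and the names are illustrative.

# docker-compose.yml of the separate Traefik stack (a sketch)
services:
  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"   # only route containers with traefik.enable=true
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik_default

networks:
  traefik_default:
    external: true   # e.g. created with: docker network create traefik_default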
Hello! I want to install Overleaf behind an existing nginx-proxy that currently serves several other applications on different subdomains and paths (VIRTUAL_PATH, from nginx-proxy:dev). See below the proxy's container configuration:
As you can see, the proxy also provides SSL certificates. When I want to run an application behind my existing proxy, I use the following directive in docker-compose.yml:
How can I implement the same with the overleaf-toolkit installation? Thanks for your help!
Ivan