Closed whowantsmybigdata closed 5 months ago
Thanks for your feedback. If it is a new installation, you only need to modify docker-compose.yml and start seafile-docker:

seafile:
  ports:
    - "8001:80"
  environment:
    - SEAFILE_SERVER_HOSTNAME=192.168.0.2:8001
If it is an upgrade installation, you also need to modify the yml file, and additionally edit the configuration file conf/seahub_settings.py, then restart seafile-server:
SERVICE_URL = "http://192.168.0.2:8001"
FILE_SERVER_ROOT = "http://192.168.0.2:8001/seafhttp"
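Putting the two pieces together, a minimal docker-compose.yml for a custom port might look like the sketch below (the image tag and volume path are assumptions, not taken from this thread; adapt them to your deployment):

```yaml
# Sketch: Seafile published on host port 8001, container still on 80.
services:
  seafile:
    image: seafileltd/seafile-mc:latest   # image tag is an assumption
    ports:
      - "8001:80"                         # host 8001 -> container 80
    volumes:
      - /opt/seafile-data:/shared         # placeholder data path
    environment:
      - SEAFILE_SERVER_HOSTNAME=192.168.0.2:8001
```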
great thanks a lot for helping! really appreciate it.
OK, with all the changes mentioned I don't get a 502 Bad Gateway error anymore, but an SSL_ERROR_RX_RECORD_TOO_LONG instead.
Also, although I set FORCE_HTTPS_IN_CONF=true, the generated nginx conf file had no SSL configuration at all.
OK, never mind, I found the solution myself once I was less tired. For anybody who might stumble across this as well: I had gotten the nginx config wrong.
When using the non-docker version I had:
listen 8001 http2 ssl;
server_name example.com;
Now using docker it only works with:
server {
    listen 80 ssl http2;
    server_name example.com;
}
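For reference, a syntactically complete server block along those lines might look like the sketch below (certificate paths and the internal Seahub port are assumptions, not taken from this thread; note that in stock nginx the port belongs in the listen directive, not in server_name):

```nginx
server {
    # Container listens on 80 with SSL; the host maps it to 8001 via docker-compose.
    listen 80 ssl http2;
    server_name example.com;

    ssl_certificate     /path/to/example.com.crt;   # placeholder path
    ssl_certificate_key /path/to/example.com.key;   # placeholder path

    location / {
        proxy_pass http://127.0.0.1:8000;   # Seahub port: an assumption
    }
}
```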
@whowantsmybigdata Nginx has a well-known feature you might want to use: reverse proxying. In a nutshell, a reverse proxy forwards requests arriving on a standard port, classically 80 (HTTP) or 443 (HTTPS), to a service (or Docker container) listening on some localhost:1234, so that the service is exposed under a nice and clean URL, free of any non-standard port number.
As an example, here's my own Nginx configuration featuring a reverse proxy (note the proxy_pass directive) for my own Seafile container, exposed locally on ports 81 and 442, which I wanted exposed globally on ports 80 and 443, respectively:
server {
    listen 80;
    listen [::]:80 http2 ipv6only=on;
    server_name myseafile.mydomain.com;
    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:81;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name myseafile.mydomain.com;
    client_max_body_size 0;

    location / {
        proxy_pass https://localhost:442;
    }

    ssl_certificate /opt/seafile-data/ssl/myseafile.mydomain.com.crt;
    ssl_certificate_key /opt/seafile-data/ssl/myseafile.mydomain.com.key;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
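If links generated behind the proxy come out with the wrong host or scheme, it often helps to forward the original request details to the backend. A hedged addition to the HTTPS location block (these are standard nginx proxy directives; whether Seafile uses each header depends on its version):

```nginx
location / {
    proxy_pass https://localhost:442;
    # Forward the original host, client IP and scheme to the backend.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```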
Hope it helps.
Thanks a lot for your help! I would definitely use that if I didn't have other services using ports 80 and 443 on the same machine. It's only a Raspberry Pi running a few small-scale cloud services for family and friends. If I were using a VPS or a bigger home server capable of running VMs, I would do it the way you mentioned. But I wouldn't have known how, so thanks for the insight anyway.
Deploying without Docker was working, but since there is no official builder for ARM anymore I wanted to switch, and now I get a

502 Bad Gateway

and errors in seahub-error.log.

System: seafile-mc:latest, so at the time of writing 11.0.2.

I have another service using ports 80 and 443, so I need to use a different one (undockerized I used 8003). Unfortunately there is no manual for that, and I don't understand the solution mentioned here because I have no idea what bootstrap.conf is and didn't manage to get it working while trying to figure it out.
I reused my existing conf, ccnet (as it was asking for it) and seahub-data folders and updated the conf files to my local IP.

I tried the following, in docker-compose.yml under ports:
- "8001:8001" (502 error from nginx, but no log?!)
- "443:443" (port already allocated error from Docker)
- "8001:443" (connection failed)

and:
- SEAFILE_SERVER_HOSTNAME=[my domain]:8001 (as in ccnet.conf)
- SEAFILE_SERVER_HOSTNAME=[my domain]

And in nginx.conf:
- listen 8001 http2 ssl
- proxy_set_header Host $http_host; or proxy_set_header Host $host:8001 (which I was using without Docker)

I don't know what to try next. Switching back to the unofficial non-docker arm64 build for Seafile 10, with the same nginx.conf, same data and same database, works normally.
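The "port already allocated" error from Docker usually means some other process is already bound to that host port. A quick way to check from Python before retrying (a generic sketch, not Seafile-specific):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. a listener is already bound to that port.
        return s.connect_ex((host, port)) == 0
```

Calling port_in_use(443) before starting the container tells you whether the mapping can succeed at all.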
Thanks for your help