Closed by c-gabri 1 month ago
Closed by mistake, sorry. Charts and notifications seem to be working with:
OPENWISP_MONITORING_API_BASEURL="http://<DASHBOARD_DOMAIN>:8081"
OPENWISP_NOTIFICATIONS_HOST="http://<DASHBOARD_DOMAIN>:8081"
If instead of <DASHBOARD_DOMAIN> I use <API_DOMAIN>, as is probably intended, I get some errors about CORS which I cannot get rid of using DJANGO_CORS_HOSTS or other variables.
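For example, I tried values along these lines in .env (a sketch only; I'm assuming DJANGO_CORS_HOSTS takes a comma-separated list of origins, and the exact combinations I attempted varied):
# .env (sketch): attempted CORS whitelist, without success
DJANGO_CORS_HOSTS=http://<DASHBOARD_DOMAIN>:8081,http://<API_DOMAIN>:8081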
I still can't get the openvpn container to load though; these are its logs:
Waiting for dashboard to become available...
Connection with dashboard established.
Enabling IPv6 Forwarding
sysctl: error setting key 'net.ipv6.conf.all.disable_ipv6': Read-only file system
Failed to enable IPv6 support
sysctl: error setting key 'net.ipv6.conf.default.forwarding': Read-only file system
Failed to enable IPv6 Forwarding default
Failed to enable IPv6 Forwarding
sysctl: error setting key 'net.ipv6.conf.all.forwarding': Read-only file system
tar: invalid magic
tar: short read
Internal Server ErrorWaiting for dashboard to become available...
Setting privileged: true in the openvpn section of the docker-compose.yml file, as suggested here (see the snippet after the logs), changes them to:
Waiting for dashboard to become available...
Connection with dashboard established.
Enabling IPv6 Forwarding
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
tar: invalid magic
tar: short read
Internal Server ErrorWaiting for dashboard to become available...
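For reference, a minimal sketch of that docker-compose.yml change (assuming the service is named openvpn, as in docker-openwisp's compose file):
openvpn:
  privileged: true  # lets the container set the net.ipv6.* sysctls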
I don't think the issue is IPv6-related; the real symptom seems to be
tar: invalid magic
tar: short read
and the real culprit the tar call in the function openvpn_config_download in images/common/utils.sh:
function openvpn_config_download {
    curl --silent --retry 10 --retry-delay 5 --retry-max-time 300 \
        --insecure --output vpn.tar.gz \
        ${API_INTERNAL}/controller/vpn/download-config/$UUID/?key=$KEY
    curl --silent --insecure --output checksum \
        ${API_INTERNAL}/controller/vpn/checksum/$UUID/?key=$KEY
    tar xzf vpn.tar.gz
    chmod 600 *.pem
}
But this function uses API_INTERNAL, which evaluates to the internal domain "api.internal", so I don't understand why it would be affected by my external port change.
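One way to see what tar is actually choking on (a sketch, run from inside the openvpn container, which has API_INTERNAL, UUID and KEY in its environment):
# download the config exactly as utils.sh does...
curl --silent --insecure \
    "$API_INTERNAL/controller/vpn/download-config/$UUID/?key=$KEY" \
    --output vpn.tar.gz
# ...then inspect the result: a real tarball starts with the gzip
# magic bytes 0x1f 0x8b, while an error page is plain text
head -c 64 vpn.tar.gz | od -c | head -n 2
An error page saved into vpn.tar.gz would explain both the tar: invalid magic and tar: short read messages.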
openwisp-nginx container repeating log entry:
[f48437d097aa] - - [27/Sep/2024:14:38:12 +0200] "GET /controller/vpn/download-config/4b8f058e-52e1-48bd-843d-5e51266d9a17/?key=<KEY> HTTP/1.1" status: 500 32 "-" "curl/7.79.1" http_x_forwarded_for: - - remote_addr: 172.18.0.6 - realip_remote_addr: 172.18.0.6 - real_ip: 172.18.0.6
openwisp-api container repeating log entry:
--- no python application found, check your startup logs for errors ---
[aac2dd5d0529] - pid: 38 172.18.0.6 (-) {32 vars in 553 bytes} [Fri Sep 27 14:31:50 2024] GET /controller/vpn/download-config/4b8f058e-52e1-48bd-843d-5e51266d9a17/?key=<KEY> => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 1)
The openvpn container starts correctly if I set API_INTERNAL=<DASHBOARD_DOMAIN>:8081 in .env! :partying_face:
So far, these are the changes necessary to make docker-openwisp work (maybe) on non-default ports <HTTP_PORT> and <HTTPS_PORT> (with SSL_CERT_MODE=No), and thus to make it work behind a reverse proxy listening on ports 80 and 443:

- 80:80 to <HTTP_PORT>:80 and 443:443 to <HTTPS_PORT>:443 in the nginx section of docker-compose.yml
- OPENWISP_MONITORING_API_BASEURL and OPENWISP_NOTIFICATIONS_HOST in custom_django_settings.py set to http://<DASHBOARD_DOMAIN>:<HTTP_PORT> (not <API_DOMAIN>)
- API_INTERNAL in .env set to <DASHBOARD_DOMAIN>:<HTTP_PORT> (not <API_DOMAIN>)

This looks too hacky though (maybe even insecure?) and I'll do further testing to see if it breaks other things. I suspect these other custom_django_settings.py variables may need to be set: OPENWISP_CONTROLLER_API_HOST, OPENWISP_FIRMWARE_API_BASEURL and OPENWISP_NETWORK_TOPOLOGY_API_BASEURL (sketch below).
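If they are needed, presumably they would follow the same pattern (an untested sketch in custom_django_settings.py, by analogy with the monitoring setting; I have not confirmed any of these are actually required):
# untested: guessed by analogy with OPENWISP_MONITORING_API_BASEURL
OPENWISP_CONTROLLER_API_HOST = "http://<DASHBOARD_DOMAIN>:<HTTP_PORT>"
OPENWISP_FIRMWARE_API_BASEURL = "http://<DASHBOARD_DOMAIN>:<HTTP_PORT>"
OPENWISP_NETWORK_TOPOLOGY_API_BASEURL = "http://<DASHBOARD_DOMAIN>:<HTTP_PORT>"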
The ideal would be to have NGINX_SSL_PORT and NGINX_80_PORT variables in .env that properly implement what I'm doing (feature request), something like the sketch below.
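A sketch of how that could look in docker-compose.yml, using compose's variable substitution with defaults (the variable names are just my suggestion):
nginx:
  ports:
    - "${NGINX_80_PORT:-80}:80"    # falls back to 80 if unset in .env
    - "${NGINX_SSL_PORT:-443}:443" # falls back to 443 if unset in .env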
To make this work with HTTPS behind my own reverse proxy (Nginx) I've had to make additional changes (combined settings sketched below):

- OPENWISP_MONITORING_API_BASEURL and OPENWISP_NOTIFICATIONS_HOST set to https://<DASHBOARD_DOMAIN>, otherwise I get errors in my browser's console about mixed HTTPS and HTTP content
- CSRF_TRUSTED_ORIGINS=["https://<DASHBOARD_DOMAIN>"] to get rid of a CSRF error
- API_INTERNAL remains set to the previous value to make the openvpn container start
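Combined, the relevant part of my custom_django_settings.py now looks roughly like this (a sketch of the values listed above):
OPENWISP_MONITORING_API_BASEURL = "https://<DASHBOARD_DOMAIN>"
OPENWISP_NOTIFICATIONS_HOST = "https://<DASHBOARD_DOMAIN>"
CSRF_TRUSTED_ORIGINS = ["https://<DASHBOARD_DOMAIN>"]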
To anyone trying this, please be aware that I got here by trial and error only. I don't know exactly why this works and whether it breaks things I'm not yet aware of (will test further). Even if it didn't, I have a strong feeling it's not the proper way to achieve the desired result and urge the developers to implement this properly.
This is my nginx configuration:
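# WebSocket upgrade handling, used by the proxied location below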
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
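# plain HTTP: redirect everything to HTTPS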
server {
    server_name <DASHBOARD_DOMAIN> <API_DOMAIN>;
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}
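# HTTPS: terminate TLS here and proxy to openwisp-nginx on 127.0.0.1:8081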
server {
    server_name <DASHBOARD_DOMAIN> <API_DOMAIN>;
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    ssl_certificate <SSL_CERT_PATH>;
    ssl_certificate_key <SSL_CERT_KEY_PATH>;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers off;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate <SSL_TRUSTED_CERT_PATH>;
    resolver 127.0.0.53;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Weird. Now, again behind my own Nginx with HTTPS configured as above and SSL_CERT_MODE=External, it all seems to be working with the default settings, except for CSRF_TRUSTED_ORIGINS=["https://<DASHBOARD_DOMAIN>"], which is still needed. I don't recall that being the case the first time I tried this configuration.
I guess I shouldn't have bothered verifying everything was working with HTTP and no reverse proxy, and should have configured HTTPS with the reverse proxy straight away.
I'll mark this as closed, although for consistency one should probably be able to change the openwisp-nginx ports with .env variables.
Describe the bug
In order to run docker-openwisp behind my own reverse proxy (other web services will be running on the server), I have changed the openwisp-nginx ports to 8081 and 4431 in the docker-compose.yml file. After that I get the error "Something went wrong while loading the charts" when visiting the dashboard. Notifications also don't load. Setting OPENWISP_MONITORING_API_BASEURL="http://<API_DOMAIN>:8081" in custom_django_settings.py looks like a step in the right direction, as requests to the API URL get a 500 response instead of no response at all, but it doesn't solve the issue and introduces a new one: the openwisp-openvpn container not starting.
I'm not sure if running on different ports is meant to be supported; there are no variables in the .env file to easily do so, after all. But being such an essential feature for a Docker application, I'm filing this as a bug rather than a feature request.

Steps To Reproduce
1. Configure the .env file (SSL_CERT_MODE=No)
2. Visit http://<DASHBOARD_DOMAIN> and verify everything seems to be working
3. Change 80:80 to 8081:80 (and 443:443 to 4431:443) for the nginx service in the docker-compose.yml file
4. Restart docker-openwisp (docker compose stop && docker-compose up -d)
5. Visit http://<DASHBOARD_DOMAIN>:8081 and verify everything seems to be working, except for charts and notifications not loading
6. Set OPENWISP_MONITORING_API_BASEURL="http://<API_DOMAIN>:8081" in customization/configuration/django/custom_django_settings.py, restart docker-openwisp and observe the same problem, plus the openwisp-openvpn container not starting

Expected behavior
Charts, notifications and openvpn loading correctly
Screenshots
Without OPENWISP_MONITORING_API_BASEURL="http://<API_DOMAIN>:8081":
[screenshot]
With OPENWISP_MONITORING_API_BASEURL="http://<API_DOMAIN>:8081":
[screenshot]

System Information: