linuxserver / docker-healthchecks

GNU General Public License v3.0

Enable IPv6 support on uwsgi #123

Closed jdeluyck closed 2 months ago

jdeluyck commented 2 months ago

linuxserver.io

Closes #122



Description:

Enable IPv6 on uwsgi.

Benefits of this PR and context:

Allows the container to be accessed over IPv6 inside the Docker network.

How Has This Been Tested?

Built the container, tested it on my local installation.

Source / References:

Roxedus commented 2 months ago

How does uwsgi handle this on hosts with ipv6 disabled in the kernel?

thespad commented 2 months ago

It breaks, as with anything using dual IPv4/IPv6 sockets. But in a choice between supporting IPv6 and supporting setups with IPv6 intentionally disabled, we should probably favour the former. Also it doesn't appear possible to bind the smtpd service to IPv6 in either case.

This needs adding to the readme changelog; it's a significant change.
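The failure mode described here can be sketched with a plain socket program (an illustration of the general dual-stack problem, not the actual uwsgi code path): on a kernel booted with ipv6.disable=1, creating an AF_INET6 socket fails with "Address family not supported by protocol", so a bind to the [::] wildcard needs an explicit IPv4 fallback. The function name is hypothetical.

```python
import socket

def open_http_socket(port):
    """Try a dual-stack [::] listener first, fall back to IPv4-only.

    On kernels with IPv6 fully disabled, socket(AF_INET6) raises
    OSError ("Address family not supported by protocol"), which is
    the same failure uwsgi hits when asked to bind [::]:8000.
    """
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        # 0 = also accept IPv4 connections as v4-mapped addresses
        s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        s.bind(("::", port))
    except OSError:
        # IPv6-disabled host: bind the IPv4 wildcard instead
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("0.0.0.0", port))
    s.listen(1)
    return s
```

uwsgi performs no such fallback itself, which is why the choice between the two bind addresses has to be made up front in the config.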

LinuxServer-CI commented 2 months ago
I am a bot, here are the test results for this PR:
https://ci-tests.linuxserver.io/lspipepr/healthchecks/v3.5.2-pkg-183ee8f1-dev-171f6b9047f130722a6cfece453a3ca851eb8dff-pr-123/index.html
https://ci-tests.linuxserver.io/lspipepr/healthchecks/v3.5.2-pkg-183ee8f1-dev-171f6b9047f130722a6cfece453a3ca851eb8dff-pr-123/shellcheck-result.xml

Tags passed:
amd64-v3.5.2-pkg-183ee8f1-dev-171f6b9047f130722a6cfece453a3ca851eb8dff-pr-123
arm64v8-v3.5.2-pkg-183ee8f1-dev-171f6b9047f130722a6cfece453a3ca851eb8dff-pr-123
jdeluyck commented 2 months ago

On an IPv4-only network, everything works as before:

Create an IPv4-only test network

podman network create test

Running the home-built container

podman run -d \
  --name=healthchecks \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/Brussels \
  -e SITE_ROOT="http://localhost:8000" \
  -e SITE_NAME= \
  -e SUPERUSER_EMAIL= \
  -e SUPERUSER_PASSWORD= \
  -p 8000:8000 --network=test \
  4ea49146d2c6

Inspecting the test network

podman network inspect test 
[
     {
          "name": "test",
          "id": "6f457fad19bac09590a1e200213278a49ebc10d897d1777f3735416df07233e3",
          "driver": "bridge",
          "network_interface": "podman1",
          "created": "2024-08-31T15:52:00.353622718+02:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          },
          "containers": {
               "5df785fa3f74d48d414dd85c2ab79cc599e09785724eb89cd8f06f3a7c8bdcb2": {
                    "name": "healthchecks",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.2/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "26:c4:4d:02:1f:6b"
                         }
                    }
               }
          }
     }
]

As you can see, IPv4 only.

Checking inside the container

$ podman exec -ti healthchecks /bin/bash 

root@5df785fa3f74:/# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:2525            0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:42776         127.0.0.1:2525          TIME_WAIT   
tcp        0      0 :::8000                 :::*                    LISTEN      
tcp        0      0 ::1:48926               ::1:8000                TIME_WAIT   

root@5df785fa3f74:/# cd /app/healthchecks/
root@5df785fa3f74:/app/healthchecks# cat uwsgi.ini 
[uwsgi]
http-socket = [::]:8000
...

Starting a test nginx container on the same network

$ podman run -d --network test nginx
$ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS        PORTS                             NAMES
5df785fa3f74  4ea49146d2c6                                          5 minutes ago  Up 5 minutes  0.0.0.0:8000->8000/tcp, 8000/tcp  healthchecks
bfba2da729ef  docker.io/library/nginx:latest  nginx -g daemon o...  3 seconds ago  Up 3 seconds  80/tcp                            tender_mclean

Connecting over the bridge network to the other container via IPv4

$ podman exec -ti tender_mclean /bin/bash

root@bfba2da729ef:/# curl -vvv 10.89.0.2:8000
*   Trying 10.89.0.2:8000...
* Connected to 10.89.0.2 (10.89.0.2) port 8000 (#0)
> GET / HTTP/1.1
> Host: 10.89.0.2:8000
> User-Agent: curl/7.88.1
> Accept: */*
> 
< HTTP/1.1 302 Found
< Content-Type: text/html; charset=utf-8
< Location: /accounts/login/
< X-Frame-Options: DENY
< Content-Length: 0
< Vary: Cookie
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Cross-Origin-Opener-Policy: same-origin
< 
* Connection #0 to host 10.89.0.2 left intact
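A minimal sketch of why the IPv4 curl above still reaches the [::]:8000 listener: with Linux's default net.ipv6.bindv6only=0, an IPv6 wildcard socket also accepts IPv4 clients, which arrive as v4-mapped addresses (::ffff:a.b.c.d). The function name is hypothetical.

```python
import socket

def v4_client_to_dualstack_listener():
    """Show that an IPv4-only client can reach a [::] dual-stack listener."""
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # 0 = dual-stack: accept IPv4 connections as v4-mapped addresses
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    srv.bind(("::", 0))            # port 0: let the kernel pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # IPv4-only client
    cli.connect(("127.0.0.1", port))
    conn, (peer, *_rest) = srv.accept()
    cli.close(); conn.close(); srv.close()
    return peer                    # v4-mapped form, e.g. "::ffff:127.0.0.1"
```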
Roxedus commented 2 months ago

podman network create test does not behave the same as a system that runs with IPv6 disabled in the kernel
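For anyone wanting to reproduce this: in my understanding, the runtime sysctl route removes IPv6 addresses from interfaces but may still allow AF_INET6 sockets to be created, whereas the hard bind failure uwsgi hits matches a kernel with IPv6 disabled at boot. A sketch (exact behaviour varies by distro and kernel):

```shell
# Soft-disable: removes IPv6 addresses from all interfaces (runtime, reversible)
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Hard-disable: add ipv6.disable=1 to the kernel command line
# (e.g. GRUB_CMDLINE_LINUX) and reboot; AF_INET6 sockets then fail
# with "Address family not supported by protocol"
```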

jdeluyck commented 2 months ago

People still do that?

I'll try to spin up a machine with IPv6 disabled, but that'll take a bit.

thespad commented 2 months ago

With alarming regularity

jdeluyck commented 2 months ago

Tested on a host without IPv6 enabled: it breaks spectacularly, as stated. I've updated the readme; I don't know if an IPv6 warning needs to go anywhere else?

LinuxServer-CI commented 2 months ago
I am a bot, here are the test results for this PR:
https://ci-tests.linuxserver.io/lspipepr/healthchecks/v3.5.2-pkg-183ee8f1-dev-86e401e76cd07182b6f2c2551ceb69c0dc3b5b2d-pr-123/index.html
https://ci-tests.linuxserver.io/lspipepr/healthchecks/v3.5.2-pkg-183ee8f1-dev-86e401e76cd07182b6f2c2551ceb69c0dc3b5b2d-pr-123/shellcheck-result.xml

Tags passed:
amd64-v3.5.2-pkg-183ee8f1-dev-86e401e76cd07182b6f2c2551ceb69c0dc3b5b2d-pr-123
arm64v8-v3.5.2-pkg-183ee8f1-dev-86e401e76cd07182b6f2c2551ceb69c0dc3b5b2d-pr-123
thespad commented 2 months ago

No, don't explicitly mention it breaking in the changelog. We'll see if anyone is actually affected; if there are sufficient numbers we may need to add a conditional fudge for them, but we try to avoid that when possible.

LinuxServer-CI commented 2 months ago
I am a bot, here are the test results for this PR:
https://ci-tests.linuxserver.io/lspipepr/healthchecks/v3.5.2-pkg-183ee8f1-dev-40aba11df05e0bee2aa84ac2d5512398fca8c104-pr-123/index.html
https://ci-tests.linuxserver.io/lspipepr/healthchecks/v3.5.2-pkg-183ee8f1-dev-40aba11df05e0bee2aa84ac2d5512398fca8c104-pr-123/shellcheck-result.xml

Tags passed:
amd64-v3.5.2-pkg-183ee8f1-dev-40aba11df05e0bee2aa84ac2d5512398fca8c104-pr-123
arm64v8-v3.5.2-pkg-183ee8f1-dev-40aba11df05e0bee2aa84ac2d5512398fca8c104-pr-123
alxrdn commented 2 months ago

Hello, I can confirm that, for some reason, there are configurations with IPv6 disabled. This change silently broke our Healthchecks instance for several weeks until we realized we had no more alerts... It then took some research to understand that the Address family not supported by protocol [core/socket.c line 82] error was due to an upgrade of the linuxserver image... Is it possible to tell uwsgi to either not try to bind IPv6, or at least fall back to IPv4 only? And if so, how?

edit: I tried to pass the env var UWSGI_HTTP_SOCKET=:8000 from the docker-compose file, but this didn't work. It looks like it did bind to IPv4, but then still fails because it is also still trying to bind IPv6:

uwsgi socket 0 bound to TCP address :8000 fd 3 socket(): Address family not supported by protocol [core/socket.c line 82]

alxrdn commented 3 weeks ago

For those who face the same issue, and since nobody has answered here: the only solution so far is to manually override /defaults/uwsgi.ini and mount it into the container as a Docker volume...
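For example (a sketch; the image reference and the assumption that the startup scripts copy /defaults/uwsgi.ini into place are mine), keep a local copy of uwsgi.ini with the http-socket line changed back to the IPv4-only form, and bind-mount it over the shipped default:

```yaml
# compose sketch: override the image's uwsgi defaults with an IPv4-only copy
services:
  healthchecks:
    image: lscr.io/linuxserver/healthchecks:latest
    volumes:
      # local uwsgi.ini with "http-socket = :8000" instead of "[::]:8000"
      - ./uwsgi.ini:/defaults/uwsgi.ini:ro
```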