iv-org / documentation

The official Invidious documentation
https://docs.invidious.io
Creative Commons Zero v1.0 Universal

[Setup Question] nginx container errors: cannot connect to upstream #564

Open warioishere opened 2 months ago

warioishere commented 2 months ago

Hello guys, this is not really a bug, but more a setup problem we have on our instance. We run a public instance (https://invidious.yourdevice.ch) as a Docker-deployed setup with multiple containers restarting from time to time, as suggested in the docs. We also added the HTTP/3 proxy, IPv6 rotation, and log rotation to the setup.

When I check the logs of the nginx container (invidious-nginx-1), they are full of entries like this:

2024/06/03 07:57:55 [error] 29#29: *376409 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /feed/popular HTTP/1.1", upstream: "http://[2001:db9::5]:3000/feed/popular", host: "invidious.yourdevice.ch", referrer: "https://invidious.yourdevice.ch/"

And I mean really full. We don't have problems on the instance: videos load fast, everything plays fast, no problems at all. Still, those logs bother me a bit. It seems like the nginx container can't reach the Invidious containers? But if that were the case, the server wouldn't work at all, would it? Can you guys give me a hint?

@unixfox @bugmaschine @perennialtech ?

This is our setup:

Nginx Reverse Proxy:

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name invidious.yourdevice.ch;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    ssl_certificate /etc/letsencrypt/live/invidious.yourdevice.ch/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/invidious.yourdevice.ch/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;    # so Invidious knows domain
        proxy_http_version 1.1;     # to keep alive
        proxy_set_header Connection ""; # to keep alive
    }

    location ~ (^/videoplayback|^/vi/|^/ggpht/|^/sb/) {
        proxy_buffering on;
        proxy_buffers 1024 16k;
        proxy_set_header X-Forwarded-For "";
        proxy_set_header CF-Connecting-IP "";
        proxy_hide_header "alt-svc";
        sendfile on;
        sendfile_max_chunk 512k;
        tcp_nopush on;
        aio threads=default;
        aio_write on;
        directio 16m;
        proxy_hide_header Cache-Control;
        proxy_hide_header etag;
        proxy_http_version 1.1;
        proxy_set_header Connection keep-alive;
        proxy_max_temp_file_size 32m;
        access_log off;
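        # this unix socket is created by the http3-ytproxy container and exposed
        # on the host through its /opt/http3-ytproxy bind mount (see the compose file below)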
        proxy_pass http://unix:/opt/http3-ytproxy/http-proxy.sock;
        add_header Cache-Control private always;
    }

    if ($https = '') { return 301 https://$host$request_uri; }  # if not connected to HTTPS, perma-redirect to HTTPS
}

This is the nginx setup for the invidious-nginx container:

user www-data;
events {
    worker_connections 1000;
}
http {
    server {
        listen 3000;
        listen [::]:3000;
        access_log off;

        location / {
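            # 127.0.0.11 is Docker's embedded DNS; resolving the service name
            # through a variable makes nginx re-resolve it at request time, so
            # it follows replicas as they restart instead of caching their IPs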
            resolver 127.0.0.11;
            set $backend "invidious";
            proxy_pass http://$backend:3000;
            proxy_http_version 1.1; # to keep alive
            proxy_set_header Connection ""; # to keep alive
        }
    }
}

This is our docker-compose.yml

version: "3"
services:
    invidious:
        image: quay.io/invidious/invidious:latest
        deploy:
            replicas: 6
        restart: unless-stopped
        environment:
            INVIDIOUS_CONFIG: |
                channel_threads: 0
                feed_threads: 0
                db:
                    dbname: invidious
                    user: kemal
                    password: kemal
                    host: invidious-db
                    port: 5432
                check_tables: true
                external_port: 443
                domain: invidious.yourdevice.ch
                https_only: true
                statistics_enabled: true
                force_resolve: ipv6
                hmac_key: "xxx"
                #  banner: "by yourdevice.ch"
                #  popular_enabled: true
                registration_enabled: true
                login_enabled: true
                captcha_enabled: true
                enable_user_notifications: true
                use_pubsub_feeds: true
                use_innertube_for_captions: true
                jobs:
                  clear_expired_items:
                    enabled: false
                  refresh_channels:
                    enabled: false
                  refresh_feeds:
                    enabled: false
        healthcheck:
            test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/stats || exit 1
            interval: 30s
            timeout: 5s
            retries: 2
        logging:
            options:
                max-size: "1G"
                max-file: "4"
        depends_on:
           - invidious-db

    invidious-refresh:
        image: quay.io/invidious/invidious:latest
        restart: unless-stopped
        environment:
            INVIDIOUS_CONFIG: |
                db:
                    dbname: invidious
                    user: kemal
                    password: kemal
                    host: invidious-db
                    port: 5432
                check_tables: true
                external_port: 443
                domain: invidious.yourdevice.ch
                https_only: true
                statistics_enabled: true
                force_resolve: ipv6
                hmac_key: "xxx"
                #  banner: "by yourdevice.ch"
                #  popular_enabled: true
                registration_enabled: true
                login_enabled: true
                captcha_enabled: true
                enable_user_notifications: true
                use_pubsub_feeds: true
                use_innertube_for_captions: true
                jobs:
                  clear_expired_items:
                    enabled: false
                  refresh_channels:
                    enabled: false
                  refresh_feeds:
                    enabled: false
        healthcheck:
            test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/stats || exit 1
            interval: 30s
            timeout: 5s
            retries: 2
        logging:
            options:
                max-size: "1G"
                max-file: "4"
        depends_on:
           - invidious-db

    nginx:
        image: nginx:latest
        restart: unless-stopped
        volumes:
            - ./nginx.conf/nginx.conf:/etc/nginx/nginx.conf:ro
        depends_on:
            - invidious
        ports:
            - "3000:3000"

    http3-ytproxy:
        image: 1337kavin/ytproxy:latest
        restart: unless-stopped
        user: "33:33"
        network_mode: "host"
        environment:
            DISABLE_WEBP: 1
        volumes:
           - /opt/http3-ytproxy:/app/socket

    invidious-db:
        image: docker.io/library/postgres:14
        restart: unless-stopped
        volumes:
          - postgresdata:/var/lib/postgresql/data
          - ./config/sql:/config/sql
          - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
        environment:
            POSTGRES_DB: invidious
            POSTGRES_USER: kemal
            POSTGRES_PASSWORD: kemal
        healthcheck:
            test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
    postgresdata:

networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 2001:0DB9::/112
          gateway: 2001:0DB9::1

Thanks for having a look!

Cheers guys

perennialtech commented 2 months ago

@warioishere For this question, you may have better luck asking in the Invidious Instance Owners room on Matrix (see https://github.com/iv-org/documentation/issues/521#issuecomment-202409589).

Also, since your hmac_key is still visible in the edit history of your issue and in emails sent to people watching this repository, you should generate a new key to replace it completely.

warioishere commented 2 months ago

The key has already been regenerated; I realized it when reading through again. If anyone knows about this issue, let me know.

unixfox commented 2 months ago

I think it's probably because nginx is trying to send the traffic to the container over IPv6, but the container does not listen on IPv6.

There is an issue about it here: https://github.com/iv-org/invidious/issues/4705

One temporary workaround would be to force Invidious to listen on IPv6 by adding this parameter to the Invidious config:

host_binding: [::]
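
In the compose file above, this would land inside the INVIDIOUS_CONFIG environment block of the invidious service, roughly like this (a placement sketch only, not a tested config; the value is quoted here because unquoted square brackets trip the YAML parser, as the parse error further down the thread shows):

    invidious:
        environment:
            INVIDIOUS_CONFIG: |
                host_binding: "[::]"  # quoted, otherwise YAML reads [::] as a flow sequence
                channel_threads: 0
                ...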
warioishere commented 2 months ago

host_binding: [::]

Add this to both the invidious and invidious-refresh configs?

unixfox commented 2 months ago

host_binding: [::]

Add this to both the invidious and invidious-refresh configs?

Only the docker containers that receive traffic from NGINX. So invidious only.

warioishere commented 2 months ago

Okay thanks, I will try this evening as soon as I get home. The big question to me is: why does the setup still work if the containers cannot receive traffic from nginx?

unixfox commented 2 months ago

Okay thanks, I will try this evening as soon as I get home. The big question to me is: why does the setup still work if the containers cannot receive traffic from nginx?

It's called happy eyeballs: https://en.wikipedia.org/wiki/Happy_Eyeballs

If the software can't connect over IPv6, it will try IPv4 afterwards.
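
One way to see both address families from inside the nginx container (a hypothetical check; it assumes the invidious service name from the compose file and getent being available in the nginx image):

docker compose exec nginx getent ahosts invidious
# lists the AAAA and A records Docker's DNS hands out for the replicas; when
# nginx picks one of the IPv6 addresses it gets "connection refused" (Invidious
# only binds 0.0.0.0), logs the error, and retries another address, which is
# why the instance keeps working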

warioishere commented 2 months ago

Okay thanks, I will try this evening as soon as I get home. The big question to me is: why does the setup still work if the containers cannot receive traffic from nginx?

It's called happy eyeballs: https://en.wikipedia.org/wiki/Happy_Eyeballs

If the software can't connect over IPv6, it will try IPv4 afterwards.

Okay thanks, understood.

Adding host_binding: [::] to the invidious container makes the instance not work at all anymore. I get the same logs as before, and in addition to that, connection refused errors on the container IPv4 addresses.

unixfox commented 2 months ago

What are the Invidious logs, though?

warioishere commented 2 months ago

What are the Invidious logs, though?

The nginx container logs are the same, plus:

2024/06/03 07:57:55 [error] 29#29: *376409 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /feed/popular HTTP/1.1", upstream: "http://**containeripv4adresses**:3000/feed/popular", host: "invidious.yourdevice.ch", referrer: "https://invidious.yourdevice.ch/"

I didn't check the Invidious container logs; there was just no more access to the Invidious web interface at all (nginx bad gateway).

Do you need the Invidious container logs? I reverted instantly because we run this as a public instance.

unixfox commented 2 months ago

Yes, try to see if the Invidious process is starting normally, in order to know what's really going on.

warioishere commented 2 months ago

Yes, try to see if the Invidious process is starting normally, in order to know what's really going on.

These are the logs from nginx:

2024/06/08 07:50:50 [error] 29#29: *425 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /feed/webhook/v1:1717409590:778bb6ad:5c8afda12b9a7fb58c599f95428c7609e1383807 HTTP/1.1", upstream: "http://172.19.0.9:3000/feed/webhook/v1:1717409590:778bb6ad:5c8afda12b9a7fb58c599f95428c7609e1383807", host: "invidious.yourdevice.ch"
2024/06/08 07:50:51 [error] 29#29: *413 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "GET /feed/channel/UCyCA8oBIrBMjIoUEyvjI4UA HTTP/1.1", upstream: "http://[2001:db9::7]:3000/feed/channel/UCyCA8oBIrBMjIoUEyvjI4UA", host: "invidious.yourdevice.ch"
2024/06/08 07:50:51 [error] 29#29: *421 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /feed/webhook/v1:1717409591:d1d5a84d:40b04c1ffc91e460f927f103f9d0af95086241a8 HTTP/1.1", upstream: "http://[2001:db9::7]:3000/feed/webhook/v1:1717409591:d1d5a84d:40b04c1ffc91e460f927f103f9d0af95086241a8", host: "invidious.yourdevice.ch"
2024/06/08 07:50:51 [error] 29#29: *415 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /feed/webhook/v1:1717755237:9f9315c0:859d4ed037aa3c9915b334800bf8fd1e79838c3c HTTP/1.1", upstream: "http://[2001:db9::9]:3000/feed/webhook/v1:1717755237:9f9315c0:859d4ed037aa3c9915b334800bf8fd1e79838c3c", host: "invidious.yourdevice.ch"

The Invidious containers won't start if I add host_binding: [::] to the invidious section:

Unhandled exception: did not find expected node content at line 15, column 16, while parsing a flow node at line 15, column 16 (YAML::ParseException)
  from /usr/share/crystal/src/yaml/pull_parser.cr:339:5 in 'raise'
  from /usr/share/crystal/src/yaml/pull_parser.cr:122:7 in 'read_next'
  from /usr/share/crystal/src/yaml/parser.cr:128:11 in 'parse_node'
  from /usr/share/crystal/src/yaml/parser.cr:141:43 in 'parse_node'
  from /usr/share/crystal/src/yaml/nodes/nodes.cr:47:7 in 'parse'
  from /usr/share/crystal/src/yaml/from_yaml.cr:26:3 in 'load'
  from src/invidious.cr:60:12 in '~CONFIG:init'
  from /usr/share/crystal/src/crystal/once.cr:25:54 in '__crystal_once'
  from src/invidious/user/cookies.cr:9:35 in '__crystal_main'
  from /usr/share/crystal/src/crystal/main.cr:115:5 in 'main'
  from src/env/__libc_start_main.c:95:2 in 'libc_start_main_stage2'
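
The ParseException itself is a plain YAML problem rather than an Invidious one: a [ opens a flow sequence in YAML, so host_binding: [::] is parsed as a list whose first element starts with a colon, which is invalid. Quoting the value keeps it a string (a sketch; whether Invidious expects a bare :: or the bracketed form here is an assumption, worth checking against iv-org/invidious#4705):

host_binding: "::"  # quoted so the YAML parser keeps it a string; "::" is the IPv6 any-address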

Besides that, my instance stops working from time to time, showing an internal server error when loading videos. I don't know what's wrong here, but the IPv6 rotator doesn't really seem to work, because it throws errors roughly every second time.

None of these problems occurred when I was IPv4-only.

Is there anything I need to redo in the netplan YAML? There is a fixed IPv6 address assigned, which always shows up in ip addr:

network:
    version: 2
    ethernets:
        eth0:
            addresses:
            - 188.68.48.233/22
            - 2a03:4000:6:d059:988b:69ff:fe8c:52dc/64
            match:
                macaddress: 9a:8b:69:8c:52:dc
            routes:
            -   to: default
                via: 188.68.48.1
            -   on-link: true
                to: default
                via: fe80::1
warioishere commented 2 months ago

Current situation without host_binding: [::]

2024-06-08 08:00:43 UTC [info] 200 GET /feed/popular 3.44ms
2024-06-08 08:00:44 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:00:44 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 47.56ms
2024-06-08 08:00:48 UTC [info] 200 GET /api/v1/stats 142.69µs
2024-06-08 08:00:49 UTC [info] 200 GET /api/v1/stats 83.92µs
2024-06-08 08:00:50 UTC [info] 200 GET /css/pure-min.css?v=eda7444 705.8µs
2024-06-08 08:00:50 UTC [info] 200 GET /css/default.css?v=eda7444 591.28µs
2024-06-08 08:00:50 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:00:50 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 65.51ms
2024-06-08 08:00:51 UTC [info] 200 GET /feed/popular 2.14ms
2024-06-08 08:00:53 UTC [info] 302 GET / 74.25µs
2024-06-08 08:00:53 UTC [info] 200 POST /feed/webhook/v1:1717757778:fc3151fa:8da13b7b997e101e24101763e570a780769a2452 116.9µs
2024-06-08 08:00:54 UTC [error] /feed/webhook/v1:1717404241:aab13105:bf81d33812bf58e836dce838462439713ca4a97a : Invalid signature
2024-06-08 08:00:54 UTC [info] 200 POST /feed/webhook/v1:1717404241:aab13105:bf81d33812bf58e836dce838462439713ca4a97a 321.06µs
2024-06-08 08:00:54 UTC [info] 200 POST /feed/webhook/v1:1717750181:b9b9aa78:480d08f778984a6704becffb7417fca826ca10c3 100.74µs
2024-06-08 08:00:59 UTC [info] 200 POST /feed/webhook/v1:1717515483:aaca537b:f7541e3c75ed26bd3a075c760428d3762dc6f0d6 147.9µs
2024-06-08 08:01:01 UTC [error] /feed/webhook/v1:1717403601:678a6f7a:5a22a3d54871323c81d402bcccfebba6d4cf728f : Invalid signature
2024-06-08 08:01:01 UTC [info] 200 POST /feed/webhook/v1:1717403601:678a6f7a:5a22a3d54871323c81d402bcccfebba6d4cf728f 240.93µs
2024-06-08 08:01:04 UTC [info] 200 HEAD / 146.06µs
2024-06-08 08:01:10 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:01:10 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 97.92ms
2024-06-08 08:01:10 UTC [info] 302 GET / 71.52µs
2024-06-08 08:01:10 UTC [info] 200 GET /feed/popular 2.24ms
2024-06-08 08:01:13 UTC [info] 200 GET /feed/popular 4.17ms
2024-06-08 08:01:14 UTC [error] get_video: eBGIQ7ZuuiU : This helps protect our community. Learn more
2024-06-08 08:01:14 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:01:14 UTC [info] 500 GET /watch?v=eBGIQ7ZuuiU 54.63ms
2024-06-08 08:01:19 UTC [info] 200 GET /api/v1/stats 97.16µs
2024-06-08 08:01:22 UTC [info] 200 GET /api/v1/stats 154.04µs
2024-06-08 08:01:25 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:01:25 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 60.33ms


warioishere commented 2 months ago

Just realized it's a global problem, so ignore the part about the internal server error. The logs from activating host_binding: [::] can be seen above in the previous post.

unixfox commented 2 months ago

Ok, thanks for the investigation. I'll do my own very soon, hopefully.