warioishere opened 5 months ago
@warioishere For this question, you may have better luck asking in the Invidious Instance Owners room on Matrix (see https://github.com/iv-org/documentation/issues/521#issuecomment-202409589).
Also, since your hmac_key is still visible in the edit history of your issue and in emails sent to people watching this repository, you should generate a new key to replace it completely.
The key was already regenerated, because I realized it myself when reading through again. If anyone knows about this issue, let me know.
I think it's probably because nginx is trying to send the traffic to the container over IPv6, but the container does not listen on IPv6.
There is an issue about it here: https://github.com/iv-org/invidious/issues/4705
One temporary way would be to force Invidious to listen on IPv6 by adding this parameter in the invidious config:
host_binding: [::]
host_binding: [::]
add this to invidious and invidious-refresh config?
Only the docker containers that receive traffic from NGINX. So invidious only.
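For reference, the official docker-compose setup passes the Invidious config inline through the `INVIDIOUS_CONFIG` environment variable. A minimal sketch of where the suggested `host_binding` line would go (the surrounding keys here are illustrative placeholders; keep your own existing config and only add the one line — and note that the value likely needs quoting so YAML treats it as a string):

```yaml
services:
  invidious:
    image: quay.io/invidious/invidious
    environment:
      # Hypothetical excerpt of an existing config; only host_binding is new.
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        # Bind on all IPv6 addresses so nginx can reach the container over v6.
        host_binding: "[::]"
```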
Okay thanks, will try this evening as soon as I come home. The big question to me is: why does the setup still work at all if the containers cannot receive traffic from nginx?
It's called happy eyeballs: https://en.wikipedia.org/wiki/Happy_Eyeballs
If the software can't connect in ipv6, it will try in ipv4 after.
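The fallback behaviour described above can be sketched in Python. This is an illustrative helper, not Invidious or nginx code: the core of Happy Eyeballs (RFC 8305) is to interleave the candidate addresses by family, trying IPv6 first, so that a dead family only costs one failed attempt before the other family is tried. The function name `interleave_families` is hypothetical.

```python
import socket

def interleave_families(addrinfos):
    """Order candidate (family, address) pairs Happy-Eyeballs style:
    alternate between address families, starting with IPv6, so that a
    family the peer does not listen on only delays the connection by
    one attempt instead of blocking it entirely."""
    v6 = [a for a in addrinfos if a[0] == socket.AF_INET6]
    v4 = [a for a in addrinfos if a[0] == socket.AF_INET]
    ordered = []
    # Interleave as long as both families have candidates left.
    for pair in zip(v6, v4):
        ordered.extend(pair)
    # Append whatever remains of the longer list.
    longer = v6 if len(v6) > len(v4) else v4
    ordered.extend(longer[len(ordered) // 2:])
    return ordered
```

A client would then try `socket.create_connection` against each entry in order, moving on after a short timeout; with both an AAAA and an A record for the container, the IPv6 attempt fails fast and the IPv4 one succeeds, which is why the setup kept working.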
Okay thanks, understood
Adding host_binding: [::]
to the invidious container makes the instance not work at all anymore. I get the same logs as before, and in addition to that, refused connections on the container IPv4 addresses.
What are the invidious logs, though?
the nginx container logs are the same plus:
2024/06/03 07:57:55 [error] 29#29: *376409 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /feed/popular HTTP/1.1", upstream: "http://**containeripv4adresses**:3000/feed/popular", host: "invidious.yourdevice.ch", referrer: "https://invidious.yourdevice.ch/"
I didn't check the invidious container logs; there was just no more access at all to the invidious web UI (nginx bad gateway).
Do you need the invidious container logs? I reverted instantly because we run this as a public instance.
Yes, try to see if the invidious process is starting normally, in order to know what's really going on.
these are the logs from nginx
2024/06/08 07:50:50 [error] 29#29: *425 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /feed/webhook/v1:1717409590:778bb6ad:5c8afda12b9a7fb58c599f95428c7609e1383807 HTTP/1.1", upstream: "http://172.19.0.9:3000/feed/webhook/v1:1717409590:778bb6ad:5c8afda12b9a7fb58c599f95428c7609e1383807", host: "invidious.yourdevice.ch"
2024/06/08 07:50:51 [error] 29#29: *413 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "GET /feed/channel/UCyCA8oBIrBMjIoUEyvjI4UA HTTP/1.1", upstream: "http://[2001:db9::7]:3000/feed/channel/UCyCA8oBIrBMjIoUEyvjI4UA", host: "invidious.yourdevice.ch"
2024/06/08 07:50:51 [error] 29#29: *421 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /feed/webhook/v1:1717409591:d1d5a84d:40b04c1ffc91e460f927f103f9d0af95086241a8 HTTP/1.1", upstream: "http://[2001:db9::7]:3000/feed/webhook/v1:1717409591:d1d5a84d:40b04c1ffc91e460f927f103f9d0af95086241a8", host: "invidious.yourdevice.ch"
2024/06/08 07:50:51 [error] 29#29: *415 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /feed/webhook/v1:1717755237:9f9315c0:859d4ed037aa3c9915b334800bf8fd1e79838c3c HTTP/1.1", upstream: "http://[2001:db9::9]:3000/feed/webhook/v1:1717755237:9f9315c0:859d4ed037aa3c9915b334800bf8fd1e79838c3c", host: "invidious.yourdevice.ch"
The invidious containers won't start if I add host_binding: [::]
to the invidious section:
Unhandled exception: did not find expected node content at line 15, column 16, while parsing a flow node at line 15, column 16 (YAML::ParseException)
from /usr/share/crystal/src/yaml/pull_parser.cr:339:5 in 'raise'
from /usr/share/crystal/src/yaml/pull_parser.cr:122:7 in 'read_next'
from /usr/share/crystal/src/yaml/parser.cr:128:11 in 'parse_node'
from /usr/share/crystal/src/yaml/parser.cr:141:43 in 'parse_node'
from /usr/share/crystal/src/yaml/nodes/nodes.cr:47:7 in 'parse'
from /usr/share/crystal/src/yaml/from_yaml.cr:26:3 in 'load'
from src/invidious.cr:60:12 in '~CONFIG:init'
from /usr/share/crystal/src/crystal/once.cr:25:54 in '__crystal_once'
from src/invidious/user/cookies.cr:9:35 in '__crystal_main'
from /usr/share/crystal/src/crystal/main.cr:115:5 in 'main'
from src/env/__libc_start_main.c:95:2 in 'libc_start_main_stage2'
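The `YAML::ParseException` above is a YAML syntax problem, not an Invidious bug: an unquoted `[::]` opens a flow sequence (`[`), and `::` is not a valid plain scalar at that position in flow context, hence "did not find expected node content". Quoting the value turns it into an ordinary string, which should at least get past the parser. A sketch, assuming `"[::]"` is the bind address Invidious expects (the unbracketed form `"::"` may also work; which one Invidious accepts is a separate question from the YAML error):

```yaml
# Invalid: YAML reads "[" as the start of a flow sequence,
# and "::" is not a valid node inside it.
# host_binding: [::]

# Valid: the value is now a plain string.
host_binding: "[::]"
```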
Besides that, my instance stops working from time to time, showing an internal server error when loading videos. I don't know what's wrong here, but it seems that the IPv6 rotator doesn't really work, because it throws an error roughly every second time.
None of those problems occurred when I was on IPv4 only.
Is there anything I need to redo in the netplan YAML? Because there is a fixed IPv6 address assigned which always shows in ip addr:
```yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 188.68.48.233/22
        - 2a03:4000:6:d059:988b:69ff:fe8c:52dc/64
      match:
        macaddress: 9a:8b:69:8c:52:dc
      routes:
        - to: default
          via: 188.68.48.1
        - to: default
          via: fe80::1
          on-link: true
```
Current situation without host_binding: [::]
2024-06-08 08:00:43 UTC [info] 200 GET /feed/popular 3.44ms
2024-06-08 08:00:44 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:00:44 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 47.56ms
2024-06-08 08:00:48 UTC [info] 200 GET /api/v1/stats 142.69µs
2024-06-08 08:00:49 UTC [info] 200 GET /api/v1/stats 83.92µs
2024-06-08 08:00:50 UTC [info] 200 GET /css/pure-min.css?v=eda7444 705.8µs
2024-06-08 08:00:50 UTC [info] 200 GET /css/default.css?v=eda7444 591.28µs
2024-06-08 08:00:50 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:00:50 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 65.51ms
2024-06-08 08:00:51 UTC [info] 200 GET /feed/popular 2.14ms
2024-06-08 08:00:53 UTC [info] 302 GET / 74.25µs
2024-06-08 08:00:53 UTC [info] 200 POST /feed/webhook/v1:1717757778:fc3151fa:8da13b7b997e101e24101763e570a780769a2452 116.9µs
2024-06-08 08:00:54 UTC [error] /feed/webhook/v1:1717404241:aab13105:bf81d33812bf58e836dce838462439713ca4a97a : Invalid signature
2024-06-08 08:00:54 UTC [info] 200 POST /feed/webhook/v1:1717404241:aab13105:bf81d33812bf58e836dce838462439713ca4a97a 321.06µs
2024-06-08 08:00:54 UTC [info] 200 POST /feed/webhook/v1:1717750181:b9b9aa78:480d08f778984a6704becffb7417fca826ca10c3 100.74µs
2024-06-08 08:00:59 UTC [info] 200 POST /feed/webhook/v1:1717515483:aaca537b:f7541e3c75ed26bd3a075c760428d3762dc6f0d6 147.9µs
2024-06-08 08:01:01 UTC [error] /feed/webhook/v1:1717403601:678a6f7a:5a22a3d54871323c81d402bcccfebba6d4cf728f : Invalid signature
2024-06-08 08:01:01 UTC [info] 200 POST /feed/webhook/v1:1717403601:678a6f7a:5a22a3d54871323c81d402bcccfebba6d4cf728f 240.93µs
2024-06-08 08:01:04 UTC [info] 200 HEAD / 146.06µs
2024-06-08 08:01:10 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:01:10 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 97.92ms
2024-06-08 08:01:10 UTC [info] 302 GET / 71.52µs
2024-06-08 08:01:10 UTC [info] 200 GET /feed/popular 2.24ms
2024-06-08 08:01:13 UTC [info] 200 GET /feed/popular 4.17ms
2024-06-08 08:01:14 UTC [error] get_video: eBGIQ7ZuuiU : This helps protect our community. Learn more
2024-06-08 08:01:14 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:01:14 UTC [info] 500 GET /watch?v=eBGIQ7ZuuiU 54.63ms
2024-06-08 08:01:19 UTC [info] 200 GET /api/v1/stats 97.16µs
2024-06-08 08:01:22 UTC [info] 200 GET /api/v1/stats 154.04µs
2024-06-08 08:01:25 UTC [warn] i18n: Missing translation key "This helps protect our community. Learn more"
2024-06-08 08:01:25 UTC [info] 500 GET /latest_version?id=BBwi56Xj1ro&itag=22&local=true 60.33ms
Just realized it's a global problem, so ignore what I said about the internal server error. The logs when activating host_binding: [::]
can be seen in the post above.
Ok thanks for the investigation. I'll do my own very soon hopefully.
Hello guys, this is not really a bug, but more of a setup problem we have on our instance. We run a public instance (https://invidious.yourdevice.ch) as a Docker deployment with multiple containers restarting from time to time, as suggested in the docs. We also added the http3 proxy and the IPv6 rotator to the setup.
When I check the logs of the nginx container (the invidious-nginx-1 container), it's full of entries like this:
2024/06/03 07:57:55 [error] 29#29: *376409 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /feed/popular HTTP/1.1", upstream: "http://[2001:db9::5]:3000/feed/popular", host: "invidious.yourdevice.ch", referrer: "https://invidious.yourdevice.ch/"
And I mean really full. We don't have problems on the instance: videos load fast, everything plays fast, no problems at all. Still, those logs bother me a bit. It seems like the nginx container can't reach the invidious containers? But if that were so, the server wouldn't work at all. Can you guys give me a hint?
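One workaround sometimes used in Docker setups, independent of any `host_binding` change, is to stop nginx from ever dialing the IPv6 addresses: resolve the upstream through Docker's embedded DNS (127.0.0.11) with AAAA lookups disabled, so only the containers' IPv4 addresses are used. A hedged sketch, assuming the compose service is named `invidious` and listens on port 3000 (adjust names and ports to your setup):

```nginx
# Inside the server block that proxies to Invidious.
# ipv6=off makes nginx request only A records from Docker's DNS.
resolver 127.0.0.11 ipv6=off valid=10s;

# Using a variable forces nginx to re-resolve the name at request time
# instead of caching the addresses at startup.
set $invidious_upstream http://invidious:3000;

location / {
    proxy_pass $invidious_upstream;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```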
@unixfox @bugmaschine @perennialtech ?
This is our setup:
Nginx Reverse Proxy:
This is the nginx setup for the invidious-nginx container:
This is our docker-compose.yml
/etc/docker/daemon.json
/opt/http3-ytproxy
to www-data:www-data
Thanks for having a look!
Cheers guys