Digging into cacheable-lookup to see if I can spot something.
Just to confirm, the only difference that makes sense is the cacheable-lookup addition:
https://github.com/Fallenbagel/jellyseerr/compare/v1.9.0...v1.9.1
I tried adding trace to the logs but didn't get any additional information.
Maybe if we could set https://www.npmjs.com/package/cacheable-lookup#servers I could check whether it's somehow defaulting away from the k8s DNS.
Alternatively, caching is handled within the CoreDNS of the Kubernetes cluster, so being able to disable cacheable-lookup would possibly work as well.
> Maybe if we could set https://www.npmjs.com/package/cacheable-lookup#servers I could check whether it's somehow defaulting away from the k8s DNS.
I can push a preview so you can set the DNS servers of cacheable-lookup to try to debug your config, but the addition of cacheable-lookup should not have changed the DNS server resolution, since it uses the same mechanism as Node.js.
The goal of cacheable-lookup is to respect the TTL of DNS entries, because Node.js doesn't, so just disabling it should not be the solution.
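For context, the usual way cacheable-lookup gets wired into an application is to install it on the HTTP agents so requests resolve through its caching resolver. This is only a minimal sketch of the documented usage, not necessarily exactly how jellyseerr integrates it:

```ts
import http from 'node:http';
import https from 'node:https';
import CacheableLookup from 'cacheable-lookup';

// Caching resolver that honours DNS TTLs, which plain dns.lookup
// (getaddrinfo) does not.
const cacheable = new CacheableLookup();

// Route requests made through the global agents via the caching resolver.
cacheable.install(http.globalAgent);
cacheable.install(https.globalAgent);
```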
I just added a setting to override the DNS servers in the test-custom-dns-servers branch, or in the preview-dns-servers tag.
@s3than could you test it and see if you can debug your config to find where the issue is coming from?
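At the library level, overriding the servers would look roughly like this. This is a sketch only; the actual setting name and wiring in that branch may differ, and the address shown is the cluster DNS ClusterIP from this setup:

```ts
import CacheableLookup from 'cacheable-lookup';

const cacheable = new CacheableLookup();

// Point the internal resolver at the cluster DNS service instead of
// whatever it picked up by default.
cacheable.servers = ['10.43.0.10'];
```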
New image, no settings changes:

```
2024-06-18T20:22:16.145Z [error][Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: connect ETIMEDOUT 199.115.116.216:8096
```

New image, DNS servers changed:

```
2024-06-18T20:22:16.145Z [error][Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: connect ETIMEDOUT 199.115.116.216:8096
```
I then changed the address for Jellyfin to a full DNS name and reverted the DNS servers setting...

- https://jelly-local.tcolbert.net works
- http://jellyfin.media:8096 does not work
- http://jellyfin.media.svc.cluster.local:8096 works
The resolv.conf on the container shows:

```
k exec -it jellyseerr-67c4b5bd78-pthxs -- cat /etc/resolv.conf
search arrs.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
```
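One way to narrow down where the short name diverges is to compare Node's two resolution paths from inside the pod. This is only a diagnostic sketch using the hostname from this setup: dns.lookup goes through getaddrinfo, which applies the search domains and ndots above, while dns.resolve4 queries the nameserver directly, which is the path cacheable-lookup builds on:

```ts
import dns from 'node:dns';

const name = 'jellyfin.media';

// getaddrinfo path: applies /etc/resolv.conf search domains and ndots,
// so the short service name should expand to the in-cluster FQDN.
dns.lookup(name, (err, address) => {
  console.log('dns.lookup   ->', err ? err.code : address);
});

// Resolver path (what cacheable-lookup is built on): sends the query
// straight to the nameserver, where search-domain handling can differ.
dns.resolve4(name, (err, addresses) => {
  console.log('dns.resolve4 ->', err ? err.code : addresses);
});
```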
Additionally, I have no idea what the IP 199.115.116.216 is or where it's coming from.
I can confirm that, with or without the server change, it appears to contact the DNS server at 10.43.0.10, so you can drop this change.
I'm happy enough to use the fully qualified DNS name for the cluster at the moment. I'll see if I can find anything in the cacheable-lookup library that would cause this.
Maybe a note somewhere that tells people to use the fully qualified DNS name in Kubernetes would be okay to start with?
Additional investigation: the container itself can resolve it with no problems.

```
k exec -it jellyseerr-67c4b5bd78-pthxs -- ping jellyfin.media
PING jellyfin.media (10.43.138.175): 56 data bytes
```
Additional strangeness: the DNS addresses for Radarr and Sonarr are http://radarr:xxxx and http://sonarr:xxxx, and these work as expected :|
> Maybe a note somewhere that tells people to use the fully qualified DNS name in Kubernetes would be okay to start with?
Actually, I am not too happy with the cacheable-lookup implementation, as it seems to cause more critical harm than the not-so-critical DNS-spamming issue it fixes. So we're currently looking into removing and replacing it.
Thanks for investigating, and for the work you are doing on this project.
:tada: This issue has been resolved in version 2.0.0 :tada:
The release is available on:
v2.0.0
Your semantic-release bot :package::rocket:
Description
With version 1.9.0 the following log is recorded.
With versions starting at 1.9.1 I get the following log.
The internal DNS address is
jellyfin.media:8096
Version
1.9.1+
Steps to Reproduce
Screenshots
No response
Logs
No response
Platform
desktop
Device
All
Operating System
linux
Browser
Firefox
Additional Context
No response
Code of Conduct