**Update:** This issue is still occurring. I have now simplified the setup as much as possible by removing DNS records, Tailscale, and TLS from the equation, unfortunately with no change.
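For reference, a minimal sketch of the simplified setup (the `COLLECTOR_API_ENDPOINT` variable, image tag, and device/cap flags follow Scrutiny's docs; the IP, port, and device here are placeholders, not my actual values):

```shell
# Run the collector pointed straight at the hub over plain HTTP,
# with Tailscale, DNS names, and TLS all out of the picture.
docker run -d --name scrutiny-collector \
  --cap-add SYS_RAWIO \
  --device=/dev/sda \
  -v /run/udev:/run/udev:ro \
  -e COLLECTOR_API_ENDPOINT=http://198.51.100.10:8080 \
  ghcr.io/analogj/scrutiny:master-collector
```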
I manually installed curl and dnsutils inside the collector container; the output of nslookup is correct, but all attempts at connecting via netcat simply fail.
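Roughly what I ran inside the container (assuming the image is Debian-based, which matches apt being available; the hostname is a stand-in for my actual endpoint):

```shell
# Shell into the running collector and install the diagnostic tools;
# everything after the exec runs inside the container.
docker exec -it scrutiny-collector /bin/bash
apt-get update && apt-get install -y curl dnsutils netcat-openbsd

# DNS resolution works from inside the container...
nslookup scrutiny.example.com

# ...but a raw TCP connection to the hub times out.
nc -vz -w 5 scrutiny.example.com 443
```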
As a final test, I installed the collector manually on the remote host and have zero issues sending metrics to scrutiny-web running in Docker. This at least isolates the issue to the remote collector container.
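A rough sketch of that manual install, following Scrutiny's manual-install instructions (the release asset name and the `--api-endpoint` flag are from those docs; the endpoint placeholder matches the simplified HTTP setup above):

```shell
# Download the standalone collector binary and run it directly on the host.
wget -O /opt/scrutiny/bin/scrutiny-collector-metrics \
  https://github.com/AnalogJ/scrutiny/releases/latest/download/scrutiny-collector-metrics-linux-amd64
chmod +x /opt/scrutiny/bin/scrutiny-collector-metrics

# The same kind of request that times out from inside the container
# completes without issue when run from the host itself.
/opt/scrutiny/bin/scrutiny-collector-metrics run \
  --api-endpoint http://198.51.100.10:8080 --debug
```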
Closing this out as it appears to be entirely related to Tailscale, unfortunately. I thought I had ruled it out, but clearly not. https://github.com/tailscale/tailscale/issues/12070#issuecomment-2102571116
**Describe the bug**
I have been using Scrutiny for about a year with no issues, but on May 15th I noticed that my remote collector was no longer updating metrics in my dashboard. Upon reviewing that collector's logs, I discovered that all the update attempts were failing with an `i/o timeout`. I attempted to manually kick the collector off with `docker exec scrutiny-collector /opt/scrutiny/bin/scrutiny-collector-metrics run --debug`, but the same failure occurs. I then attempted to send the same request manually via curl and had no issues, so it appears that something within the collector image is causing this behavior, but that is only an assumption at this time.
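For illustration, roughly the manual request that succeeds from the host (the `/api/health` path is Scrutiny's health-check endpoint; I'm assuming HTTPS against the same Tailscale address used in the netcat test below):

```shell
# From the remote host itself the hub answers immediately; only
# requests originating inside the collector container time out.
curl -vk https://100.97.12.45/api/health
```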
On top of curl, I can connect via netcat with no issues:

```shell
nc -v 100.97.12.45 443
```

working curl *(output screenshot omitted)*
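For comparison, a successful probe from the host prints something like the following (OpenBSD netcat wording; other netcat flavors phrase it differently), while the same command inside the container just hangs until it times out:

```
Connection to 100.97.12.45 443 port [tcp/https] succeeded!
```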
Attempts at fixing this:
**Expected behavior**
Remote collectors successfully send device metrics to the hub container.
**Log Files**
Collector Debug Logs *(collapsed)*

**Compose**
*(collapsed)*

**Docker Info**
*(collapsed)*