bielej-oxla opened this issue 1 week ago · Open
Hello, have you contacted our support about these random timeouts or tried the Twingate subreddit? That would be the best route to get help with this issue.
Birol
Thanks for the reply. I have copied my issue to Reddit here: https://old.reddit.com/r/twingate/comments/1gu77u9/random_timeout_issues/
We've been using Twingate for some time now; recently we've started to see an increased number of I/O timeouts. The setup is as follows:

- Headless clients are set up with `echo $INPUT_SERVICE_KEY | sudo twingate setup --headless=-` (the issue is also relevant for the non-headless clients); see the fuller snippet below.
- Clients report an `online` status, yet connections fail with `dial tcp 100.109.XXX.XXX:443: i/o timeout`.
- Helm chart version: 0.1.24
- Connector version: v1.69.0 - v1.72.0 (depending on environment)
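For context, the headless client in our workflow runners is brought up roughly like this (a simplified sketch; `twingate start` / `twingate status` are the standard Linux client commands, and `INPUT_SERVICE_KEY` holds the service account key):

```bash
# Register the headless client with the service account key passed into the job
echo "$INPUT_SERVICE_KEY" | sudo twingate setup --headless=-

# Start the client; "twingate status" reports "online" even on runners that
# later hit the i/o timeouts described above
sudo twingate start
twingate status
```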
Logs are constantly filled with messages like:

```
{"error":true,"status":403,"service":"Access Manager","message":"Token is expired."}
```

The log level (`TWINGATE_LOG_LEVEL`) is set to 3. Despite that, the Twingate Control Panel claims that all connectors are `online` (no notifications about connectors being down), and Kubernetes claims that all 3 connectors (not replicas - different deployments) are healthy/ready (this is related to https://github.com/Twingate/helm-charts/issues/42).
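To give a sense of the volume, this is roughly how we count those messages per connector (the namespace and deployment names below are placeholders for our actual ones):

```bash
# Count "Token is expired" 403s over the last hour for each connector deployment
for deploy in twingate-connector-a twingate-connector-b twingate-connector-c; do
  echo "== ${deploy} =="
  kubectl -n twingate logs "deploy/${deploy}" --since=1h \
    | grep -c 'Token is expired' || true
done
```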
Without proper health checks and with random errors in the logs, it's not possible to monitor the Connectors reliably. We're getting notified by our users that their GitHub Workflows are failing with I/O timeouts.
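The closest thing we have to a health check today is an ad-hoc probe run from a machine with the client connected, along these lines (the resource host below is a placeholder), which is clearly not a substitute for proper health checks on the Connectors themselves:

```bash
# Try to open a TCP connection to a Twingate-protected resource; alert if it
# cannot be established within 5 seconds
RESOURCE_HOST="internal-service.example"   # placeholder for one of our resources
if ! timeout 5 bash -c "exec 3<>/dev/tcp/${RESOURCE_HOST}/443"; then
  echo "failed to reach ${RESOURCE_HOST}:443 via Twingate" >&2
  exit 1
fi
echo "connection ok"
```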
We could use some guidance on solving the timeout issue.