We've been using Twingate for some time now, and recently we've started to see an increased number of I/O timeouts.
The setup is as follows:
GitHub Actions running on a bare-metal runner with Docker
Headless Twingate Client configured using echo $INPUT_SERVICE_KEY | sudo twingate setup --headless=- (the issue also affects non-headless clients; see the sketch after this list)
Extra checks that the client reports an online status before the workflow proceeds
Connections made to an EKS cluster with a private Kube API
Helm chart version: 0.1.24; Connector version: v1.69.0 - v1.72.0 (depending on environment)
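For context, the client bring-up on the runner looks roughly like the following. This is a minimal sketch: it assumes the Linux client's twingate start and twingate status commands, and that twingate status prints "online" once connected (the exact wording may vary by client version).

    # Configure the headless client from the service key provided to the workflow
    echo "$INPUT_SERVICE_KEY" | sudo twingate setup --headless=-
    sudo twingate start

    # Extra check for online status: wait up to ~60s for the client to report "online"
    for i in $(seq 1 30); do
        if twingate status | grep -q "online"; then
            echo "Twingate client is online"
            break
        fi
        sleep 2
    done

Even with this check passing, the timeouts described below still show up later in the job.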
These connections often fail with dial tcp 100.109.XXX.XXX:443: i/o timeout.
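For illustration, this is how the failure typically surfaces from kubectl against the private API endpoint (a hypothetical reproduction; the masked IP is the resource address from above, and the timeout flag is only there to fail fast):

    # Hypothetical reproduction against the private EKS API reached via Twingate
    kubectl --request-timeout=15s get nodes
    # Unable to connect to the server: dial tcp 100.109.XXX.XXX:443: i/o timeout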
Logs are constantly filled with {"error":true,"status":403,"service":"Access Manager","message":"Token is expired."} messages. The log level (TWINGATE_LOG_LEVEL) is set to 3. Despite those errors, the Twingate Control Panel claims that all connectors are online (no notifications about connectors being down), and Kubernetes reports all 3 connectors (not replicas, but separate deployments) as healthy/ready (this is related to https://github.com/Twingate/helm-charts/issues/42).
Without proper health checks, and with these intermittent errors in the logs, it's not possible to monitor the Connectors reliably; at the moment we only find out when users report that their GitHub Workflows are failing with I/O timeouts.
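The closest thing to a health check we can script in the meantime is scraping the connector logs for those Access Manager errors. The sketch below is only an assumption-based workaround: the namespace and deployment name are placeholders, and it presumes the connector keeps emitting the 403 "Token is expired." lines while it is in this broken state.

    # Hypothetical log-based check: flag a connector as unhealthy if it logged
    # "Token is expired." in the last 10 minutes (namespace and deployment names are placeholders)
    errors=$(kubectl -n twingate logs deploy/twingate-connector --since=10m | grep -c "Token is expired")
    if [ "$errors" -gt 0 ]; then
        echo "Connector looks unhealthy: $errors token-expired errors in the last 10 minutes"
        exit 1
    fi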
We could use some guidance on solving the timeout issue.