Twingate / helm-charts

Official Twingate Helm Charts

Random timeout issues #53

Open bielej-oxla opened 1 week ago

bielej-oxla commented 1 week ago

We've been using Twingate for some time now, and recently we've started to see an increased number of I/O timeouts. The setup is as follows:

Helm chart version: 0.1.24
Connector version: v1.69.0 - v1.72.0 (depending on environment)
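For reference, this is roughly how we confirm the deployed chart and connector image versions (the `twingate` namespace is a placeholder for our actual one):

```bash
# List installed Helm releases to confirm the chart version (0.1.24 in our case)
helm list -n twingate

# Print each connector deployment's name and image tag
kubectl get deployments -n twingate \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```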

The logs are constantly filled with `{"error":true,"status":403,"service":"Access Manager","message":"Token is expired."}` messages. The log level (`TWINGATE_LOG_LEVEL`) is set to 3. Despite that, the Twingate Control Panel claims that all connectors are online (no notifications about connectors being down).
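To get a rough sense of how frequent these errors are, we count them per deployment with something like the loop below (deployment and namespace names are examples, not the chart's defaults):

```bash
# Count "Token is expired" errors per connector deployment over the last 24h
for d in connector-a connector-b connector-c; do
  echo -n "$d: "
  kubectl logs -n twingate deploy/"$d" --since=24h 2>/dev/null \
    | grep -c 'Token is expired'
done
```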

Kubernetes reports all 3 connectors (separate deployments, not replicas of one) as healthy/ready (this is related to https://github.com/Twingate/helm-charts/issues/42).
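As far as we can tell, the connector pods don't carry a probe that reflects actual tunnel health; this is how one can check whether a liveness probe is defined at all (prints nothing if none is set; names are placeholders again):

```bash
# Show the liveness probe, if any, on a connector deployment
kubectl get deploy -n twingate connector-a \
  -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'
```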

Without proper health checks and with random errors in the logs, it's not possible to monitor the Connectors reliably. Our users are notifying us that their GitHub Workflows are failing with I/O timeouts.
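In the meantime we're considering a synthetic check along these lines: periodically curl an internal service that is only reachable through Twingate and alert on failures or timeouts (the hostname and 5-second threshold below are made up for illustration):

```bash
# Hypothetical synthetic probe: hit an internal host reachable only via
# Twingate; a failure or a response slower than 5s counts as a connector problem.
INTERNAL_URL="https://internal.example.corp/healthz"   # placeholder
if ! curl --silent --fail --max-time 5 "$INTERNAL_URL" > /dev/null; then
  echo "Twingate connectivity check failed or timed out" >&2
  exit 1
fi
```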

We could use some guidance on solving the timeout issue.

linear[bot] commented 1 week ago

OSS-50 Random timeout issues

bertekintw commented 1 week ago

Hello, have you contacted our support about these random timeouts, or tried the Twingate Reddit? That would be the best route to get help with this issue.

Birol

bielej-oxla commented 1 week ago

Thanks for the reply. I have copied my issue to Reddit here: https://old.reddit.com/r/twingate/comments/1gu77u9/random_timeout_issues/