Closed ghost closed 2 weeks ago
Thank you for the report. I haven't seen this before on GKE or on a local deployment. At first sight it looks like a bug in the fabric8 kubernetes client, because I don't think a watch should just crash like this. In the upcoming 0.9 release we will update the fabric8 client from version 5.x to 6.x, so maybe this will help.
Besides that, we may try to restart the watch in onClose rather than stopping the application.
I think the initial reason for stopping the operator when we get an exception from a watch was that we might have missed events. On a restart we would check all resources from scratch.
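The restart-instead-of-exit idea could look roughly like the sketch below. This is not the actual theia-cloud or fabric8 code: `startWatch` and `resync` are hypothetical stand-ins for establishing the watch and re-checking all resources, just to illustrate resyncing before each restart so that events missed while the watch was down are not lost.

```java
import java.util.function.BooleanSupplier;

public class WatchRestarter {

    /**
     * Try to (re)start the watch until it comes up cleanly, resyncing all
     * resources before each attempt so missed events are compensated for.
     * Returns the number of start attempts that were needed.
     *
     * startWatch and resync are illustrative placeholders, not fabric8 API.
     */
    public static int runWithRestarts(BooleanSupplier startWatch,
                                      Runnable resync,
                                      int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            resync.run();               // re-check all resources from scratch
            if (startWatch.getAsBoolean()) {
                return attempt;         // watch established successfully
            }
            // watch closed unexpectedly: loop and restart instead of exiting
        }
        throw new IllegalStateException("watch could not be restarted");
    }

    public static void main(String[] args) {
        // Simulate a watch that fails twice before coming up.
        final int[] failuresLeft = {2};
        int attempts = runWithRestarts(
                () -> failuresLeft[0]-- <= 0,
                () -> System.out.println("resync"),
                5);
        System.out.println("attempts=" + attempts);
    }
}
```

In the real fabric8 6.x client, the equivalent hook would be the Watcher's onClose callback (or, alternatively, switching to a SharedInformer, which handles reconnects and resyncs internally).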
@qiaozhi92 Could you update your deployment to version 0.9.1 and check if this error still occurs? We experienced this before, but since the update we haven't had any issues.
This issue is stale because it has been open for 180 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Describe the bug
The operator pod restarts roughly every hour.
Log before operator pod restart:
Expected behavior
The operator pod should not restart this frequently, just like the service and landing-page pods do not.
Cluster provider
No response
Version
theia-cloud 0.8.0
Additional information
No response