gwenael-lebarzic opened 1 month ago
Hello. Could someone take a look at this problem?
Up
Same for me
Hello.
Is it possible to get a status on this behaviour, please?
Best regards.
Started to observe the same issue once our cluster reached 5k secrets across all namespaces. My guess is that the Kubernetes API paginates the `ListSecretForAllNamespacesAsync` responses and the code does not handle pagination.
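For what it's worth, here is a minimal sketch of what handling the continue token could look like, assuming the official C# client (k8s.KubernetesClient, which exposes operations via `client.CoreV1` in recent versions). This is illustrative only, not Reflector's actual code:

```csharp
using k8s;
using k8s.Models;

var client = new Kubernetes(KubernetesClientConfiguration.BuildDefaultConfig());

string continueToken = null;
do
{
    // Request one page of secrets; the server can paginate large result
    // sets, returning a continue token in the list metadata.
    V1SecretList page = await client.CoreV1.ListSecretForAllNamespacesAsync(
        limit: 500,
        continueParameter: continueToken);

    foreach (V1Secret secret in page.Items)
    {
        // process each secret here (e.g. check its reflection annotations)
    }

    // A non-empty continue token means more pages remain.
    continueToken = page.Metadata?.ContinueProperty;
} while (!string.IsNullOrEmpty(continueToken));
```

If the code only ever consumed the first page, secrets beyond the page boundary would be silently ignored, which would match the symptom appearing above 5k secrets.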
In the Kubernetes cluster where we have this problem, there are only 62 secrets in total.
I have 4 secrets in my cluster and the issue still occurs.
Did you try setting the watcher timeout?
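To clarify what I mean: with the official C# client, a watch can be given a server-side timeout so that a silently-dead connection is closed and re-established instead of hanging forever. A rough sketch (illustrative only; the calls assume k8s.KubernetesClient and this is not Reflector's actual configuration):

```csharp
using k8s;
using k8s.Models;

var client = new Kubernetes(KubernetesClientConfiguration.BuildDefaultConfig());

while (true)
{
    // timeoutSeconds asks the API server to terminate the watch after the
    // given interval, so the client cannot block forever on a dead stream.
    var watchResponse = client.CoreV1.ListSecretForAllNamespacesWithHttpMessagesAsync(
        watch: true,
        timeoutSeconds: 300);

    await foreach (var (eventType, secret) in
        watchResponse.WatchAsync<V1Secret, V1SecretList>())
    {
        // handle Added / Modified / Deleted events for each secret
    }

    // When the timeout fires, the loop simply re-establishes the watch.
}
```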
Hello.
As issue #341 is closed, I'm opening a new one.
As described in #341, we encountered the same problem on the 7th of October 2024: Reflector stopped replicating secrets and no longer logged anything (neither the namespace watcher, the configmap watcher, nor the secret watcher).
Here is the end of the log:
After this point, there were no logs at all. Concerning the metrics, the reflector pod's CPU usage was almost zero (which seems normal, since it wasn't doing anything anymore). There was nothing unusual about memory usage just before the incident.
Here is some information about the versions:
Image: emberstack/kubernetes-reflector:7.1.256
Kubernetes: GKE 1.29.8-gke.1096000
Cloud provider: GCP
Is it possible to solve this problem, please? It unfortunately makes the Reflector solution unstable :(