This is a docker container intended to run inside a kubernetes cluster to collect config maps with a specified label and store the included files in a local folder.
sync stops working on k8s API errors, liveness check needed? #338
When we get k8s API errors, the sync stops working silently, like:
calling kubernetes: (410) Reason: Expired: The resourceVersion for the provided watch is too old.
Afterwards, at debug level you only see the message:
Performing watch-based sync on secret resources: {'label_selector': 'grafana_dashboard_v10=1', 'timeout_seconds': '300', '_request_timeout': '330'}
while the corresponding message for configmap resources stops appearing:
Performing watch-based sync on configmap resources: {'label_selector': 'grafana_dashboard_v10=1', 'timeout_seconds': '300', '_request_timeout': '330'}
as do the other debug messages related to configmaps. We only have matching configmaps in this cluster. It looks like the watcher for configmaps is dead, although the process itself is still running.
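For reference, the watch could in principle recover from the 410 on its own by restarting from a fresh list instead of dying silently. A minimal sketch, assuming the official `kubernetes` Python client; the namespace, label selector and `handle_event` handler are placeholders for illustration, not the sidecar's actual code:

```python
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_incluster_config()
v1 = client.CoreV1Api()

def handle_event(event):
    # Placeholder: the real sidecar writes the ConfigMap contents to disk here.
    print(event["type"], event["object"].metadata.name)

def watch_configmaps(namespace, label_selector):
    w = watch.Watch()
    while True:
        try:
            # No resourceVersion is pinned, so every restart re-lists from the
            # current state and an expired version cannot kill the loop permanently.
            for event in w.stream(v1.list_namespaced_config_map,
                                  namespace=namespace,
                                  label_selector=label_selector,
                                  timeout_seconds=300,
                                  _request_timeout=330):
                handle_event(event)
        except ApiException as e:
            if e.status == 410:
                # 410 Gone: the watch's resourceVersion expired; start a new
                # watch instead of giving up.
                continue
            raise
```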
Would it make sense to introduce a liveness check (like a dead man's switch), so that the whole container gets restarted when such problems occur?
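To illustrate the dead-man-switch idea (purely a sketch, not an existing sidecar feature): the sync loop could touch a heartbeat file after every successful iteration, and an exec livenessProbe on the container could check that the file is younger than some threshold, so Kubernetes restarts the container once the watch has silently died. The file path and threshold below are made up for illustration:

```python
import os

HEARTBEAT_FILE = "/tmp/sidecar-heartbeat"  # hypothetical path

def touch_heartbeat():
    # Called at the end of every successful watch iteration. The container's
    # livenessProbe can then verify the file's age, e.g. with an exec probe:
    #   sh -c 'test $(($(date +%s) - $(stat -c %Y /tmp/sidecar-heartbeat))) -lt 600'
    # and restart the container when the heartbeat goes stale.
    with open(HEARTBEAT_FILE, "a"):
        pass
    os.utime(HEARTBEAT_FILE, None)
```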
container yaml: