jupyterhub / mybinder.org-deploy

Deployment config files for mybinder.org
https://mybinder-sre.readthedocs.io/en/latest/index.html
BSD 3-Clause "New" or "Revised" License

GESIS BinderHub server was accumulating Running pods that were more than 1 day old #2686

Open rgaiacs opened 1 year ago

rgaiacs commented 1 year ago

Around 2023-06-21 17:15 CEST, we launched a cron job to work around this problem.
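
As a rough illustration only (not the actual cron job), a cleanup like this could be scripted with the Kubernetes Python client, assuming the standard zero-to-jupyterhub `component=singleuser-server` label; the namespace and age threshold are placeholders:

```python
# Hypothetical cleanup job: delete singleuser pods older than MAX_AGE.
# The namespace, label selector, and threshold are assumptions based on a
# standard zero-to-jupyterhub deployment, not the exact job GESIS runs.
from datetime import datetime, timedelta, timezone

from kubernetes import client, config

NAMESPACE = "example-namespace"   # placeholder: the BinderHub user namespace
MAX_AGE = timedelta(days=1)       # pods older than this get deleted

config.load_kube_config()         # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

now = datetime.now(timezone.utc)
pods = v1.list_namespaced_pod(NAMESPACE, label_selector="component=singleuser-server")
for pod in pods.items:
    age = now - pod.metadata.creation_timestamp
    if age > MAX_AGE:
        print(f"deleting {pod.metadata.name} (age {age})")
        v1.delete_namespaced_pod(pod.metadata.name, NAMESPACE)
```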

[Screenshot: Grafana "Overview" dashboard, 2023-06-21 17:21]

Further investigation is needed to identify the source of the problem.

minrk commented 1 year ago

OVH is seeing this, too. I suspect it's a recent update to jupyterhub/zero-to-jupyterhub that's causing something to get missed.

Two categories of problem to track down:

  1. JupyterHub/KubeSpawner is leaving orphan pods (i.e. the pod is running but JupyterHub doesn't have a record of it). Symptom: pods older than 6 hours do not have an associated user.
  2. Max-age culling is not working properly. This could be because of a bug in start-time reporting from JupyterHub, or a bug in jupyterhub-idle-culler not actually performing the max-age culling for some reason (a quick way to check the reported start times is sketched below).
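
A minimal sketch of how one might check (2), assuming the standard JupyterHub REST API `/users` endpoint and its `started` field; the hub URL, token handling, and max-age threshold are placeholders:

```python
# Hypothetical check for (2): ask the Hub which servers it thinks are running
# and when they started, then flag anything older than the configured max age.
import os
from datetime import datetime, timedelta, timezone

import requests
from dateutil.parser import isoparse

HUB_API = "https://hub.example.org/hub/api"   # placeholder hub URL
TOKEN = os.environ["JUPYTERHUB_API_TOKEN"]    # admin-scoped API token
MAX_AGE = timedelta(hours=6)                  # placeholder: the deployment's max age

resp = requests.get(f"{HUB_API}/users", headers={"Authorization": f"token {TOKEN}"})
resp.raise_for_status()

now = datetime.now(timezone.utc)
for user in resp.json():
    for name, server in (user.get("servers") or {}).items():
        started = server.get("started")
        if not started:
            print(f"{user['name']}/{name}: no start time reported")
        elif now - isoparse(started) > MAX_AGE:
            print(f"{user['name']}/{name}: started {now - isoparse(started)} ago, past max age")
```
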
rgaiacs commented 1 year ago

Thanks for the information about OVH.

minrk commented 1 year ago

I've looked through some logs, and OVH definitely has quite a few orphan pods. So I think a change in kubespawner is making it possible to leave orphaned pods behind, likely by failing to clean up after a failed start (hard to say precisely, because OVH has no log retention, so we can only look back into the very recent past). OVH is also showing occasional reflector failure events, which may well be related, because deleting a pod that is not in the reflector will skip the deletion.

Unfortunately, JupyterHub doesn't give Spawners a hook to look for orphaned resources.

Here's a notebook to collect and view (and clean up, if you want) orphaned pods on a cluster.
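
For anyone without the notebook at hand, a rough sketch of the kind of cross-check it could perform, assuming the standard zero-to-jupyterhub `component=singleuser-server` label and KubeSpawner's `hub.jupyter.org/username` annotation (namespace, hub URL, and token handling are placeholders, and the username comparison ignores KubeSpawner's escaping):

```python
# Hypothetical orphan-pod check: list singleuser pods and cross-reference
# them with the users the Hub knows about. Namespace, hub URL, and token
# are placeholders.
import os

import requests
from kubernetes import client, config

NAMESPACE = "example-namespace"               # placeholder user namespace
HUB_API = "https://hub.example.org/hub/api"   # placeholder hub URL
TOKEN = os.environ["JUPYTERHUB_API_TOKEN"]

resp = requests.get(f"{HUB_API}/users", headers={"Authorization": f"token {TOKEN}"})
resp.raise_for_status()
users_with_servers = {u["name"] for u in resp.json() if u.get("servers")}

config.load_kube_config()
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod(NAMESPACE, label_selector="component=singleuser-server")

orphans = [
    pod.metadata.name
    for pod in pods.items
    if (pod.metadata.annotations or {}).get("hub.jupyter.org/username")
    not in users_with_servers
]

print(f"{len(orphans)} orphaned pods")
for name in orphans:
    print(name)
    # v1.delete_namespaced_pod(name, NAMESPACE)  # uncomment to clean up
```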