Bug description
There might be more than one thing going on here, so happy to separate into multiple issues if appropriate.

EDIT: there are indeed two things going on; see this issue related to JupyterLab and failing to avoid culling active Lab servers. This issue should focus on `idle-culler` and the problem of configuring it to avoid culling named servers.

I am seeing named servers culled after the timeout interval even when that option is set to false. This occurs with terminal processes running and the browser/client disconnected (i.e., on overnight jobs).

My `config.yaml` file used to install the official helm chart (v0.11.1) has the following cull configs:

The hub logs confirm this:
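The original config snippet and hub logs are not reproduced here. For context, a typical cull block in a Zero to JupyterHub `config.yaml` looks roughly like the following; the values are illustrative assumptions, not the reporter's actual settings:

```yaml
# Illustrative cull block for the Zero to JupyterHub helm chart (v0.11.x).
# Values below are assumptions, not the reporter's actual config.
cull:
  enabled: true
  timeout: 3600              # seconds of inactivity before a server is culled
  every: 600                 # how often the culler checks for idle servers
  removeNamedServers: false  # intended to keep named servers from being removed
```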
Expected behaviour
Named servers would never be culled.
Servers with running terminal processes would not be culled.
Actual behaviour
After the configured interval, the culler kills the pods.
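For reference, the chart's cull settings translate into a hub-managed service running `jupyterhub-idle-culler`. A rough standalone equivalent in `jupyterhub_config.py` is sketched below; the flag values are assumptions for illustration, not taken from the report:

```python
import sys

# Sketch of the idle-culler as a hub-managed JupyterHub service.
# Flag values here are illustrative assumptions.
c.JupyterHub.services = [
    {
        "name": "jupyterhub-idle-culler",
        "admin": True,  # pre-RBAC JupyterHub; newer versions use scoped roles instead
        "command": [
            sys.executable,
            "-m",
            "jupyterhub_idle_culler",
            "--timeout=3600",    # cull after 1 hour of inactivity
            "--cull-every=600",  # check every 10 minutes
        ],
    }
]
```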
How to reproduce
See above snippet for the relevant portions of the `config.yaml` file.

Your personal set up
Running on AWS EKS with helm chart v0.11.1.
The `singleUser` image uses the base image `jupyter/datascience-notebook:python-3.8.8`.
Other images in helm:
Full environment
Configuration
Logs