Closed ospiegel91 closed 2 years ago
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:
Logs of jupyterhub showing that the pods terminated by the culler are essential.
kubectl logs deploy/hub
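To surface only the relevant lines in those hub logs, you can filter for culler and server-stop messages. This is just a sketch; the `-n jhub` namespace is an assumption, so adjust it to wherever your hub deployment lives:

```shell
# Show hub logs and keep only lines mentioning the culler or a stopped server.
# "-n jhub" is an assumed namespace; replace with your actual one.
kubectl logs deploy/hub -n jhub | grep -iE "cull|stopped"
```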
@consideRatio thank you for the prompt response.
The cull logs continuously receive activity from the ospiegel notebook until:
[W 2022-06-01 20:11:48.877 JupyterHub app:2151] User ospiegel91 server stopped with exit code: 1
to replicate:
while sleep 60; do echo "60 seconds passed"; done
The notebook lives past the tab exit, but exits at a seemingly random time afterwards.
It seems your server exits without involvement from the culler.
You need to inspect its logs to find out why the exit code became 1. If it were the culler, you would see a notice about the culler culling, not a warning.
Please refer to discourse.jupyter.org for further help at this point.
@consideRatio thanks again. Would I see the logs showing what is causing exit code 1 at the singleuser pod level?
I expect you to find logs for the user pod started by KubeSpawner under a pod named jupyter-<username>. So: kubectl logs jupyter-<username>, where you can also add --previous if the container has restarted and you want to see the logs of the container before it restarted.
I hope this helps you track down what's going on, good luck!
Bug description
With a long cull timeout of 3 days configured, JupyterHub singleuser pods get terminated after a short period of time.
Expected behaviour
Given the long cull timeout, I would expect the singleuser pod to remain alive for at least the specified timeout duration.
Actual behaviour
The JupyterLab singleuser pod is terminated unless the user is actively engaging with it in the browser tab session.
How to reproduce
Use these cull settings:
using the helm chart https://jupyterhub.github.io/helm-chart version 1.2.0
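The actual cull settings used are not included in the issue. As a hypothetical sketch, a 3-day idle timeout on this chart could be applied with `helm upgrade` using the chart's `cull.*` values (the release name `jhub` and namespace are assumptions):

```shell
# Hypothetical example: configure a 3-day (259200 s) cull timeout on an
# existing release of the 1.2.0 chart. Release and namespace are assumed.
helm upgrade jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --version 1.2.0 \
  --set cull.enabled=true \
  --set cull.timeout=259200 \
  --set cull.every=600
```

Note that `cull.timeout` measures idle time since last activity, so a server reporting activity (as the logs above show) should not be culled before the timeout elapses.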
Your personal set up
using the helm chart https://jupyterhub.github.io/helm-chart version 1.2.0, on an EKS cluster
Configuration
```python
# jupyterhub_config.py
```
Logs
```
# paste relevant logs here, if any
```