jupyter-on-openshift / jupyterhub-quickstart

OpenShift compatible version of the JupyterHub application.
Apache License 2.0

error starting server on minishift #14

Closed · jarcher closed this issue 5 years ago

jarcher commented 5 years ago

```
[W 2018-07-31 05:43:36.712 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:43:36.712 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:43:36.713 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[I 2018-07-31 05:43:36.713 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 109504.49ms
[I 2018-07-31 05:43:36.714 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 76344.35ms
[I 2018-07-31 05:43:36.714 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 43148.36ms
[I 2018-07-31 05:43:36.715 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 10099.33ms
[E 2018-07-31 05:43:48.436 JupyterHub gen:974] Exception in Future <Future finished exception=TimeoutError('pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!',)> after timeout
    Traceback (most recent call last):
      File "/opt/app-root/lib/python3.5/site-packages/tornado/gen.py", line 970, in error_callback
        future.result()
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
        raise self._exception
      File "/opt/app-root/lib/python3.5/site-packages/kubespawner/spawner.py", line 995, in start
        timeout=self.start_timeout
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
        raise self._exception
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
        result = coro.send(None)
      File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/utils.py", line 155, in exponential_backoff
        raise TimeoutError(fail_message)
    TimeoutError: pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!
```

```
[I 2018-07-31 05:44:18.543 JupyterHub log:158] 200 GET /hub/home (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 40.56ms
[I 2018-07-31 05:44:21.278 JupyterHub log:158] 302 GET /hub/spawn -> /user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 6.47ms
[I 2018-07-31 05:44:21.286 JupyterHub log:158] 302 GET /user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ -> /hub/user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (@::ffff:172.17.0.1) 0.63ms
[W 2018-07-31 05:44:31.300 JupyterHub base:679] User 2e8572d2-7acd-4944-8cec-6b5bd7d39eea is slow to start (timeout=10)
[I 2018-07-31 05:44:31.301 JupyterHub base:1016] 2e8572d2-7acd-4944-8cec-6b5bd7d39eea is pending spawn
[I 2018-07-31 05:44:31.302 JupyterHub log:158] 200 GET /hub/user/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/ (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 10009.35ms
[E 2018-07-31 05:46:20.682 JupyterHub user:474] Unhandled error starting 2e8572d2-7acd-4944-8cec-6b5bd7d39eea's server: pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!
[E 2018-07-31 05:46:20.717 JupyterHub gen:974] Exception in Future <Task finished coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /opt/app-root/lib/python3.5/site-packages/jupyterhub/handlers/base.py:619> exception=TimeoutError('pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!',)> after timeout
    Traceback (most recent call last):
      File "/opt/app-root/lib/python3.5/site-packages/tornado/gen.py", line 970, in error_callback
        future.result()
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
        raise self._exception
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
        result = coro.send(None)
      File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 626, in finish_user_spawn
        await spawn_future
      File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/user.py", line 486, in spawn
        raise e
      File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/user.py", line 406, in spawn
        url = await gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 358, in __iter__
        yield self  # This tells Task to wait for completion.
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 290, in _wakeup
        future.result()
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
        raise self._exception
      File "/opt/app-root/lib/python3.5/site-packages/kubespawner/spawner.py", line 995, in start
        timeout=self.start_timeout
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/futures.py", line 274, in result
        raise self._exception
      File "/opt/rh/rh-python35/root/usr/lib64/python3.5/asyncio/tasks.py", line 239, in _step
        result = coro.send(None)
      File "/opt/app-root/lib/python3.5/site-packages/jupyterhub/utils.py", line 155, in exponential_backoff
        raise TimeoutError(fail_message)
    TimeoutError: pod/jupyterhub-nb-2e8572d2-2d7acd-2d4944-2d8cec-2d6b5bd7d39eea did not start in 120 seconds!
```

```
[W 2018-07-31 05:46:20.717 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:46:20.718 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[W 2018-07-31 05:46:20.718 JupyterHub users:439] Stream closed while handling /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress
[I 2018-07-31 05:46:20.718 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 108968.22ms
[I 2018-07-31 05:46:20.719 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 75618.98ms
[I 2018-07-31 05:46:20.719 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 42533.58ms
[I 2018-07-31 05:46:20.720 JupyterHub log:158] 200 GET /hub/api/users/2e8572d2-7acd-4944-8cec-6b5bd7d39eea/server/progress (2e8572d2-7acd-4944-8cec-6b5bd7d39eea@::ffff:172.17.0.1) 9426.51ms
05:46:25.527 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-07-31 05:46:25.529 JupyterHub proxy:301] Checking routes
[I 2018-07-31 05:51:25.530 JupyterHub proxy:301] Checking routes
05:51:25.528 - info: [ConfigProxy] 200 GET /api/routes
05:56:25.527 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-07-31 05:56:25.529 JupyterHub proxy:301] Checking routes
06:01:25.526 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-07-31 06:01:25.528 JupyterHub proxy:301] Checking routes
```

GrahamDumpleton commented 5 years ago

This indicates that your Jupyter notebook instances are not starting up within the 2 minute default timeout.
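If the pods just need more time (for example, for a slow first image pull), the spawn timeout can be raised in the JupyterHub configuration. A sketch, assuming the standard KubeSpawner `start_timeout` setting; where this snippet belongs depends on how your deployment assembles `jupyterhub_config.py`:

```python
# jupyterhub_config.py -- sketch, assumes KubeSpawner is the spawner in use.
# Allow notebook pods up to 5 minutes (default is 120 seconds) to start,
# which can cover the initial pull of a large notebook image.
c.KubeSpawner.start_timeout = 300
```

Raising the timeout only helps when the pod is genuinely slow; if the pod never becomes ready at all, the spawn will still fail, just later.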

You would need to check the events for the project to work out whether there is a specific reason.
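For example, something like the following against the project hosting JupyterHub (these require a logged-in `oc` session; the label selector and pod name are illustrative, not from this report):

```shell
# Switch to the project where JupyterHub is deployed (name is illustrative).
oc project jupyterhub

# Recent events, newest last -- look for image pull failures,
# admission rejections, or scheduling problems on the notebook pod.
oc get events --sort-by='.lastTimestamp'

# Inspect the notebook pod itself; its status and event section
# usually name the exact reason it never became ready.
oc get pods
oc describe pod jupyterhub-nb-<user>   # substitute the real pod name
```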

At a guess, the reason is that the OpenShift cluster you are using doesn't have the `imagePolicy` admission plugin correctly configured. This has especially been a problem with clusters that have been upgraded from an older version. A Minishift environment currently has this problem as well, because automatic setup of `imagePolicy` was removed from `oc cluster up` and Minishift wasn't adding it where it needed to.

An issue describing this for Minishift can be found at:

The same issue can arise on a full Origin/OCP cluster as well, though.

It is necessary to check whether the master config file for the cluster is missing the required `imagePolicy` configuration, and to add it if so.
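For reference, the configuration in question looks roughly like the following in `master-config.yaml`. This is a sketch based on the OpenShift 3.x documentation for enabling image stream resolution; verify the exact form against the docs for your cluster version before applying it:

```yaml
# master-config.yaml (fragment) -- enables the ImagePolicy admission
# plugin so that image stream references in pods are resolved, which
# the notebook pods spawned by JupyterHub rely on.
admissionConfig:
  pluginConfig:
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        kind: ImagePolicyConfig
        # Rewrite image stream references to the resolved image so
        # plain Kubernetes objects (pods) can use image streams.
        resolveImages: AttemptRewrite
        executionRules:
        - name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: "true"
          skipOnResolutionFailure: true
```

After editing the master config, the master needs to be restarted for the change to take effect.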

GrahamDumpleton commented 5 years ago

Closing on the assumption that there is nothing more to be done here.