jupyterhub / zero-to-jupyterhub-k8s

Helm Chart & Documentation for deploying JupyterHub on Kubernetes
https://zero-to-jupyterhub.readthedocs.io

jhub on microk8s #1189

Closed mfhm closed 3 years ago

mfhm commented 5 years ago

I'm trying to install JupyterHub on a local machine running Ubuntu 18.04, using microk8s. Following the documentation, I successfully set up Helm. My config.yaml contains only the secret token for the http proxy, since the documentation requires it for setting up JupyterHub itself. After installing JupyterHub, the hub pod stays in a Pending state. Here is some information:

Name:               hub-756b8bb5c8-f7tbn
Namespace:          jhub
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=jupyterhub
                    component=hub
                    hub.jupyter.org/network-access-proxy-api=true
                    hub.jupyter.org/network-access-proxy-http=true
                    hub.jupyter.org/network-access-singleuser=true
                    pod-template-hash=756b8bb5c8
                    release=jhub
Annotations:        checksum/config-map: 9c6f8bf453fdb87c03dfadf04b3f419328180b3ce15040756cddfed52b9cdd0e
                    checksum/secret: 37af4a038e071dd3bf6caa955222f11abd09c7566be9d9fefc69f00d3bf28ad6
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/hub-756b8bb5c8
Containers:
  hub:
    Image:      jupyterhub/k8s-hub:0.8.0
    Port:       8081/TCP
    Host Port:  0/TCP
    Command:
      jupyterhub
      --config
      /srv/jupyterhub_config.py
      --upgrade-db
    Requests:
      cpu:     200m
      memory:  512Mi
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jhub
      POD_NAMESPACE:           jhub (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
    Mounts:
      /etc/jupyterhub/config/ from config (rw)
      /etc/jupyterhub/secret/ from secret (rw)
      /srv/jupyterhub from hub-db-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from hub-token-mngg6 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub-config
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-secret
    Optional:    false
  hub-db-dir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  hub-token-mngg6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-token-mngg6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  29s (x6 over 3m58s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

Name:               proxy-7b574749d8-lzxcd
Namespace:          jhub
Priority:           0
PriorityClassName:  <none>
Node:               fk06-jupyter/10.20.63.207
Start Time:         Tue, 12 Mar 2019 17:36:18 +0100
Labels:             app=jupyterhub
                    component=proxy
                    hub.jupyter.org/network-access-hub=true
                    hub.jupyter.org/network-access-singleuser=true
                    pod-template-hash=7b574749d8
                    release=jhub
Annotations:        checksum/hub-secret: 2e51e3730e653e8a5cb60fe89f3c17afe7dca3bd109c25772e469b95f8ea74c0
                    checksum/proxy-secret: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
Status:             Running
IP:                 10.1.1.142
Controlled By:      ReplicaSet/proxy-7b574749d8
Containers:
  chp:
    Container ID:  docker://fdace9264cffe1fb65678674df081e2364fec1d5f2588ec39040d5dba4f59aa0
    Image:         jupyterhub/configurable-http-proxy:3.0.0
    Image ID:      docker-pullable://jupyterhub/configurable-http-proxy@sha256:c36cf3cc1c99f59348a8d6f5f64752df3eb4d88df93a11bc4e00acf23dbecfba
    Ports:         8000/TCP, 8001/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      configurable-http-proxy
      --ip=0.0.0.0
      --api-ip=0.0.0.0
      --api-port=8001
      --default-target=http://$(HUB_SERVICE_HOST):$(HUB_SERVICE_PORT)
      --error-target=http://$(HUB_SERVICE_HOST):$(HUB_SERVICE_PORT)/hub/error
      --port=8000
    State:          Running
      Started:      Tue, 12 Mar 2019 17:36:26 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     200m
      memory:  512Mi
    Environment:
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gt4wd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-gt4wd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gt4wd
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason             Age                  From                   Message
  ----     ------             ----                 ----                   -------
  Normal   Scheduled          3m58s                default-scheduler      Successfully assigned jhub/proxy-7b574749d8-lzxcd to fk06-jupyter
  Normal   Pulling            3m57s                kubelet, fk06-jupyter  pulling image "jupyterhub/configurable-http-proxy:3.0.0"
  Normal   Pulled             3m51s                kubelet, fk06-jupyter  Successfully pulled image "jupyterhub/configurable-http-proxy:3.0.0"
  Normal   Created            3m51s                kubelet, fk06-jupyter  Created container
  Normal   Started            3m50s                kubelet, fk06-jupyter  Started container
  Warning  MissingClusterDNS  28s (x7 over 3m58s)  kubelet, fk06-jupyter  pod: "proxy-7b574749d8-lzxcd_jhub(f6323afc-44e4-11e9-9685-00505693673f)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.

As someone pointed out on Gitter, the issue is caused by a volume that is not being provided. Here is some more output:

$ kubectl describe pvc --namespace jhub
Name:          hub-db-dir
Namespace:     jhub
StorageClass:  
Status:        Pending
Volume:        
Labels:        app=jupyterhub
               chart=jupyterhub-0.8.0
               component=hub
               heritage=Tiller
               release=jhub
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason         Age                  From                         Message
  ----       ------         ----                 ----                         -------
  Normal     FailedBinding  111s (x43 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By:  hub-756b8bb5c8-f7tbn

kubectl describe pv --namespace jhub and kubectl describe sc --namespace jhub give no output.

I do not have any experience with k8s. Do I have to explicitly create a StorageClass and a PersistentVolume? Or some further PVC?

In microk8s you can load a default StorageClass with microk8s.enable storage. Here is the description of the StorageClass:

$ kubectl describe sc 
Name:            microk8s-hostpath
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           microk8s.io/hostpath
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

However, this does not help: after a helm upgrade the hub pod is still pending.

I hope somebody can help me with this issue; I will provide any additional information. I'm not very familiar with k8s, so please describe the exact commands that need to be executed.

Thanks!

manics commented 5 years ago

It sounds like microk8s requires some specific configuration. If you're new to Kubernetes you could try Minikube instead, which is used for testing this project and therefore has a known working configuration.

Alternatively, can you check whether dynamic volume provisioning is working? See https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
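
A quick way to check is to create a small throwaway PVC and see whether it becomes Bound on its own (a sketch; the claim name test-claim and the size are arbitrary):

# test-pvc.yaml -- throwaway claim to test dynamic provisioning
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

kubectl apply -f test-pvc.yaml --namespace jhub
kubectl get pvc test-claim --namespace jhub    # should reach Bound if dynamic provisioning works
kubectl delete -f test-pvc.yaml --namespace jhub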

Another option, if this is just for testing, is to disable persistent volumes and use in-memory storage (data will be lost when the hub is stopped or restarted):

hub:
  cookieSecret: ...
  db:
    type: sqlite-memory

singleuser:
  storage:
    type: none
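
If you go that route, the modified config.yaml is applied against the existing release with a helm upgrade, something like the following (a sketch assuming the release name jhub, the namespace jhub, and the chart version 0.8.0 used above):

helm upgrade jhub jupyterhub/jupyterhub \
  --version=0.8.0 \
  --namespace=jhub \
  --values config.yaml
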
consideRatio commented 5 years ago

The key is the following: no persistent volumes available for this claim and no storage class is set.

When we create PersistentVolumeClaims (PVCs), we often want Kubernetes to provision a PersistentVolume for us. The kind of storage to be provided by the Kubernetes cluster is defined by a default StorageClass or an explicitly set StorageClass.

What is happening here, I think, is that microk8s isn't set up with a StorageClass or some other way to allocate and provide the storage requested by PVCs. This needs to be in place, OR be worked around as @manics suggested by simply avoiding the need for any storage at all.

So, how do you set up a StorageClass for microk8s that allows it to provision storage? I don't know, but that is the key thing to google.
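
For reference, once some StorageClass exists in the cluster, you can check whether it is marked as the default and mark it yourself if needed (a sketch; the class name microk8s-hostpath comes from the output above):

kubectl get storageclass
kubectl patch storageclass microk8s-hostpath \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'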

hickst commented 5 years ago

I just ran into exactly the same issue. As @mfhm mentioned, in microk8s you can load a default StorageClass with microk8s.enable storage, so I did the following:

helm delete jhub --purge
microk8s.enable storage

so that kubectl get sc returns:

NAME                          PROVISIONER            AGE
microk8s-hostpath (default)   microk8s.io/hostpath   175m

I then re-installed JupyterHub per the instructions. This allowed most Z2JH components to start correctly, except for the proxy-public LoadBalancer service, which remains in a Pending state:

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/proxy-public   LoadBalancer   10.152.183.121   <pending>     80:31188/TCP,443:32721/TCP   61m

Any thoughts on why this one service is still hanging?

hickst commented 5 years ago

I just filed an issue (#354) with the microk8s folks and am curious to see what they might have to say about what we might be doing wrong.

manics commented 5 years ago

@hickst Does microk8s include a default LoadBalancer implementation? If it doesn't, you could try switching the service to a ClusterIP with an ingress, or to a NodePort, as sketched below.
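
For example, something like the following in config.yaml should expose the proxy on a fixed NodePort instead of waiting for a LoadBalancer (a sketch; the port 30080 is arbitrary, any free port in the cluster's NodePort range works):

proxy:
  secretToken: <SECRET>
  service:
    type: NodePort
    nodePorts:
      http: 30080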

mfhm commented 5 years ago

As @hickst pointed out, microk8s.enable storage needs to be executed prior to installing JupyterHub via Helm. Previously I had just tried a helm upgrade. Now both pods, the hub and the proxy, are working. The external IP is still pending, however the hub can be accessed via port 31423 in my case.

$  kubectl --namespace=jhub get svc proxy-public
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
proxy-public   LoadBalancer   10.152.183.247   <pending>     80:31423/TCP,443:31262/TCP   61s

Here is my config.yaml:

proxy:
  secretToken: <SECRET>
#  service:
#    type: NodePort
#    nodePorts:
#      http: 30229
#  chp:
#    resources:
#      requests:
#        memory: 0
#        cpu: 0

singleuser:
  image:
    name: jupyter/scipy-notebook
    tag: 7db1bd2a7511
#  storage:
#    type: none
  memory:
    limit: 2G
    guarantee: 512M
  cpu:
    limit: 2
    guarantee: 0.5

prePuller:
  hook:
    enabled: true

#scheduling:
#  userScheduler:
#    enabled: true
#  podPriority:
#    enabled: true
#  userPlaceholder:
#    enabled: true
#    replicas: 4
#  userPods:
#    nodeAffinity:
#      matchNodePurpose: require

cull:
  enabled: true
  timeout: 3600
  every: 300

#debug:
#  enabled: true

auth:
  type: dummy
  dummy:
    password: <SECRET>
  admin:
    users:
      - max
  whitelist:
    users:
      - max
      - user1000
      - user2000
      - user1

So, I can log in with all the defined users and the password is required, however no notebook can be spawned. I get the following log:

Spawn failed: (422) Reason: error HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Wed, 13 Mar 2019 16:29:45 GMT', 'Content-Length': '436'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod \"jupyter-max\" is invalid: spec.initContainers[0].securityContext.privileged: Forbidden: disallowed by cluster policy","reason":"Invalid","details":{"name":"jupyter-max","kind":"Pod","causes":[{"reason":"FieldValueForbidden","message":"Forbidden: disallowed by cluster policy","field":"spec.initContainers[0].securityContext.privileged"}]},"code":422}

Does anyone have an idea?

manics commented 5 years ago

This might be related to https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/38ae89ef6827ce49b6bf1081ac049b7ee78f1e95/images/hub/jupyterhub_config.py#L419, which prevents users from accessing virtual machine metadata when running on public clouds (see https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html#audit-cloud-metadata-server-access). You could try enabling access (which disables the init container) with the following:

singleuser:
  cloudMetadata:
    enabled: true

consideRatio commented 5 years ago

I would first check whether there is a PodSecurityPolicy, and whether you could kubectl edit podsecuritypolicy <name> to allow privileged containers. Or rather: could you see if that is possible for you, as I'm curious :D https://kubernetes.io/docs/concepts/policy/pod-security-policy/

You may need to find the name first using kubectl get podsecuritypolicy.
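
Concretely, something like (the policy name is whatever the previous command returns):

kubectl get podsecuritypolicy
kubectl edit podsecuritypolicy <name>    # look for privileged: false in the spec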

mfhm commented 5 years ago

@consideRatio There is no PodSecurityPolicy defined

$ kubectl get podsecuritypolicy
No resources found.

Does that mean that no restrictions are defined? Should I define some policy? If so, can you give me an example? There are so many options that I do not understand...

However, the suggestion from @manics helped me take a small step forward :) Now a new pod is started, but it can't be reached and gets shut down:

Loading /etc/jupyterhub/config/values.yaml
Loading /etc/jupyterhub/secret/values.yaml
[I 2019-03-14 11:16:26.462 JupyterHub app:1673] Using Authenticator: dummyauthenticator.dummyauthenticator.DummyAuthenticator
[I 2019-03-14 11:16:26.463 JupyterHub app:1673] Using Spawner: kubespawner.spawner.KubeSpawner
[I 2019-03-14 11:16:26.465 JupyterHub app:1055] Writing cookie_secret to /srv/jupyterhub/jupyterhub_cookie_secret
[I 2019-03-14 11:16:26.492 alembic.runtime.migration migration:130] Context impl SQLiteImpl.
[I 2019-03-14 11:16:26.493 alembic.runtime.migration migration:137] Will assume non-transactional DDL.
[I 2019-03-14 11:16:26.517 alembic.runtime.migration migration:356] Running stamp_revision  -> 896818069c98
[I 2019-03-14 11:16:26.543 alembic.runtime.migration migration:130] Context impl SQLiteImpl.
[I 2019-03-14 11:16:26.543 alembic.runtime.migration migration:137] Will assume non-transactional DDL.
[W 2019-03-14 11:16:26.762 JupyterHub app:1131] JupyterHub.hub_connect_port is deprecated as of 0.9. Use JupyterHub.hub_connect_url to fully specify the URL for connecting to the Hub.
[I 2019-03-14 11:16:26.894 JupyterHub app:1855] Hub API listening on http://0.0.0.0:8081/hub/
[I 2019-03-14 11:16:26.894 JupyterHub app:1857] Private Hub API connect url http://10.152.183.102:8081/hub/
[I 2019-03-14 11:16:26.894 JupyterHub app:1870] Not starting proxy
[I 2019-03-14 11:16:26.894 JupyterHub app:1876] Starting managed service cull-idle
[I 2019-03-14 11:16:26.895 JupyterHub service:302] Starting service 'cull-idle': ['/usr/local/bin/cull_idle_servers.py', '--url=http://127.0.0.1:8081/hub/api', '--timeout=3600', '--cull-every=300', '--concurrency=10']
[I 2019-03-14 11:16:26.898 JupyterHub service:114] Spawning /usr/local/bin/cull_idle_servers.py --url=http://127.0.0.1:8081/hub/api --timeout=3600 --cull-every=300 --concurrency=10
[I 2019-03-14 11:16:26.931 JupyterHub proxy:301] Checking routes
[I 2019-03-14 11:16:26.931 JupyterHub proxy:370] Adding default route for Hub: / => http://10.152.183.102:8081
[I 2019-03-14 11:16:26.937 JupyterHub app:1912] JupyterHub is now running at http://10.152.183.44:80/
[I 2019-03-14 11:16:27.196 JupyterHub log:158] 200 GET /hub/api/users (cull-idle@127.0.0.1) 53.61ms
[I 2019-03-14 11:17:04.206 JupyterHub log:158] 302 GET / -> /hub (@10.1.1.1) 2.89ms
[I 2019-03-14 11:17:04.224 JupyterHub log:158] 302 GET /hub -> /hub/ (@10.1.1.1) 2.34ms
[W 2019-03-14 11:17:04.237 JupyterHub base:242] Invalid or expired cookie token
[I 2019-03-14 11:17:04.240 JupyterHub log:158] 302 GET /hub/ -> /hub/login (@10.1.1.1) 4.55ms
[I 2019-03-14 11:17:04.334 JupyterHub log:158] 200 GET /hub/login (@10.1.1.1) 78.03ms
[I 2019-03-14 11:17:10.918 JupyterHub base:499] User logged in: user1000
[I 2019-03-14 11:17:10.920 JupyterHub log:158] 302 POST /hub/login?next= -> /user/user1000/ (user1000@10.1.1.1) 10.83ms
[I 2019-03-14 11:17:10.931 JupyterHub log:158] 302 GET /user/user1000/ -> /hub/user/user1000/ (@10.1.1.1) 1.04ms
[I 2019-03-14 11:17:11.032 JupyterHub reflector:199] watching for pods with label selector='component=singleuser-server' in namespace jhub
[I 2019-03-14 11:17:11.072 JupyterHub reflector:199] watching for events with field selector='involvedObject.kind=Pod' in namespace jhub
[W 2019-03-14 11:17:11.193 JupyterHub base:714] User user1000 is slow to start (timeout=0)
[I 2019-03-14 11:17:11.194 JupyterHub base:1056] user1000 is pending spawn
[I 2019-03-14 11:17:11.200 JupyterHub log:158] 200 GET /hub/user/user1000/ (user1000@10.1.1.1) 255.21ms
[I 2019-03-14 11:17:14.266 JupyterHub log:158] 200 GET /hub/api (@10.1.1.1) 2.88ms
[I 2019-03-14 11:17:26.959 JupyterHub proxy:301] Checking routes
[W 2019-03-14 11:17:55.733 JupyterHub user:510] user1000's server never showed up at http://10.1.1.20:8888/user/user1000/ after 30 seconds. Giving up
[I 2019-03-14 11:17:55.735 JupyterHub spawner:1758] Deleting pod jupyter-user1000
[E 2019-03-14 11:18:07.979 JupyterHub gen:974] Exception in Future <Task finished coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py:619> exception=TimeoutError("Server at http://10.1.1.20:8888/user/user1000/ didn't respond in 30 seconds",)> after timeout
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 970, in error_callback
        future.result()
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py", line 626, in finish_user_spawn
        await spawn_future
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/user.py", line 528, in spawn
        raise e
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/user.py", line 502, in spawn
        resp = await server.wait_up(http=True, timeout=spawner.http_timeout)
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/utils.py", line 197, in wait_for_http_server
        timeout=timeout
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/utils.py", line 155, in exponential_backoff
        raise TimeoutError(fail_message)
    TimeoutError: Server at http://10.1.1.20:8888/user/user1000/ didn't respond in 30 seconds

[I 2019-03-14 11:18:08.004 JupyterHub log:158] 200 GET /hub/api/users/user1000/server/progress (user1000@10.1.1.1) 56585.87ms

etheleon commented 5 years ago

Facing the same issue

manics commented 5 years ago

Can you post the logs from the failed singleuser server pod?

etheleon commented 5 years ago
(|microk8s:kube-system)➜  microjupyterhub microk8s.status
microk8s is running
addons:
ingress: disabled
dns: enabled
metrics-server: disabled
prometheus: disabled
istio: disabled
jaeger: disabled
fluentd: disabled
gpu: enabled
storage: enabled
dashboard: disabled
registry: enabled

The single-user pod's logs:

(|microk8s:jhub)➜  microjupyterhub k logs -f jupyter-etheleon
Set username to: jovyan
usermod: no changes
Granting jovyan sudo access and appending /opt/conda/bin to sudo PATH
Executing the command: jupyterhub-singleuser --ip=0.0.0.0
[W 2019-05-02 09:59:22.074 SingleUserNotebookApp configurable:168] Config option `open_browser` not recognized by `SingleUserNotebookApp`.  Did you mean `browser`?
[I 2019-05-02 09:59:22.216 SingleUserNotebookApp extension:168] JupyterLab extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
[I 2019-05-02 09:59:22.216 SingleUserNotebookApp extension:169] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 2019-05-02 09:59:22.217 SingleUserNotebookApp singleuser:406] Starting jupyterhub-singleuser server version 0.9.4
[I 2019-05-02 09:59:22.221 SingleUserNotebookApp notebookapp:1712] Serving notebooks from local directory: /home/jovyan
[I 2019-05-02 09:59:22.221 SingleUserNotebookApp notebookapp:1712] The Jupyter Notebook is running at:
[I 2019-05-02 09:59:22.221 SingleUserNotebookApp notebookapp:1712] http://(jupyter-etheleon or 127.0.0.1):8888/user/etheleon/
[I 2019-05-02 09:59:22.221 SingleUserNotebookApp notebookapp:1713] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

The hub pod's logs:

upyterhub/handlers/base.py:619> exception=TimeoutError("Server at http://10.1.1.58:8888/user/etheleon/ didn't respond in 30 seconds",)> after timeout
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 970, in error_callback
        future.result()
      File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 970, in error_callback
        future.result()
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py", line 626, in finish_user_spawn
        await spawn_future
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/user.py", line 528, in spawn
        raise e
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/user.py", line 502, in spawn
        resp = await server.wait_up(http=True, timeout=spawner.http_timeout)
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/utils.py", line 197, in wait_for_http_server
        timeout=timeout
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/utils.py", line 155, in exponential_backoff
        raise TimeoutError(fail_message)
    TimeoutError: Server at http://10.1.1.58:8888/user/etheleon/ didn't respond in 30 seconds

[I 2019-05-02 10:00:07.922 JupyterHub log:158] 200 GET /hub/api/users/etheleon/server/progress (etheleon@127.0.0.1) 47785.98ms
[I 2019-05-02 10:00:50.487 JupyterHub proxy:301] Checking routes

The PVCs are fine:

(|microk8s:jhub)➜  microjupyterhub k get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
claim-etheleon   Bound     pvc-f4421bd8-6cc0-11e9-9b25-1c1b0de95065   10Gi       RWO            microk8s-hostpath   2m
hub-db-dir       Bound     pvc-bc1e8c94-6cc0-11e9-9b25-1c1b0de95065   1Gi        RWO            microk8s-hostpath   4m

From the UI:

Spawn failed: Server at http://10.1.1.58:8888/user/etheleon/ didn't respond in 30 seconds

nikhilkrishna commented 5 years ago

I am getting this as well on microk8s. Other than storage I don't have any other addons enabled. This is the log that is displayed when I log in and try to start a server:

Server requested 2019-05-15 12:32:58+00:00 [Normal] Successfully assigned jhub/jupyter-test to orca

2019-05-15 12:33:00+00:00 [Normal] Container image "jupyterhub/k8s-network-tools:0.8.0" already present on machine

2019-05-15 12:33:03+00:00 [Normal] Created container block-cloud-metadata

2019-05-15 12:33:03+00:00 [Normal] Started container block-cloud-metadata

2019-05-15 12:33:05+00:00 [Normal] Container image "jupyterhub/k8s-singleuser-sample:0.8.0" already present on machine

2019-05-15 12:33:07+00:00 [Normal] Created container notebook

2019-05-15 12:33:07+00:00 [Normal] Started container notebook

Spawn failed: Server at http://10.1.1.31:8888/user/test/ didn't respond in 30 seconds

alexkreidler commented 4 years ago

Any updates on this?

sbrunk commented 4 years ago

Microk8s doesn't allow privileged containers in its default config.

I could solve the spawning failure by setting cloudMetadata.enabled to true as described above by @manics in https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/1189#issuecomment-472601915.

rnestler commented 4 years ago

Regarding the external IP staying in the Pending state, I fixed that by enabling the metallb addon: microk8s enable metallb.
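
Note that microk8s asks for an address pool for the load balancer when enabling metallb; the range below is only an example and should be adapted to your network:

microk8s enable metallb:10.64.140.43-10.64.140.49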

meeseeksmachine commented 4 years ago

This issue has been mentioned on the Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/image-build-but-not-launching/4034/2

ghost commented 4 years ago

I have the same issue and have already posted my comment on another issue related to this.

Please see:

https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/937#issuecomment-650642756

consideRatio commented 3 years ago

Summary

I don't know microk8s, but I've come to associate it with the following commonly reported issues.

If someone ends up reading this, also read https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/1189, and note that I've documented the wish for some guidance on bare-metal clusters in my issue triage report.