Unable to install ngshare on z2jh 2.0.0 on bare metal (microk8s). Clicking Services > ngshare returns "503 Service Unavailable. Your server appears to be down. Try restarting it from the hub."
kubectl get pod shows the ngshare pod stuck in Pending status.
kubectl describe pod returns the following output:
Name: ngshare-78f46fcf5-r9h8r
Namespace: dskube
Priority: 0
Service Account: default
Node:
Labels: app.kubernetes.io/instance=ngshare
app.kubernetes.io/name=ngshare
hub.jupyter.org/network-access-hub=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=78f46fcf5
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/ngshare-78f46fcf5
Containers:
ngshare:
Image: libretexts/ngshare:v0.6.0
Port: 8080/TCP
Host Port: 0/TCP
Args:
--admins
user1,user2
Liveness: http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
JUPYTERHUB_SERVICE_NAME: ngshare
JUPYTERHUB_API_TOKEN: <set to the key 'token' in secret 'ngshare-token'> Optional: false
JUPYTERHUB_API_URL: http://hub:8080/hub/api
JUPYTERHUB_BASE_URL: /
JUPYTERHUB_SERVICE_PREFIX: /services/ngshare/
JUPYTERHUB_SERVICE_URL: http://0.0.0.0:8080/
JUPYTERHUB_CLIENT_ID: service-ngshare
Mounts:
/srv/ngshare from ngshare-pvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rc8gc (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
ngshare-pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ngshare-pvc
ReadOnly: false
kube-api-access-rc8gc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Warning FailedScheduling 56s default-scheduler running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition
Normal Provisioning 4m (x27 over 3h53m) openebs.io/local_openebs-localpv-provisioner-788878759f-9rlfl_1566260f-418c-446c-8cfc-354d8c1f99fc External provisioner is provisioning volume for claim "dskube/ngshare-pvc"
Warning ProvisioningFailed 4m (x27 over 3h53m) openebs.io/local_openebs-localpv-provisioner-788878759f-9rlfl_1566260f-418c-446c-8cfc-354d8c1f99fc failed to provision volume with StorageClass "local-storage-dir": Only support ReadWriteOnce access mode
Normal ExternalProvisioning 3m38s (x923 over 3h53m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "openebs.io/local" or manually created by system administrator
Moreover, I noticed that whenever I set pvc.storage to anything greater than 1Gi in the config.yaml file, the upgrade returns the following output:
Error: UPGRADE FAILED: cannot patch "ngshare-pvc" with kind PersistentVolumeClaim: PersistentVolumeClaim "ngshare-pvc" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteMany"},
Selector: nil,
Resources: core.ResourceRequirements{
Limits: nil,
- Requests: core.ResourceList{
- s"storage": {i: resource.int64Amount{value: 1073741824}, s: "1Gi", Format: "BinarySI"},
- },
+ Requests: core.ResourceList{
+ s"storage": {i: resource.int64Amount{value: 10737418240}, s: "10Gi", Format: "BinarySI"},
+ },
Claims: nil,
},
VolumeName: "",
StorageClassName: &"local-storage-dir",
... // 3 identical fields
}
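As the error message says, a PVC's spec (including accessModes) is immutable after creation; only resources.requests may change on a bound claim. A common workaround, sketched here assuming the release name, namespace, and values file used in this thread (the exact chart reference may differ in your setup), is to delete the claim and let Helm recreate it with the new spec:

```shell
# A PVC's accessModes/storage class cannot be patched in place, so remove the old claim first.
# WARNING: deleting the PVC discards any data on the backing volume.
kubectl delete pvc ngshare-pvc -n dskube

# Re-run the upgrade so the chart recreates the PVC with the updated spec.
# The chart reference here is an assumption; substitute your own.
helm upgrade ngshare <chart-reference> -n dskube -f config.yaml
```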
ngshare's config.yaml:

ngshare:
  hub_api_token: demo_token_9wRp0h4BLzAnC88jjBfpH0fa4QV9tZNI
  admins:
    - user1

The StorageClass config file is as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage-dir
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    openebs.io/cas-type: local
    cas.openebs.io/config: |

The output of kubectl describe pvc:

Name: hub-db-dir
Namespace: dskube
StorageClass: local-storage-dir
Status: Bound
Volume: pvc-9fe22882-f9cd-4431-9a4e-4b4f437dfc4f
Labels: app=jupyterhub
app.kubernetes.io/managed-by=Helm
chart=jupyterhub-2.0.0
component=hub
heritage=Helm
release=dshelm
Annotations: meta.helm.sh/release-name: dshelm
meta.helm.sh/release-namespace: dskube
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
volume.kubernetes.io/selected-node: datascience
volume.kubernetes.io/storage-provisioner: openebs.io/local
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: hub-788955dbdb-flhsn
Events:

Name: claim-user1
Namespace: dskube
StorageClass: local-storage-dir
Status: Bound
Volume: pvc-74c54761-7536-4e3c-8b04-bbd883b975cd
Labels: app=jupyterhub
chart=jupyterhub-2.0.0
component=singleuser-storage
heritage=jupyterhub
hub.jupyter.org/username=singhd1
release=dshelm
Annotations: hub.jupyter.org/username: singhd1
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
volume.kubernetes.io/selected-node: datascience
volume.kubernetes.io/storage-provisioner: openebs.io/local
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: jupyter-user1
Events:

Name: claim-user2
Namespace: dskube
StorageClass: local-storage-dir
Status: Bound
Volume: pvc-275690bc-0d21-4567-8631-e61136c4ba1a
Labels: app=jupyterhub
chart=jupyterhub-2.0.0
component=singleuser-storage
heritage=jupyterhub
hub.jupyter.org/username=beaversbd
release=dshelm
Annotations: hub.jupyter.org/username: beaversbd
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
volume.kubernetes.io/selected-node: datascience
volume.kubernetes.io/storage-provisioner: openebs.io/local
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By:
Events:

Name: ngshare-pvc
Namespace: dskube
StorageClass: local-storage-dir
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=ngshare
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ngshare
helm.sh/chart=ngshare-0.6.0
Annotations: meta.helm.sh/release-name: ngshare
meta.helm.sh/release-namespace: dskube
volume.beta.kubernetes.io/storage-provisioner: openebs.io/local
volume.kubernetes.io/selected-node: datascience
volume.kubernetes.io/storage-provisioner: openebs.io/local
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: ngshare-57c5b6965f-jv2cg
ngshare-78f46fcf5-r9h8r
Events:
Type Reason Age From Message
Normal Provisioning 4m (x27 over 3h53m) openebs.io/local_openebs-localpv-provisioner-788878759f-9rlfl_1566260f-418c-446c-8cfc-354d8c1f99fc External provisioner is provisioning volume for claim "dskube/ngshare-pvc"
Warning ProvisioningFailed 4m (x27 over 3h53m) openebs.io/local_openebs-localpv-provisioner-788878759f-9rlfl_1566260f-418c-446c-8cfc-354d8c1f99fc failed to provision volume with StorageClass "local-storage-dir": Only support ReadWriteOnce access mode
Normal ExternalProvisioning 3m38s (x923 over 3h53m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "openebs.io/local" or manually created by system administrator

Any help would be highly appreciated.

I solved the problem. The ngshare PVC was requesting ReadWriteMany (RWX), whereas the storage class only supports ReadWriteOnce (RWO). Setting accessModes to ReadWriteOnce in ngshare's config file fixed the issue.
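For anyone hitting the same thing, the fix sketched in values form might look like the following. This is an assumption about the chart's key layout: the pvc.storage key comes from this thread, and pvc.accessModes mirrors the AccessModes field shown in the helm diff above; check your chart's values for the exact names.

```yaml
ngshare:
  hub_api_token: demo_token_9wRp0h4BLzAnC88jjBfpH0fa4QV9tZNI
  admins:
    - user1
pvc:
  accessModes:
    - ReadWriteOnce  # must match what the local-storage-dir StorageClass supports
  storage: 1Gi       # only resources.requests can change once the claim is bound
```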