kubernetes / dashboard

General-purpose web UI for Kubernetes clusters
Apache License 2.0

kubernetes-dashboard CrashLoopBackOff on CentOS 7.8 #5322

Closed · tonymmm1 closed this issue 4 years ago

tonymmm1 commented 4 years ago
Environment
Installation method: Manual
Kubernetes version: 1.18.5
Dashboard version: 2.0.3
Operating system: CentOS 7.8
Kernel: 3.10.0-1127.13.1.el7.x86_64
Steps to reproduce
  1. Install Kubernetes and the CLI tools using the manual installation method.
  2. wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
  3. Edit recommended.yaml to allow for NodePort operation:
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 31950
      selector:
        k8s-app: kubernetes-dashboard
  4. kubectl apply -f recommended.yaml
  5. kubectl get deployments -n kubernetes-dashboard
    NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
    dashboard-metrics-scraper   1/1     1            1           9m44s
    kubernetes-dashboard        0/1     1            0           9m44s
Observed result

    kubectl get pods -A -o wide

    kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-rtbtl   1/1     Running            0          10m     10.244.1.9    k8s-2   <none>           <none>
    kubernetes-dashboard   kubernetes-dashboard-7f99b75bf4-rhmzr        0/1     CrashLoopBackOff   6          10m     10.244.2.10   k8s-1   <none>           <none>

The kubernetes-dashboard pod enters CrashLoopBackOff.

Expected result

The dashboard should be running and reachable via the NodePort, as stated in the documentation.

Comments

If this is a configuration issue, or anything related to Kubernetes or CentOS 7.8, any feedback or help would be appreciated.

floreks commented 4 years ago

I don't see logs from the pod.

floreks commented 4 years ago

Please provide the pod logs; based on the above information alone there is nothing we can do other than close this.

tonymmm1 commented 4 years ago
[user1@k8s ~]$ kubectl describe deployments -n kubernetes-dashboard
Name:                   dashboard-metrics-scraper
Namespace:              kubernetes-dashboard
CreationTimestamp:      Wed, 01 Jul 2020 21:02:55 -0500
Labels:                 k8s-app=dashboard-metrics-scraper
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=dashboard-metrics-scraper
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=dashboard-metrics-scraper
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: runtime/default
  Service Account:  kubernetes-dashboard
  Containers:
   dashboard-metrics-scraper:
    Image:        kubernetesui/metrics-scraper:v1.0.4
    Port:         8000/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:8000/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-volume (rw)
  Volumes:
   tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   dashboard-metrics-scraper-6b4884c9d5 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  82s   deployment-controller  Scaled up replica set dashboard-metrics-scraper-6b4884c9d5 to 1

Name:                   kubernetes-dashboard
Namespace:              kubernetes-dashboard
CreationTimestamp:      Wed, 01 Jul 2020 21:02:55 -0500
Labels:                 k8s-app=kubernetes-dashboard
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kubernetes-dashboard
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kubernetes-dashboard
  Service Account:  kubernetes-dashboard
  Containers:
   kubernetes-dashboard:
    Image:      kubernetesui/dashboard:v2.0.3
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
  Volumes:
   kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
   tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  <none>
NewReplicaSet:   kubernetes-dashboard-7f99b75bf4 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  82s   deployment-controller  Scaled up replica set kubernetes-dashboard-7f99b75bf4 to 1
tonymmm1 commented 4 years ago
[user1@k8s ~]$ sudo journalctl -fu kubelet
-- Logs begin at Wed 2020-07-01 21:00:26 CDT. --
Jul 01 21:09:30 k8s kubelet[1004]: E0701 21:09:30.728597    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:09:40 k8s kubelet[1004]: E0701 21:09:40.745731    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:09:50 k8s kubelet[1004]: E0701 21:09:50.761985    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:10:00 k8s kubelet[1004]: E0701 21:10:00.779090    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:10:10 k8s kubelet[1004]: E0701 21:10:10.794356    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:10:20 k8s kubelet[1004]: E0701 21:10:20.810522    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:10:30 k8s kubelet[1004]: E0701 21:10:30.838005    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:10:40 k8s kubelet[1004]: E0701 21:10:40.860658    1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

I had changed the Docker cgroup driver from cgroupfs to systemd because of a warning during Kubernetes initialization, but could this be causing my issues? I am also unable to get logs from the pod.
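For reference, the usual way to switch Docker to the systemd cgroup driver on CentOS 7 (as recommended in the kubeadm documentation) is via /etc/docker/daemon.json; this fragment is illustrative of that common setup, not copied from my node:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
```

After changing it, Docker and the kubelet both need a restart so that the kubelet's cgroup driver setting matches Docker's.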

tonymmm1 commented 4 years ago
[user1@k8s ~]$ kubectl describe pod kubernetes-dashboard-7f99b75bf4-952wz -n kubernetes-dashboard
Name:         kubernetes-dashboard-7f99b75bf4-952wz
Namespace:    kubernetes-dashboard
Priority:     0
Node:         k8s-1/10.0.1.160
Start Time:   Wed, 01 Jul 2020 21:03:04 -0500
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=7f99b75bf4
Annotations:  <none>
Status:       Running
IP:           10.244.2.11
IPs:
  IP:           10.244.2.11
Controlled By:  ReplicaSet/kubernetes-dashboard-7f99b75bf4
Containers:
  kubernetes-dashboard:
    Container ID:  docker://d0213fc4252f77620f0bef164e3a007fef59a534763d173de7f99038244724a6
    Image:         kubernetesui/dashboard:v2.0.3
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 01 Jul 2020 21:14:04 -0500
      Finished:     Wed, 01 Jul 2020 21:14:05 -0500
    Ready:          False
    Restart Count:  7
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gdptg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-gdptg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-gdptg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                   From               Message
  ----     ------                  ----                  ----               -------
  Normal   Scheduled               <unknown>             default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-7f99b75bf4-952wz to k8s-1
  Warning  FailedCreatePodSandBox  13m                   kubelet, k8s-1     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8735c4d219b0c71bf9b4452390ad4bd3acbdd09fcf453cb3c24b0ee90f17bd8b" network for pod "kubernetes-dashboard-7f99b75bf4-952wz": networkPlugin cni failed to set up pod "kubernetes-dashboard-7f99b75bf4-952wz_kubernetes-dashboard" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  12m                   kubelet, k8s-1     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b3f97121e5bc9cc15612d0d369f07ed1a7782174d3769b2ba660c042f62cb249" network for pod "kubernetes-dashboard-7f99b75bf4-952wz": networkPlugin cni failed to set up pod "kubernetes-dashboard-7f99b75bf4-952wz_kubernetes-dashboard" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged          12m (x2 over 13m)     kubelet, k8s-1     Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                 12m (x4 over 12m)     kubelet, k8s-1     Pulling image "kubernetesui/dashboard:v2.0.3"
  Normal   Pulled                  12m (x4 over 12m)     kubelet, k8s-1     Successfully pulled image "kubernetesui/dashboard:v2.0.3"
  Normal   Created                 12m (x4 over 12m)     kubelet, k8s-1     Created container kubernetes-dashboard
  Normal   Started                 12m (x4 over 12m)     kubelet, k8s-1     Started container kubernetes-dashboard
  Warning  BackOff                 2m56s (x50 over 12m)  kubelet, k8s-1     Back-off restarting failed container
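The earlier FailedCreatePodSandBox events point at a missing /run/flannel/subnet.env on the node. A healthy flanneld writes this file at startup; a typical one looks like the following (values are illustrative, matching the 10.244.0.0/16 pod network visible in the pod IPs above, not copied from the node):

```
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```

If the file is absent, the flannel pod on that node is likely not running or failing, which is worth checking separately.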
tonymmm1 commented 4 years ago
[user1@k8s ~]$ kubectl logs kubernetes-dashboard-7f99b75bf4-952wz -n kubernetes-dashboard
2020/07/02 02:19:13 Starting overwatch
2020/07/02 02:19:13 Using namespace: kubernetes-dashboard
2020/07/02 02:19:13 Using in-cluster config to connect to apiserver
2020/07/02 02:19:13 Using secret token for csrf signing
2020/07/02 02:19:13 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: no route to host

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0000a43e0)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0003a5700)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0003a5700)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
    /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
main.main()
    /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
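The panic shows the dashboard cannot open a TCP connection to the in-cluster API service at 10.96.0.1:443 ("no route to host"). A minimal sketch of the kind of reachability check one could run from a debug pod on the same node (the helper name is hypothetical, not part of the dashboard or kubectl):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers "connection refused", "no route to host", and timeouts.
        return False

# Example (run from inside a pod): check the kubernetes Service ClusterIP.
# can_connect("10.96.0.1", 443)
```

A False result from inside a pod, as the panic suggests, points at the CNI/kube-proxy layer rather than the dashboard itself.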
floreks commented 4 years ago

Networking issue in the cluster. Pod to pod/service to service communication does not work. Not related to Dashboard directly.

/close

k8s-ci-robot commented 4 years ago

@floreks: Closing this issue.

In response to [this](https://github.com/kubernetes/dashboard/issues/5322#issuecomment-652845047):

> Networking issue in the cluster. Pod to pod/service to service communication does not work. Not related to Dashboard directly.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.