Closed: tonymmm1 closed this issue 4 years ago
I don't see logs from the pod.
Please provide logs; based on the information above alone there is nothing we can do other than close this.
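For a pod stuck in CrashLoopBackOff, the logs of the previous (crashed) container attempt are usually the useful ones. A minimal sketch of how to fetch them (`<pod-name>` is a placeholder for the actual pod name):

```shell
# Find the exact pod name in the dashboard namespace
kubectl get pods -n kubernetes-dashboard

# Logs of the current container attempt
kubectl logs <pod-name> -n kubernetes-dashboard

# Logs of the previous, crashed container (what CrashLoopBackOff diagnosis needs)
kubectl logs <pod-name> -n kubernetes-dashboard --previous
```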
[user1@k8s ~]$ kubectl describe deployments -n kubernetes-dashboard
Name: dashboard-metrics-scraper
Namespace: kubernetes-dashboard
CreationTimestamp: Wed, 01 Jul 2020 21:02:55 -0500
Labels: k8s-app=dashboard-metrics-scraper
Annotations: deployment.kubernetes.io/revision: 1
Selector: k8s-app=dashboard-metrics-scraper
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=dashboard-metrics-scraper
Annotations: seccomp.security.alpha.kubernetes.io/pod: runtime/default
Service Account: kubernetes-dashboard
Containers:
dashboard-metrics-scraper:
Image: kubernetesui/metrics-scraper:v1.0.4
Port: 8000/TCP
Host Port: 0/TCP
Liveness: http-get http://:8000/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-volume (rw)
Volumes:
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: dashboard-metrics-scraper-6b4884c9d5 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 82s deployment-controller Scaled up replica set dashboard-metrics-scraper-6b4884c9d5 to 1
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
CreationTimestamp: Wed, 01 Jul 2020 21:02:55 -0500
Labels: k8s-app=kubernetes-dashboard
Annotations: deployment.kubernetes.io/revision: 1
Selector: k8s-app=kubernetes-dashboard
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=kubernetes-dashboard
Service Account: kubernetes-dashboard
Containers:
kubernetes-dashboard:
Image: kubernetesui/dashboard:v2.0.3
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kubernetes-dashboard
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available False MinimumReplicasUnavailable
OldReplicaSets: <none>
NewReplicaSet: kubernetes-dashboard-7f99b75bf4 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 82s deployment-controller Scaled up replica set kubernetes-dashboard-7f99b75bf4 to 1
[user1@k8s ~]$ sudo journalctl -fu kubelet
-- Logs begin at Wed 2020-07-01 21:00:26 CDT. --
Jul 01 21:09:30 k8s kubelet[1004]: E0701 21:09:30.728597 1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Jul 01 21:09:40 k8s kubelet[1004]: E0701 21:09:40.745731 1004 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
(the same kubelet error repeats every 10 seconds)
I had changed the Docker daemon's cgroup driver to systemd instead of cgroupfs because of the Kubernetes initialization check, but I think this might be causing some of my issues. I am also unable to get logs for the pod.
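If the Docker and kubelet cgroup drivers disagree, the usual fix is to pin Docker's driver in `/etc/docker/daemon.json` so both use systemd. A minimal sketch, assuming Docker is the container runtime (note this overwrites any existing `daemon.json`, so merge instead if you already have one):

```shell
# Set Docker's cgroup driver to systemd, matching the kubelet's expectation
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet

# Verify the driver actually in use
docker info | grep -i 'cgroup driver'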
[user1@k8s ~]$ kubectl describe pod kubernetes-dashboard-7f99b75bf4-952wz -n kubernetes-dashboard
Name: kubernetes-dashboard-7f99b75bf4-952wz
Namespace: kubernetes-dashboard
Priority: 0
Node: k8s-1/10.0.1.160
Start Time: Wed, 01 Jul 2020 21:03:04 -0500
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=7f99b75bf4
Annotations: <none>
Status: Running
IP: 10.244.2.11
IPs:
IP: 10.244.2.11
Controlled By: ReplicaSet/kubernetes-dashboard-7f99b75bf4
Containers:
kubernetes-dashboard:
Container ID: docker://d0213fc4252f77620f0bef164e3a007fef59a534763d173de7f99038244724a6
Image: kubernetesui/dashboard:v2.0.3
Image ID: docker-pullable://kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kubernetes-dashboard
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 01 Jul 2020 21:14:04 -0500
Finished: Wed, 01 Jul 2020 21:14:05 -0500
Ready: False
Restart Count: 7
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gdptg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-gdptg:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-gdptg
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kubernetes-dashboard/kubernetes-dashboard-7f99b75bf4-952wz to k8s-1
Warning FailedCreatePodSandBox 13m kubelet, k8s-1 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8735c4d219b0c71bf9b4452390ad4bd3acbdd09fcf453cb3c24b0ee90f17bd8b" network for pod "kubernetes-dashboard-7f99b75bf4-952wz": networkPlugin cni failed to set up pod "kubernetes-dashboard-7f99b75bf4-952wz_kubernetes-dashboard" network: open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 12m kubelet, k8s-1 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b3f97121e5bc9cc15612d0d369f07ed1a7782174d3769b2ba660c042f62cb249" network for pod "kubernetes-dashboard-7f99b75bf4-952wz": networkPlugin cni failed to set up pod "kubernetes-dashboard-7f99b75bf4-952wz_kubernetes-dashboard" network: open /run/flannel/subnet.env: no such file or directory
Normal SandboxChanged 12m (x2 over 13m) kubelet, k8s-1 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 12m (x4 over 12m) kubelet, k8s-1 Pulling image "kubernetesui/dashboard:v2.0.3"
Normal Pulled 12m (x4 over 12m) kubelet, k8s-1 Successfully pulled image "kubernetesui/dashboard:v2.0.3"
Normal Created 12m (x4 over 12m) kubelet, k8s-1 Created container kubernetes-dashboard
Normal Started 12m (x4 over 12m) kubelet, k8s-1 Started container kubernetes-dashboard
Warning BackOff 2m56s (x50 over 12m) kubelet, k8s-1 Back-off restarting failed container
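The FailedCreatePodSandBox events point at flannel: `/run/flannel/subnet.env` is written by the flannel daemon on each node, so its absence suggests flannel never came up on that node. A few hedged checks (the namespace and label vary between flannel versions, and the pod name below is illustrative):

```shell
# Is a flannel pod running on the affected node?
kubectl get pods -n kube-system -o wide | grep -i flannel

# Does the subnet file exist? (run this on the node k8s-1 itself)
ls -l /run/flannel/subnet.env

# Flannel pod logs, if the pod exists
kubectl logs -n kube-system <kube-flannel-pod-name>
```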
[user1@k8s ~]$ kubectl logs kubernetes-dashboard-7f99b75bf4-952wz -n kubernetes-dashboard
2020/07/02 02:19:13 Starting overwatch
2020/07/02 02:19:13 Using namespace: kubernetes-dashboard
2020/07/02 02:19:13 Using in-cluster config to connect to apiserver
2020/07/02 02:19:13 Using secret token for csrf signing
2020/07/02 02:19:13 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: no route to host
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0000a43e0)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0003a5700)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0003a5700)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
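The panic shows the dashboard cannot reach the API server's ClusterIP (`10.96.0.1:443`, "no route to host"). On CentOS 7 this is often the host firewall or a missing bridge sysctl rather than a dashboard bug. Some hedged checks, to be adapted to the environment (the `nettest` pod name is just an example):

```shell
# firewalld (the CentOS 7 default) can block pod/overlay traffic
sudo firewall-cmd --state

# Bridged traffic must pass through iptables for most CNI plugins; should be 1
sysctl net.bridge.bridge-nf-call-iptables

# From inside the cluster, test TCP reachability of the API server ClusterIP
kubectl run nettest --rm -it --image=busybox --restart=Never -- \
  nc -w 3 -z 10.96.0.1 443
```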
This is a networking issue in the cluster: pod-to-pod/service-to-service communication does not work. It is not related to Dashboard directly.
/close
@floreks: Closing this issue.
Environment
Steps to reproduce
Observed result
kubectl get pods -A -o wide
The kubernetes-dashboard pod shows CrashLoopBackOff.
Expected result
The dashboard should be running as stated in the documentation about the NodePort.
Comments
If there is any configuration issue or anything pertaining to Kubernetes or CentOS 7.8 at play here, feedback or help would be appreciated.