kubernetes / dashboard

General-purpose web UI for Kubernetes clusters
Apache License 2.0

503 ServiceUnavailable #1316

Closed kking124 closed 8 years ago

kking124 commented 8 years ago

Issue details

I am new to Kubernetes and I am trying to set up a basic two-machine cluster.

The kubernetes-dashboard pod status is listed as pending with 0/1 ready, which is likely the underlying issue. I have no clue where to begin solving this problem. The ultimate result is that I receive 503 ServiceUnavailable in the browser.

Please do not:

  1. recommend using a hosted solution.
  2. recommend minikube.
  3. recommend reading https://github.com/kubernetes/dashboard/issues/971 - I have already been through it, and my problem persists.

Environment
Dashboard version: 

kubernetes-dashboard-amd64:v1.4.0 per https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Kubernetes version: 

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:10:32Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Operating system: CentOS 7
Steps to reproduce
  1. Followed getting started at: http://kubernetes.io/docs/getting-started-guides/kubeadm/
    a) Created 2 CentOS 7.0 VMs on XenServer 7.0 (kube-node-0 [master] and kube-node-1)
  2. installed xauth, xhost, firefox on kube-node-0
  3. ran kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml on kube-node-0
  4. ran kubectl proxy & on kube-node-0
  5. ran firefox & on kube-node-0
  6. navigated to https://localhost/ui
  7. navigated to http://localhost:8001/ui
    a) 301 redirected to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Observed result

HTTP response to https://localhost/ui

401 Unauthorized

HTTP response to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
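
(Aside: a 503 with "no endpoints available" means the Service's label selector currently matches no Ready pods, which lines up with the Pending dashboard pod. A quick way to confirm, assuming kubectl is pointed at the affected cluster:)

$ kubectl get endpoints kubernetes-dashboard --namespace=kube-system    # <none> under ENDPOINTS means no Ready pod backs the Service
$ kubectl get pods --namespace=kube-system -l app=kubernetes-dashboard  # lists the pods matching the Service selector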
Expected result

Dashboard UI

Comments

Below is the output of the various commands requested in https://github.com/kubernetes/dashboard/issues/971

$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
kube-dns is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

dump file at: https://gist.github.com/kking124/85efed41c107cd84224204435d512632

$ kubectl get nodes
NAME          STATUS     AGE
kube-node-0   Ready      1h
kube-node-1   Ready      1h
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-kube-node-0                        1/1       Running   0          1h
kube-system   kube-apiserver-kube-node-0              1/1       Running   0          1h
kube-system   kube-controller-manager-kube-node-0     1/1       Running   0          1h
kube-system   kube-discovery-982812725-273x4          1/1       Running   0          1h
kube-system   kube-dns-2247936740-qymse               2/3       Running   1          1h
kube-system   kube-proxy-amd64-8o5rr                  1/1       Running   0          1h
kube-system   kube-proxy-amd64-mg0ic                  1/1       Running   0          1h
kube-system   kube-scheduler-kube-node-0              1/1       Running   0          1h
kube-system   kubernetes-dashboard-1655269645-ekxxs   0/1       Pending   0          52m
kube-system   weave-net-9i5ce                         2/2       Running   0          59m
kube-system   weave-net-y9iu2                         2/2       Running   0          59m
$ kubectl get pods
$ kubectl describe service kubernetes-dashboard --namespace=kube-system
Name:                   kubernetes-dashboard
Namespace:              kube-system
Labels:                 app=kubernetes-dashboard
Selector:               app=kubernetes-dashboard
Type:                   NodePort
IP:                     100.73.132.23
Port:                   <unset> 80/TCP
NodePort:               <unset> 30526/TCP
Endpoints:              <none>
Session Affinity:       None
$ kubectl --namespace=kube-system get ep kubernetes-dashboard
NAME                   ENDPOINTS   AGE
kubernetes-dashboard   <none>      1h
$ kubectl get svc kubernetes-dashboard --namespace=kube-system
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   100.73.132.23   <nodes>       80/TCP    1h
$ kubectl get deployment kubernetes-dashboard --namespace=kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            0           1h
$ kubectl logs kubernetes-dashboard-1655269645-ekxxs --namespace kube-system
kking124 commented 8 years ago

My pod status has changed - I'm now showing ERROR for the dashboard:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY     STATUS             RESTARTS   AGE
kube-system   etcd-kube-node-0                                1/1       Running            12         3h
kube-system   kube-apiserver-kube-node-0                      1/1       Running            19         3h
kube-system   kube-controller-manager-kube-node-0             1/1       Running            12         3h
kube-system   kube-discovery-982812725-273x4                  1/1       Running            0          3h
kube-system   kube-dns-2247936740-qymse                       2/3       Running            3          3h
kube-system   kube-proxy-amd64-8o5rr                          1/1       Running            0          3h
kube-system   kube-proxy-amd64-bom0t                          1/1       Running            5          26m
kube-system   kube-proxy-amd64-mg0ic                          1/1       Running            0          3h
kube-system   kube-scheduler-kube-node-0                      1/1       Running            12         3h
kube-system   kubernetes-dashboard-1655269645-ekxxs           0/1       Error              17         2h
kube-system   weave-net-7629t                                 2/2       Running            14         26m
kube-system   weave-net-9i5ce                                 2/2       Running            0          3h
kube-system   weave-net-y9iu2                                 1/2       CrashLoopBackOff   6          3h
$ kubectl logs kubernetes-dashboard-1655269645-ekxxs --namespace kube-system
Error from server: pods "kubernetes-dashboard-1655269645-ekxxs" not found
colemickens commented 8 years ago

If dashboard is running on the node where Weave is crashlooping, it seems like it could definitely contribute to or cause the problem you're seeing.

Can you post kubectl describe pod --namespace=kube-system kubernetes-dashboard-1655269645-ekxxs? I didn't see that one yet.
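
(As a hedged aside, a quick way to check whether the dashboard pod was scheduled onto the node with the crashlooping Weave pod:)

$ kubectl get pods --namespace=kube-system -o wide   # the extra NODE column shows which node each pod, weave-net included, runs on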

kking124 commented 8 years ago
$ kubectl describe pod --namespace=kube-system kubernetes-dashboard-1655269645-ekxxs
Error from server: pods "kubernetes-dashboard-1655269645-ekxxs" not found
colemickens commented 8 years ago

You should probably post kubectl get pods --namespace=kube-system again too then.


kking124 commented 8 years ago
$ kubectl get pods --namespace=kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
etcd-kube-node-0                        1/1       Running            14         3h
kube-apiserver-kube-node-0              1/1       Running            22         3h
kube-controller-manager-kube-node-0     1/1       Running            14         3h
kube-discovery-982812725-niss5          1/1       Running            1          12m
kube-dns-2247936740-qymse               2/3       Running            20         3h
kube-proxy-amd64-8o5rr                  1/1       Running            0          3h
kube-proxy-amd64-bom0t                  1/1       Running            5          49m
kube-proxy-amd64-mg0ic                  1/1       Running            2          3h
kube-scheduler-kube-node-0              1/1       Running            14         3h
kubernetes-dashboard-1655269645-8sk3k   0/1       CrashLoopBackOff   7          21m
weave-net-7629t                         2/2       Running            14         49m
weave-net-9i5ce                         2/2       Running            4          3h
weave-net-y9iu2                         1/2       CrashLoopBackOff   11         3h
  info: 1 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
kking124 commented 8 years ago

The node lists as ready ... why would it be rejecting connections?

$ kubectl describe pod --namespace=kube-system kubernetes-dashboard-1655269645-8sk3k
Name:       kubernetes-dashboard-1655269645-8sk3k
Namespace:  kube-system
Node:       kube-node-1/192.168.42.115
Start Time: Fri, 07 Oct 2016 15:37:15 -0400
Labels:     app=kubernetes-dashboard
        pod-template-hash=1655269645
Status:     Running
IP:     10.40.0.1
Controllers:    ReplicaSet/kubernetes-dashboard-1655269645
Containers:
  kubernetes-dashboard:
    Container ID:   docker://f84b355b51e6a84c7ef75b1a3931d27257b07cd0494fac093959bf02fd59e03e
    Image:      gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
    Image ID:       docker://sha256:436faaeba2e2071e45809ae4416aa3c19cb197be1eb2ff3ce89fc6793702c63b
    Port:       9090/TCP
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 07 Oct 2016 15:57:01 -0400
      Finished:     Fri, 07 Oct 2016 15:57:26 -0400
    Ready:      False
    Restart Count:  7
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ay4ee (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  default-token-ay4ee:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-ay4ee
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  23m       23m     1   {default-scheduler }            Normal      Scheduled   Successfully assigned kubernetes-dashboard-1655269645-8sk3k to kube-node-1
  23m       23m     1   {kubelet kube-node-1}           Warning     FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/07048f712446450e7eb7f44363b37c65d8dd4298baa8fc175f5db8e14db6635c: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/cbf27709c6ec635c0ad8ccfbae187d4ce51d4d4950f5e174c94628f485073575: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/9b061bc3e31d174fddae07d75fba015971c718c8ef68524df0157fa97a98bd21: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/94b3c5354b618d4c9115a5d6950faa2e0495440cbc9ef9091148cac9b7baf97b: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/b0359fd4ebf85f3624e3369502a3ad59aecd45e42e644009ff32ddda9041d24e: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/ee50b2de3f75215a723bd13bd8835512f1d6ac29acea7f41c03d5dc3756e0f7f: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/c477976d977faa1e756f09345539a71d4e67f7c446df5561f74de9e760f1a453: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/25fef4abcd4fca192dcc4a59eae071518fdb1b282c958c23424c36be0a883c19: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  23m   23m 1   {kubelet kube-node-1}       Warning FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-1655269645-8sk3k_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/bacb09a62e4cae348e6e9fa217230a45f818fb4940b2153c4bb1fdb0036f9518: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  17m   17m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id 944c621dc0e5; Security:[seccomp=unconfined]
  17m   17m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id 944c621dc0e5
  17m   17m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id af0d73bffceb
  17m   17m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id af0d73bffceb; Security:[seccomp=unconfined]
  16m   16m 1   {kubelet kube-node-1}                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)"

  16m   16m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id 10c450129edd
  16m   16m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id 10c450129edd; Security:[seccomp=unconfined]
  16m   16m 2   {kubelet kube-node-1}                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)"

  15m   15m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id 08305514ebb5
  15m   15m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id 08305514ebb5; Security:[seccomp=unconfined]
  15m   14m 3   {kubelet kube-node-1}                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)"

  14m   14m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id 215f0eba4273; Security:[seccomp=unconfined]
  14m   14m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id 215f0eba4273
  14m   12m 7   {kubelet kube-node-1}                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)"

  12m   12m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id 5cbed03c1ef3
  12m   12m 1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id 5cbed03c1ef3; Security:[seccomp=unconfined]
  12m   9m  12  {kubelet kube-node-1}                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)"

  9m    9m  1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id 3c6c2a8238d6; Security:[seccomp=unconfined]
  9m    9m  1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id 3c6c2a8238d6
  23m   8m  241 {kubelet kube-node-1}                       Warning FailedSync  (events with common reason combined)
  14m   5m  2   {kubelet kube-node-1}                       Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/secret/7360037c-8cc5-11e6-9c94-063b83564343-default-token-ay4ee" (spec.Name: "default-token-ay4ee") pod "7360037c-8cc5-11e6-9c94-063b83564343" (UID: "7360037c-8cc5-11e6-9c94-063b83564343") with: Get https://192.168.42.100:443/api/v1/namespaces/kube-system/secrets/default-token-ay4ee: dial tcp 192.168.42.100:443: i/o timeout
  14m   5m  12  {kubelet kube-node-1}                       Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/secret/7360037c-8cc5-11e6-9c94-063b83564343-default-token-ay4ee" (spec.Name: "default-token-ay4ee") pod "7360037c-8cc5-11e6-9c94-063b83564343" (UID: "7360037c-8cc5-11e6-9c94-063b83564343") with: Get https://192.168.42.100:443/api/v1/namespaces/kube-system/secrets/default-token-ay4ee: dial tcp 192.168.42.100:443: getsockopt: connection refused
  17m   3m  8   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Pulling     pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0"
  17m   3m  8   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Pulled      Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0"
  3m    3m  1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Created     Created container with docker id f84b355b51e6; Security:[seccomp=unconfined]
  3m    3m  1   {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Normal  Started     Started container with docker id f84b355b51e6
  8m    10s 38  {kubelet kube-node-1}                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-1655269645-8sk3k_kube-system(7360037c-8cc5-11e6-9c94-063b83564343)"

  16m   10s 69  {kubelet kube-node-1}   spec.containers{kubernetes-dashboard}   Warning BackOff Back-off restarting failed docker container
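
(The SetupNetworkError events above show the kubelet failing to get an IP from Weave's allocator at 127.0.0.1:6784, consistent with the crashlooping weave-net pod. A sketch for digging into Weave itself; the container name weave and the /status endpoint are assumptions based on the stock weave-kube manifest:)

$ kubectl logs --namespace=kube-system weave-net-y9iu2 -c weave   # logs of the Weave router container in the crashlooping pod
$ curl http://127.0.0.1:6784/status                               # run on the affected node; connection refused confirms the CNI plugin is down
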
colemickens commented 8 years ago

Look at the events: it failed to start the container.

kubectl logs --namespace=kube-system kubernetes-dashboard-1655269645-8sk3k will hopefully tell us what's really going on now that we're looking at the right Pod.

kking124 commented 8 years ago

Ok, so it looks like the server is trying to do a hostname lookup. Why would it do that if the node already reported in to the master?

kubectl logs --namespace=kube-system kubernetes-dashboard-1655269645-8sk3k
Error from server: Get https://kube-node-1:10250/containerLogs/kube-system/kubernetes-dashboard-1655269645-8sk3k/kubernetes-dashboard: dial tcp: lookup kube-node-1 on 192.168.42.1:53: no such host
colemickens commented 8 years ago

The apiserver needs to talk to the kubelet on the nodes to retrieve the logs for the dashboard. For some reason your apiserver can't connect to your node's kubelet properly. (This is an error from the kubectl command, not an error from kube-dashboard, though it could be related)
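
(Since the failure is a DNS lookup of kube-node-1, two quick checks from the master:)

$ getent hosts kube-node-1                              # can the master resolve the node's hostname at all?
$ kubectl describe node kube-node-1 | grep -i address   # the address the node registered with the apiserver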

kking124 commented 8 years ago

Ok, I understand that; do I need to be able to resolve the kubelet hostname to do that, though? I don't have an internal DNS server on this network.

I assumed that when the node reported in, it would provide an ip address.

Shouldn't kube-dns take care of that resolution for me?

colemickens commented 8 years ago

kube-dns is only for resolving Services deployed in Kubernetes.

I think @justinsb might actually have some context for this issue. I think one way to work around it is to change a kubelet flag so the kubelet reports its IP instead of its hostname to the master.
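
(For reference, the flag in question is most likely the kubelet's --hostname-override. A minimal sketch, assuming a systemd-managed kubelet as installed by kubeadm; the exact unit file and drop-in path vary by install:)

$ sudo systemctl edit --full kubelet   # add --hostname-override=192.168.42.115 (kube-node-1's IP, per the describe output above) to the kubelet arguments
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet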

kking124 commented 8 years ago

I could also just put the info in the hosts file, right?
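
(Yes; a minimal sketch on the master, reusing kube-node-1's IP from the describe output above:)

$ echo "192.168.42.115 kube-node-1" | sudo tee -a /etc/hosts   # lets the apiserver resolve the node's hostname when proxying logs/exec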

justinsb commented 8 years ago

Sounds like a kubeadm problem, not a dashboard problem. kubeadm is in alpha, but if you want to contribute towards its development you can probably work with the sig-cluster-lifecycle.

kking124 commented 8 years ago

@justinsb ... that's great?

  1. How do we determine who owns the problem?
  2. If it's not a dashboard issue, where do I report the issue?
justinsb commented 8 years ago

@kking124 kubeadm lives in https://github.com/kubernetes/kubernetes

kking124 commented 8 years ago

@justinsb That solves 2 from my last list; how do we deal with 1?

Edit: changed the text so that markdown doesn't auto-link to unrelated issues.

justinsb commented 8 years ago

@kking124 well you tracked it down :-)

kube-system weave-net-y9iu2 1/2 CrashLoopBackOff 6 3h

I think the kubeadm folk can help you root cause from here. They've likely seen it before - they've seen a bunch of the common failure cases by now.

kking124 commented 8 years ago

@justinsb you've been super helpful.

Please never help me again. I promise I'll never help you.

justinsb commented 8 years ago

@kking124 I don't know what you expect me to do; I'm trying to help you: I've pointed you to the places where you can find the people who can resolve this for you very quickly.

kking124 commented 8 years ago

@justinsb. Well, I expected nothing from you.

@colemickens has been nothing but nice, polite and helpful. (S)he apparently felt that you might be able to help with what I am guessing is an issue beyond their capabilities.

Their opinion, sadly, has no basis in reality.

It took an insult to get real information from you ... you might want to re-evaluate how you give people "help" ...

digitalfishpond commented 8 years ago

Hi @kking124, since @justinsb has redirected you to the appropriate group, who are better informed and better equipped to solve your issue, and I see that you have opened the same issue over there, would it be OK to close this one, as it has turned out not to be a dashboard-related issue? Many thanks in advance! :)

kking124 commented 8 years ago

I'm not yet convinced that there isn't a dashboard-related issue here. I've tried to launch the dashboard in a half-dozen different Kubernetes clusters at this point and it's never worked.

I simply don't yet know enough about the problem to know where to place blame on the crash because I am new to Kubernetes.

I can only place limited blame on you and justinsb for why I can't yet find the problem, though.

Close it if you want, since you think you know better; why would you want to actually provide help to someone trying to use your software, after all?

PS: nice edit to remove the inappropriate comment that showed up in the original post in my email :)

cheld commented 8 years ago

I just wrote a troubleshooting guide which might help you: https://github.com/kubernetes/dashboard/pull/1324

kking124 commented 8 years ago

@cheld I'll take a look. Thank you.

vhosakot commented 7 years ago

I see this issue too with kubernetes 1.5.4 and kubernetes-dashboard image version gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0.

I installed kubeadm referring to https://kubernetes.io/docs/getting-started-guides/kubeadm/, and then installed kubernetes-dashboard by doing

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.0/src/deploy/kubernetes-dashboard.yaml

I see the kubernetes-dashboard pod in CrashLoopBackOff status, and the k8s_kubernetes-dashboard.* container on the worker is in the Exited state.

Below are the errors. Has anyone successfully installed kubernetes-dashboard on kubeadm?

# kubectl --namespace=kube-system get all
NAME                                                          READY     STATUS             RESTARTS   AGE
po/calico-policy-controller-mqsmh                             1/1       Running            0          4h
po/canal-etcd-tm2rv                                           1/1       Running            0          4h
po/canal-node-3nv2t                                           3/3       Running            0          4h
po/canal-node-5fckh                                           3/3       Running            1          4h
po/canal-node-6zgq8                                           3/3       Running            0          4h
po/canal-node-rtjl8                                           3/3       Running            0          4h
po/dummy-2088944543-09w8n                                     1/1       Running            0          4h
po/etcd-vhosakot-kolla-kube1.localdomain                      1/1       Running            0          4h
po/kube-apiserver-vhosakot-kolla-kube1.localdomain            1/1       Running            2          4h
po/kube-controller-manager-vhosakot-kolla-kube1.localdomain   1/1       Running            0          4h
po/kube-discovery-1769846148-pftx5                            1/1       Running            0          4h
po/kube-dns-2924299975-9m2cp                                  4/4       Running            0          4h
po/kube-proxy-0ndsb                                           1/1       Running            0          4h
po/kube-proxy-h7qrd                                           1/1       Running            1          4h
po/kube-proxy-k6168                                           1/1       Running            0          4h
po/kube-proxy-lhn0k                                           1/1       Running            0          4h
po/kube-scheduler-vhosakot-kolla-kube1.localdomain            1/1       Running            0          4h
po/kubernetes-dashboard-3203962772-mw26t                      0/1       CrashLoopBackOff   11         41m

NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
svc/canal-etcd             10.96.232.136    <none>        6666/TCP        4h
svc/kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   4h
svc/kubernetes-dashboard   10.100.254.77    <nodes>       80:30085/TCP    41m

NAME                   DESIRED   SUCCESSFUL   AGE
jobs/configure-canal   1         1            4h

NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-discovery         1         1         1            1           4h
deploy/kube-dns               1         1         1            1           4h
deploy/kubernetes-dashboard   1         1         1            0           41m

NAME                                 DESIRED   CURRENT   READY     AGE
rs/calico-policy-controller          1         1         1         4h
rs/dummy-2088944543                  1         1         1         4h
rs/kube-discovery-1769846148         1         1         1         4h
rs/kube-dns-2924299975               1         1         1         4h
rs/kubernetes-dashboard-3203962772   1         1         0         41m

# kubectl --namespace=kube-system describe pod kubernetes-dashboard-3203962772-mw26t
  20m    5s    89    {kubelet vhosakot-kolla-kube2.localdomain}                        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-3203962772-mw26t_kube-system(67b0d69b-0b47-11e7-8c97-7a2ed4192438)"

# kubectl --namespace=kube-system logs kubernetes-dashboard-3203962772-mw26t
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md

# docker ps -a | grep -i dash
3c33cf43d5e4        gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0   "/dashboard --port=90"   54 seconds ago      Exited (1) 22 seconds ago                       k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4

# docker logs k8s_kubernetes-dashboard.9eb4d80e_kubernetes-dashboard-3203962772-mw26t_kube-system_67b0d69b-0b47-11e7-8c97-7a2ed4192438_93520bd4
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
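
(A timeout reaching 10.96.0.1:443, the in-cluster kubernetes Service VIP, usually means pod-to-service traffic is broken on that node, i.e. a CNI or kube-proxy problem rather than a dashboard bug. Two hedged checks on the worker running the dashboard pod:)

# curl -k https://10.96.0.1:443/version    # is the service VIP reachable from the node itself?
# iptables-save | grep 10.96.0.1           # kube-proxy should have programmed NAT rules for this VIP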
chongyaowang commented 7 years ago

@vhosakot @kking124 is this issue solved? I have a similar issue.

floreks commented 7 years ago

It is a very old issue. From what I can see, it was a configuration issue, not directly related to Dashboard.

kevinhooke commented 7 years ago

@vhosakot same issue for me too. I set up Kubernetes on CentOS 7 with kubeadm, tried to set up the Dashboard following the install steps, and got exactly the same issue you're seeing.

kking124 commented 7 years ago

As the OP, I can't verify this is still an issue.

Since posting the issue, I've not done much with kube, but all my servers have been off for the last 6 months because of multiple moves. I just settled into my new place last week. Give me a month to run cables and get my testing servers back up?

I'll report on whatever the current Kube version is at that time, on top of CentOS.

Just remember that this is what I run my home testing servers for work on, so I am ... sensitive to crashes.

I've been running unmarshalled CentOS since the crash, just FYI.


ningg commented 6 years ago

I hit the same problem.

$ kubectl get po --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running            0          2m
kube-system   kube-addon-manager-minikube             1/1       Running            4          2h
kube-system   kube-apiserver-minikube                 1/1       Running            0          2m
kube-system   kube-controller-manager-minikube        1/1       Running            0          2m
kube-system   kube-dns-86f4d74b45-x2gn8               3/3       Running            8          2h
kube-system   kube-proxy-62mwx                        1/1       Running            0          1m
kube-system   kube-scheduler-minikube                 1/1       Running            0          2m
kube-system   kubernetes-dashboard-5498ccf677-fh2rj   0/1       CrashLoopBackOff   11         2h
kube-system   storage-provisioner                     1/1       Running            4          2h

The solution is:

$ minikube stop
$ minikube delete
$ minikube start