Closed Rahul91 closed 8 years ago
@floreks thanks for your help. I will post this on kubeadm channel.
Yes, this is also what I'd like to know: "Has anyone successfully installed kubernetes-dashboard on kubeadm?" I have wasted more than a week going through the documentation on the Kubernetes site, but there is no explanation and no real how-to for this. It looks like the team has some issues with know-how, and with knowing how to write a how-to.
kubectl -n kube-system get service kubernetes-dashboard
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.105.94.132
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Getting response:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
Any ideas?
"Has anyone successfully installed kubernetes-dashboard on kubeadm?"
I believe that thousands of people did.
The whole setup is explained. You should read our wiki pages like @floreks recommended. If you have any specific questions ask them here or in a new issue.
Well, somehow you have found your way to this repository, yet I can only assume that it is too difficult to read the Getting started section of our main README, execute 2 commands, and open 1 link to access Dashboard.
And yes, it works on kubeadm as we are (dev team) using it...
PS. Instead of going through issues and blaming us, you could have just created an issue, described your problem in detail, and waited for help.
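For reference, the "2 commands and 1 link" flow looks roughly like this (a sketch; the manifest URL and version below are examples from the v1.10.x era of this thread — check the README for the current one):

```shell
# Deploy the recommended Dashboard manifest (version/path here is an example).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# Start a local proxy to the apiserver.
kubectl proxy

# Then open in a browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```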
See related issue. https://github.com/kubernetes/dashboard/issues/1303
sysctl net.ipv4.ip_forward=1 https://kubernetes.io/docs/concepts/cluster-administration/networking/
Actually, what worked for me was to run this command on the nodes: sudo iptables -P FORWARD ACCEPT
The problem was that packets were not leaving the nodes, so none of the pods that were running on the nodes (and not the master) had any connectivity.
Found the solution in this related post: https://github.com/kubernetes/kubernetes/issues/45022
To make this change persistent, add this line to /etc/sysctl.conf (I'm using Ubuntu 16.04): net.ipv4.ip_forward=1
Then, if you run sudo iptables-save, you should see the FORWARD chain policy set to ACCEPT:
*filter
:FORWARD ACCEPT [4:1088]
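Putting the two fixes above together, the node-side steps might look like this (a sketch for Ubuntu 16.04, following the commands quoted above):

```shell
# Enable IP forwarding for the running kernel.
sudo sysctl -w net.ipv4.ip_forward=1

# Persist it across reboots (append only if the line is not already there).
grep -qx 'net.ipv4.ip_forward=1' /etc/sysctl.conf || \
  echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf

# Allow forwarded traffic through the filter table.
sudo iptables -P FORWARD ACCEPT

# Verify: the FORWARD chain policy should now be ACCEPT.
sudo iptables-save | grep -- ':FORWARD ACCEPT'
```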
"Has anyone successfully installed kubernetes-dashboard on kubeadm?"
I believe that thousands of people did.
I'm willing to bet thousands more have not
I'm the thousand-and-first who can't get it working, even after following everything @floreks did.
I'm still getting the messages below, after trying for 48 hours.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
Getting the following:
http://MASTERIP:9999/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Getting response:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-dashboard",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
    "uid": "830b32c4-d9fe-11e8-b980-025000000001",
    "resourceVersion": "1220904",
    "creationTimestamp": "2018-10-27T15:39:37Z",
    "labels": {
      "k8s-app": "kubernetes-dashboard"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"kubernetes-dashboard\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 443,
        "targetPort": 8443,
        "nodePort": 31019
      }
    ],
    "selector": {
      "k8s-app": "kubernetes-dashboard"
    },
    "clusterIP": "10.101.12.65",
    "type": "NodePort",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "hostname": "localhost"
        }
      ]
    }
  }
}
My problem is solved.
It was because Docker didn't successfully pull k8s.gcr.io/kubernetes-dashboard-amd64.
Check with docker images
to make sure k8s.gcr.io/kubernetes-dashboard-amd64 is present.
Diagnostic output:
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
etcd-docker-for-desktop 1/1 Running 0 30d
kube-apiserver-docker-for-desktop 1/1 Running 0 30d
kube-controller-manager-docker-for-desktop 1/1 Running 2 30d
kube-dns-86f4d74b45-p2xmk 3/3 Running 0 30d
kube-proxy-mbfbb 1/1 Running 0 30d
kube-scheduler-docker-for-desktop 1/1 Running 0 30d
kubernetes-dashboard-7b9c7bc8c9-pkhqk 0/1 ImagePullBackOff 0 1h
or
kubernetes-dashboard-7b9c7bc8c9-pkhqk 0/1 ErrImagePull 0 1h
kubectl describe pod kubernetes-dashboard-7b9c7bc8c9-pkhqk --namespace=kube-system
Warning Failed 18m (x4 over 21m) kubelet, docker-for-desktop Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal Pulling 19m (x4 over 21m) kubelet, docker-for-desktop pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"
Warning Failed 18m (x4 over 21m) kubelet, docker-for-desktop Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
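When the kubelet can't reach k8s.gcr.io (common behind proxies or firewalls), it can help to pull the failing image by hand on the node to see the real error, then let the kubelet retry. A sketch, using the image and pod name from the output above:

```shell
# Try pulling the failing image directly; this surfaces the underlying
# network/proxy error much faster than the kubelet's backoff loop.
docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

# Confirm the image is now present locally.
docker images | grep kubernetes-dashboard-amd64

# Delete the stuck pod so its Deployment recreates it and retries the pull.
kubectl delete pod kubernetes-dashboard-7b9c7bc8c9-pkhqk --namespace=kube-system
```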
Same here.
I found that one of my nodes was unhealthy. Scaled down to kill the node and scaled up to create a new one.
This is not a dashboard issue. First check your cluster networking and DNS: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
There is a simple busybox container that allows you to do a basic networking/DNS check. Run the container and execute
kubectl exec -ti busybox -- nslookup kubernetes.default
If this fails to resolve, then you have a cluster issue, and any application that needs to connect to another app in the cluster through services will fail. For this kind of issue, create one in the core repository. They are more experienced in this area.
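The busybox check described above can be run end to end like this (a sketch; busybox is pinned to 1.28 because, as far as I know, nslookup in newer busybox images is unreliable for this test):

```shell
# Start a throwaway busybox pod that just sleeps.
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600

# Run the DNS check against the kubernetes service.
kubectl exec -ti busybox -- nslookup kubernetes.default

# Clean up afterwards.
kubectl delete pod busybox
```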
@floreks I have still this error:
kubectl logs kubernetes-dashboard-5f7b999d65-pgdgr -n kube-system
2019/03/31 16:30:47 Starting overwatch
2019/03/31 16:30:47 Using in-cluster config to connect to apiserver
2019/03/31 16:30:47 Using service account token for csrf signing
2019/03/31 16:31:17 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
But I am able to nslookup kubernetes.default from an ubuntu pod:
root@my-shell-75b487f578-hhkkc:/# nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
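Since DNS resolves here but the dashboard still times out reaching 10.96.0.1:443, a useful next step is to test raw connectivity to the apiserver service IP from inside a pod (a sketch, reusing the shell pod from the output above):

```shell
# DNS works, so test the actual route to the apiserver service IP.
kubectl exec -ti my-shell-75b487f578-hhkkc -- \
  curl -k --max-time 10 https://10.96.0.1:443/version

# If this also times out, the problem is pod-to-service routing
# (kube-proxy or the CNI plugin), not the dashboard itself.
```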
Dashboard is not working... and it isn't a firewall issue; the check using curl works fine.
kube-master 1.14.1 kube-slave1 1.14.1 kube-slave2 1.14.1
Every 1,0s: kubectl -n kube-system get all -o wide Sun Apr 28 16:51:28 2019
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/coredns-fb8b8dccf-cnlz4 1/1 Running 0 19m 172.16.0.13 k8s-n1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kube-dns ClusterIP 10.96.0.10
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/kube-flannel-ds-amd64 3 3 3 3 3 beta.kubernetes.io/arch=amd64 19m kube-flannel quay.io/coreos/flannel:v0.11.0-amd64 app=flannel,tier=node
daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 19m kube-flannel quay.io/coreos/flannel:v0.11.0-arm app=flannel,tier=node
daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 19m kube-flannel quay.io/coreos/flannel:v0.11.0-arm64 app=flannel,tier=node
daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 19m kube-flannel quay.io/coreos/flannel:v0.11.0-ppc64le app=flannel,tier=node
daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 19m kube-flannel quay.io/coreos/flannel:v0.11.0-s390x app=flannel,tier=node
daemonset.apps/kube-proxy 3 3 3 3 3
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS             IMAGES                                          SELECTOR
deployment.apps/coredns                2/2     2            2           19m    coredns                k8s.gcr.io/coredns:1.3.1                        k8s-app=kube-dns
deployment.apps/kubernetes-dashboard   1/1     1            1           4m2s   kubernetes-dashboard   k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1   k8s-app=kubernetes-dashboard
[root@k8s-n1 dashboard]# kubectl logs pod/kubernetes-dashboard-5d5958d7b5-jhx2d -n kube-system
2019/04/28 14:49:42 Starting overwatch
2019/04/28 14:49:42 Using apiserver-host location: https://10.96.0.1:443
2019/04/28 14:49:42 Skipping in-cluster config
2019/04/28 14:49:42 Using random key for csrf signing
2019/04/28 14:50:12 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
When the pod starts, it runs fine until about a minute passes, then it times out and goes into CrashLoopBackOff.
[root@k8s-n1 dashboard]# ssh root@k8s-n3
Last login: Sun Apr 28 16:49:47 2019 from k8s-n1
[root@k8s-n3 ~]# curl https://10.96.0.1:443/version -k
{
  "major": "1",
  "minor": "14",
  "gitVersion": "v1.14.1",
  "gitCommit": "b7394102d6ef778017f2ca4046abbaa23b88c290",
  "gitTreeState": "clean",
  "buildDate": "2019-04-08T17:02:58Z",
  "goVersion": "go1.12.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}
After trying out every fix I found, what finally granted me access to the dashboard was this URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/
Notice this part: /https:kubernetes-dashboard:https/
Without adding the https it didn't work for me and I always got "no endpoints available for service "kubernetes-dashboard"".
Found the working link in the readme here: https://github.com/helm/charts/tree/master/stable/kubernetes-dashboard
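As I understand it, the general shape of the apiserver's service proxy path is the pattern below (bracketed parts optional); the leading "https:" matters because the dashboard serves only HTTPS:

```shell
# Service proxy path pattern:
#   /api/v1/namespaces/<namespace>/services/[<scheme>:]<service-name>[:<port-name>]/proxy/
# "https:" tells the apiserver to proxy to the backend over TLS, which the
# dashboard requires since it listens only on HTTPS (port 8443).
kubectl proxy &
curl -s http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/
```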
@Rahul91 As your master is located on an external server, try to provide the
apiserver-host
parameter to the dashboard. It is commented out inside the yaml file. Without providing it, Dashboard tries to discover the master node locally.
@arhided How are you running your cluster? Locally? AWS/GCE?
@floreks, Thanks a ton for your suggestion. I was stuck with the same issue, with no endpoint available for the dashboard. My pod was not up because my apiserver host (my master) is on an external IP. After I provided this parameter in recommended.yaml, my dashboard pod and its endpoint came up on the pod IP.
Cheers, Sriram
In case this helps someone (after being incredibly frustrated trying to get this working)... Thanks to all those who commented above!
(I was getting an error similar to the OP, with no endpoints available for the service when accessing the URL, and the logging showing: Error: 'dial tcp 10.100.22.2:9090: i/o timeout' Trying to reach: 'http://10.100.22.2:9090/')
Raspbian Buster, 3x Raspberry Pi 4 cluster. Wasn't able to access the dashboard by following the instructions - dashboard pod not running on the master, using flannel, setup mostly following the guide here: teamserverless/k8s-on-raspbian Guide (with some badly formatted notes on my fork here)
This worked for me to get dashboard working after running
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
per the instructions...
On each node, edit /etc/sysctl.d/99-sysctl.conf
sudo nano /etc/sysctl.d/99-sysctl.conf
uncomment the line
net.ipv4.ip_forward=1
add the lines
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
reboot
run kubectl proxy on the master
on the master (gui desktop), use your browser to navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
That takes you to the token login page.... :)
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
This won't work with recommended config. You can see the login page, but you won't be able to log in. Please remove this part, because people will try it and then create an issue as login will not work. This is not a bug though.
Awesome - thanks (edited and removed that bit)
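The per-node sysctl edits above can also be scripted (a sketch; it writes the settings to a dedicated drop-in file instead of editing /etc/sysctl.d/99-sysctl.conf by hand — the file name 98-kubernetes.conf is my own choice, not from the guide):

```shell
# Write the three settings to a dedicated sysctl drop-in (name is arbitrary).
sudo tee /etc/sysctl.d/98-kubernetes.conf >/dev/null <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply all sysctl config without a reboot.
sudo sysctl --system
```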
Issue details
Unable to access dashboard on http://master_ip/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Message "no endpoints available for service "kubernetes-dashboard""
I did the steps given in http://kubernetes.io/docs/user-guide/ui-access/, but still no result.
When using v0.19.3, I was able to access the dashboard.
Observed result
Unable to access UI