kubernetes / dashboard

General-purpose web UI for Kubernetes clusters
Apache License 2.0

Can't access Kubernetes Dashboard from outside the cluster (VirtualBox host ) #9160

Open brokedba opened 2 weeks ago

brokedba commented 2 weeks ago

What happened?


Hi, I have Kubernetes provisioned in my Vagrant build using VirtualBox.

See git repo

I have set up port forwarding in the Vagrantfile as shown below:

  ...  # NOTE: This will enable public access to the opened port
  config.vm.network "forwarded_port", guest: 4443, host: 8444, id: 'awx_https'
  config.vm.network "forwarded_port", guest: 8090, host: 8090, id: 'awx_http'
  config.vm.network "forwarded_port", guest: 8443, host: 8443, id: 'kdashboard_console_https'
  config.vm.network "forwarded_port", guest: 8001, host: 8081, id: 'kdashboard_console_http'

Here is the Kubernetes setup, successfully deployed during provisioning:

==== Install the Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
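Not part of the original report, but a quick sanity check after the apply can save debugging later (this assumes the default `kubernetes-dashboard` ServiceAccount created by recommended.yaml):

```shell
# Wait until the Dashboard deployment is actually available
kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard

# Dashboard v2.7 login requires a bearer token; on Kubernetes >= 1.24
# one can be minted for the bundled ServiceAccount like this
# (note: that account has minimal permissions by default):
kubectl -n kubernetes-dashboard create token kubernetes-dashboard
```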

Endpoints:

# kubectl -n kubernetes-dashboard get endpoints -o wide
NAME                        ENDPOINTS              AGE
dashboard-metrics-scraper   192.168.102.131:8000   16h
kubernetes-dashboard        192.168.102.134:8443   16h

pods:

[root@localhost ~]# kubectl -n kubernetes-dashboard get pods -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE                    NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-5657497c4c-t5zz4   1/1     Running   0          16h   192.168.102.131   localhost.localdomain   <none>           <none>
kubernetes-dashboard-78f87ddfc-nbjwc         1/1     Running   0          16h   192.168.102.134   localhost.localdomain   <none>           <none>

Service:

[root@localhost ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.108.36.241   <none>        8000/TCP   17h
kubernetes-dashboard        ClusterIP   10.96.123.89    <none>        443/TCP    17h

1. Proxy: I tried with different ports (8001/443)

kubectl proxy
Starting to serve on 127.0.0.1:8001

HTTP URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

http://localhost:8090/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Test from the VirtualBox host:

Behavior: no access, ERR_CONNECTION_RESET
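A likely cause of the reset, worth checking: kubectl proxy binds to 127.0.0.1 inside the guest by default, so the VirtualBox NAT forward never reaches it; also note the Vagrantfile above maps guest 8001 to host 8081, not 8001. A sketch of a reachable setup (lab use only; `--accept-hosts='.*'` disables the proxy's host filtering):

```shell
# Inside the guest: bind the proxy to all interfaces so the
# VirtualBox NAT forward can reach it (insecure; lab use only)
kubectl proxy --address=0.0.0.0 --accept-hosts='.*' --port=8001

# From the VirtualBox host: the Vagrantfile forwards guest 8001
# to host 8081, so the host-side port to test is 8081
curl http://localhost:8081/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```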

2. Port forwarding:

Listen on port 8443 on all addresses, forwarding to port 443 on the service

kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8443:443 --address="0.0.0.0" &

Test from the VirtualBox host:

https://localhost:8443. Behavior: no access, ERR_CONNECTION_RESET
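A suggested diagnostic (not from the thread): test each hop separately, so you can tell whether the kubectl port-forward or the VirtualBox NAT layer is resetting the connection:

```shell
# Hop 1: inside the guest, against the port-forward itself
# (-k skips TLS verification; the Dashboard uses a self-signed cert)
curl -k https://127.0.0.1:8443/

# Hop 2: from the VirtualBox host, through the NAT forward
curl -k https://localhost:8443/
```

If hop 1 works but hop 2 fails, the problem is on the VirtualBox/Vagrant side rather than in Kubernetes.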

Am I missing something?

What did you expect to happen?

Access the Kubernetes Dashboard console.

How can we reproduce it (as minimally and precisely as possible)?

Clone the repo: https://github.com/brokedba/Devops_all_in_one_vm/tree/main/OL7
cd Devops_all_in_one_vm/OL7
vagrant up

Anything else we need to know?

No response

What browsers are you seeing the problem on?

No response

Kubernetes Dashboard version

v2.7.0

Kubernetes version

v1.28

Dev environment

No response

lprimak commented 2 weeks ago

I have everything working by exposing the dashboard like this:

# Expose k8s dashboard
k delete -n kubernetes-dashboard service/kubernetes-dashboard 
k expose -n kubernetes-dashboard deployment/kubernetes-dashboard --type="LoadBalancer" --port 443 --target-port 8443

lprimak commented 2 weeks ago

Oh and I also have MetalLB installed to expose external IP automatically

brokedba commented 2 weeks ago

@lprimak I'll give it a shot, but my Kubernetes install is on a VM, not a managed cluster within a cloud platform. I always thought LoadBalancer meant using a cloud-controller-manager-based resource.

I find the Kubernetes Dashboard documentation regarding access a bit confusing and insufficient.

lprimak commented 2 weeks ago

Yes. Same setup here. VM on premises

lprimak commented 2 weeks ago

Here is my MetalLB config for your reference: https://github.com/lprimak/infra/blob/main/scripts/cloud/oci/k8s/metallb-config.yaml

brokedba commented 2 weeks ago

I just tried it, and the URL still doesn't work:

localhost:8443

lprimak commented 2 weeks ago

Not localhost. You have to use the external IP for the dashboard and point your browser there. No 8443.

brokedba commented 2 weeks ago

There is no external IP for now, it's still pending.

brokedba commented 2 weeks ago

OK, I understand what you meant. I just don't know how MetalLB, once configured, can allow me to access the kube dashboard endpoint from my VirtualBox host, as I only use guest/host port forwarding for now. Even if I have the external IP, it's not going to be accessible from the host.

my --pod-network-cidr is 192.168.0.0/16 btw

lprimak commented 2 weeks ago

Yes. MetalLB needs to be installed and functioning to get an external IP. The external IP can be accessed from your local network because your VM host is on that network somehow, or you will need to make it accessible from the VirtualBox side.
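For reference, a minimal MetalLB L2 configuration along the lines of the one linked below might look like this (a sketch, not the exact file from the thread; the pool name matches the `service-pool` annotation seen later, and the 192.168.56.0/24 range is an assumption matching the VirtualBox host-only network):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: service-pool        # illustrative; matches the annotation seen later
  namespace: metallb-system
spec:
  addresses:
    - 192.168.56.100-192.168.56.120   # host-only network range (assumption)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - service-pool
```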

brokedba commented 2 weeks ago

Thank you @lprimak for the context. I'll install MetalLB, but I also hoped I could fix the basic proxy or port forwarding issue I shared.

brokedba commented 2 weeks ago

After checking the MetalLB docs, I created the address pool and L2Advertisement, then updated the kube dashboard service as you suggested; it now has an external IP. But it's still not accessible, although I can ping the IP from my host.

[root@localhost ~]# kgs -n kubernetes-dashboard
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP      10.108.36.241    <none>           8000/TCP        2d19h
kubernetes-dashboard        LoadBalancer   10.102.120.234   192.168.56.100   443:32702/TCP   45m

192.168.56.100:8443 --- times out

$ ping 192.168.56.100
PING 192.168.56.100 (192.168.56.100) 56(84) bytes of data.
64 bytes from 192.168.56.100: icmp_seq=1 ttl=255 time=0.445 ms

--- Telnet
 telnet 192.168.56.100 8443
Trying 192.168.56.100...
telnet: Unable to connect to remote host: Resource temporarily unavailable

 kubectl describe svc kubernetes-dashboard -n kubernetes-dashboard
Name:                     kubernetes-dashboard
Namespace:                kubernetes-dashboard
Labels:                   k8s-app=kubernetes-dashboard
Annotations:              metallb.universe.tf/ip-allocated-from-pool: service-pool
Selector:                 k8s-app=kubernetes-dashboard
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.120.234
IPs:                      10.102.120.234
LoadBalancer Ingress:     192.168.56.100
Port:                     <unset>  443/TCP
TargetPort:               8443/TCP
NodePort:                 <unset>  32702/TCP
Endpoints:                192.168.102.151:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
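Worth noting from the describe output above: the service listens on port 443 (with NodePort 32702), not 8443, so a timeout on 192.168.56.100:8443 is expected; nothing listens there. Two checks suggested by that same output:

```shell
# The LoadBalancer port is 443, so hit the external IP without :8443
curl -k https://192.168.56.100/

# Or use the NodePort that already exists (32702) against the node's
# own address (<node-ip> is a placeholder for the VM's host-only IP)
curl -k https://<node-ip>:32702/
```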

brokedba commented 2 weeks ago

Bottom line is that it didn't work! Kubernetes Dashboard is one of those tools that are nice on paper but not that simple to use. K9s does the job, and I would switch to Lens at worst if I need a UI.

brokedba commented 2 weeks ago

If you close issues automatically, at least add a line or two explaining why. This is not helpful for users at all.

lprimak commented 2 weeks ago

Not port 8443. Regular HTTPS port (443). And it looks like you closed that issue yourself (probably by accident :)

brokedba commented 2 weeks ago

Ouch, my apologies. I have been in projects where issues were closed before we got to the bottom of them. The reason behind the port switch is that I have port forwarding in Vagrant too, as shared in the OP:

...  # NOTE: This will enable public access to the opened port 
  config.vm.network "forwarded_port", guest: 8443, host: 8443, id: 'kdashboard_console_https' <---- 
  config.vm.network "forwarded_port", guest: 8001, host: 8081, id: 'kdashboard_console_http' 

lprimak commented 2 weeks ago

Then you need to use the same port in your Vagrant config as in your kubectl expose --port command. If they don't match, it won't work.

brokedba commented 2 weeks ago

I did it the other way around:

  config.vm.network "forwarded_port", guest: 443, host: 8443, id: 'kdashboard_console_https' <---- 

and kept the load balancer with the same port and endpoint, but it still times out.

[root@localhost ~]# kgs -n kubernetes-dashboard
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE 
kubernetes-dashboard        LoadBalancer   10.102.120.234   192.168.56.100   443:32702/TCP   45m

192.168.56.100:8443 --- times out

lprimak commented 2 weeks ago

Actually, I don't think you can forward your port that way. The external IP needs to be routable on your local network; then you can access it from any machine on that network.

lprimak commented 2 weeks ago

Also, there is a NodePort type of service available, and I think you can port-forward via Vagrant with that.
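A sketch of that NodePort route (the patch command and port 30443 are illustrative, not from the thread): pin the node port, then forward that same guest port in the Vagrantfile so the host side reaches it:

```shell
# Switch the service to NodePort and pin the node port to 30443
# (ports are merged by their "port" key in a strategic merge patch)
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30443}]}}'

# Then in the Vagrantfile, forward the same guest port:
#   config.vm.network "forwarded_port", guest: 30443, host: 8443
# and browse to https://localhost:8443 from the VirtualBox host
```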

brokedba commented 2 weeks ago

It is routable, since I can ping it from my host. The question is why the forwarded port 8443 is not working.

lprimak commented 2 weeks ago

But you are still forwarding ports. You need to access the external IP directly. NodePort can probably be used for your Vagrant forwards, but I don't use that in my setup.

brokedba commented 2 weeks ago

I can try:

  1. using 443 as the port on the host side in the Vagrant forwarding rule
  2. using NodePort

After that, I think I'll run out of options. Docker does a better job at exposing its containers.

lprimak commented 2 weeks ago

I suggest abandoning the port-forwarding idea; just use the external IP as intended (i.e. directly), without any port forwarding.

brokedba commented 1 week ago

I'll try without forwarding, but I don't think the external IP socket will work. Will let you know.