Open brokedba opened 2 weeks ago
I have everything working by exposing the dashboard like this:
# Expose k8s dashboard
k delete -n kubernetes-dashboard service/kubernetes-dashboard
k expose -n kubernetes-dashboard deployment/kubernetes-dashboard --type="LoadBalancer" --port 443 --target-port 8443
Oh and I also have MetalLB installed to expose external IP automatically
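For reference, the two imperative commands above can also be written as a single declarative manifest (a sketch; the name, namespace, and selector assume the unmodified v2.7.0 recommended.yaml):

```shell
# Replace the stock ClusterIP service with a LoadBalancer one, declaratively.
# (sketch; labels/selector assume the stock dashboard v2.7.0 deployment)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: LoadBalancer
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443
EOF
```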
@lprimak I'll give it a shot, but my Kubernetes install is on a VM, not a managed cluster within a cloud platform. I always thought LoadBalancer meant using a cloud-controller-manager-based resource.
I find the kube dashboard documentation regarding access a bit confusing and insufficient.
Yes. Same setup here. VM on premises
Here is my MetalLB config for your reference: https://github.com/lprimak/infra/blob/main/scripts/cloud/oci/k8s/metallb-config.yaml
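A minimal MetalLB L2 setup of the kind linked above typically looks like this (a sketch; the address range is an assumption chosen to sit on the VirtualBox host-only network, and the pool name matches the `service-pool` annotation seen later in this thread):

```shell
# Minimal MetalLB L2 config (sketch). Adjust the address range to a
# free block on your own network before applying.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: service-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.56.100-192.168.56.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - service-pool
EOF
```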
I just tried, and the URL still doesn't work.
Not localhost. You have to use the external IP of the dashboard service and point your browser there. No port 8443.
There is no external IP for now; it's still pending.
OK, I understand what you meant. I just don't know how MetalLB, once configured, can allow me to access the kube dashboard endpoint from my VirtualBox host, as I only use guest/host port forwarding for now. Even if I have the external IP, it's not going to be accessible from the host.
My --pod-network-cidr is 192.168.0.0/16, by the way.
Yes. MetalLB needs to be installed and functioning to get an external IP. The external IP can be accessed from your local network, provided your VM host is reachable on that network somehow; otherwise you will need to make it accessible from the VirtualBox side.
Thank you @lprimak for the context. I'll install MetalLB, but I also hoped I could fix the basic proxy or port forwarding issue I shared.
After checking the MetalLB docs, I created the IPAddressPool and L2Advertisement and updated the kube dashboard service as you suggested; it now has an external IP. But it is still not accessible, although I can ping the IP from my host.
[root@localhost ~]# kgs -n kubernetes-dashboard
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP      10.108.36.241    <none>           8000/TCP        2d19h
kubernetes-dashboard        LoadBalancer   10.102.120.234   192.168.56.100   443:32702/TCP   45m
192.168.56.100:8443 --- times out
$ ping 192.168.56.100
PING 192.168.56.100 (192.168.56.100) 56(84) bytes of data.
64 bytes from 192.168.56.100: icmp_seq=1 ttl=255 time=0.445 ms
--- Telnet
telnet 192.168.56.100 8443
Trying 192.168.56.100...
telnet: Unable to connect to remote host: Resource temporarily unavailable
kubectl describe svc kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: metallb.universe.tf/ip-allocated-from-pool: service-pool
Selector: k8s-app=kubernetes-dashboard
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.120.234
IPs: 10.102.120.234
LoadBalancer Ingress: 192.168.56.100
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32702/TCP
Endpoints: 192.168.102.151:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Bottom line is that it didn't work!! The Kubernetes dashboard is one of those tools that are nice on paper but not that simple to use. K9s does the job, and I would switch to Lens at worst if I need a UI.
If you close issues automatically, at least add a line or two explaining why. That is not helpful for users at all.
Not port 8443. The regular HTTPS port (443). And it looks like you closed that issue yourself (probably by accident :)
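A quick way to confirm that from the host (a sketch; the Service maps external port 443 to pod port 8443, so the test must target 443):

```shell
# -k skips verification of the dashboard's self-signed certificate.
curl -kI https://192.168.56.100/
# Basic L2 reachability check of the MetalLB-owned IP:
ping -c1 192.168.56.100
```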
Ouch, my apologies. I have been in projects where issues were closed before we got to the bottom of them. The reason behind the port switch is that I have port forwarding in Vagrant too, as shared in the OP:
... # NOTE: This will enable public access to the opened port
config.vm.network "forwarded_port", guest: 8443, host: 8443, id: 'kdashboard_console_https' <----
config.vm.network "forwarded_port", guest: 8001, host: 8081, id: 'kdashboard_console_http'
Then you need to use the same port in your Vagrant config as in your kubectl expose --port command. If they don't match, it won't work.
I did it the other way around:
config.vm.network "forwarded_port", guest: 443, host: 8443, id: 'kdashboard_console_https' <----
and kept the LoadBalancer with the same port and endpoint, but it still times out.
[root@localhost ~]# kgs -n kubernetes-dashboard
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE
kubernetes-dashboard   LoadBalancer   10.102.120.234   192.168.56.100   443:32702/TCP   45m
192.168.56.100:8443 --- times out
Actually, I don't think you can forward your port that way. The external IP needs to be routable on your local network; then you can access it from any machine on that network.
Also, there is a NodePort type of service available, and I think you can port-forward via Vagrant with that.
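A sketch of that NodePort approach (the fixed nodePort value 32443 is an assumption; any free port in the default 30000-32767 range works):

```shell
# Switch the dashboard service to NodePort with a fixed, known port,
# then forward that port from the VirtualBox host via Vagrant.
kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 32443}]}}'
# Vagrantfile: config.vm.network "forwarded_port", guest: 32443, host: 8443
# Then browse to https://localhost:8443 from the host.
```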
It is routable, since I can ping it from my host.
The question is why the forwarded port 8443 is not working.
But you are still forwarding ports. You need to access the external IP directly. NodePort can probably be used for your Vagrant forwards, but I don't use that in my setup.
I can try.
After that, I think I'll run out of options. Docker does a better job at exposing its containers.
I suggest abandoning the idea of port forwarding; just use the external IP as intended (i.e., directly), without any port forwarding.
I'll try without forwarding, but I don't think the external IP socket will work. Will let you know.
What happened?
Hi, I have Kubernetes provisioned in my Vagrant build using VirtualBox.
See git repo
I have set the port forwarding in the Vagrantfile as shown below.
Here is the Kubernetes setup successfully deployed during provisioning:
==== Install the Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Endpoints:
pods:
Service:
I tried both proxy and port forwarding, and neither worked:
1. Proxy: I tried with different ports (8001/443)
HTTP URL:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
http://localhost:8090/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Test from the VirtualBox host:
Behavior: no access, ERR_CONNECTION_RESET
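(One likely cause, assuming the proxy runs inside the guest with default flags: kubectl proxy binds only to 127.0.0.1, so connections coming in through a VirtualBox port forward get reset. To be reachable through the forward it would have to listen on all guest interfaces; a sketch:)

```shell
# Make the proxy listen beyond loopback so the Vagrant/VirtualBox
# port forward can reach it from the host.
kubectl proxy --address=0.0.0.0 --accept-hosts='.*' --port=8001
```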
2. Port forwarding:
Listen on port 8443 on all addresses, forwarding to 443 in the pod
Test from the VirtualBox host:
https://localhost:8443 Behavior: no access, ERR_CONNECTION_RESET
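(For reference, a port-forward invocation matching the "all addresses" output quoted above would be something like the following; `service/kubernetes-dashboard` as the target is an assumption, forwarding to a pod works the same way:)

```shell
# Without --address, kubectl port-forward listens only on 127.0.0.1
# inside the guest and a Vagrant forward from the host cannot reach it.
kubectl -n kubernetes-dashboard port-forward service/kubernetes-dashboard \
  --address 0.0.0.0 8443:443
```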
Am I missing something?
What did you expect to happen?
access Kubernetes console
How can we reproduce it (as minimally and precisely as possible)?
clone repo : https://github.com/brokedba/Devops_all_in_one_vm/tree/main/OL7
cd Devops_all_in_one_vm/OL7
vagrant up
Anything else we need to know?
No response
What browsers are you seeing the problem on?
No response
Kubernetes Dashboard version
v2.7.0
Kubernetes version
v1.28
Dev environment
No response