abrarshivani opened this issue 7 years ago
We are waiting for the team's response on the strategy, as well as for the Kubernetes folks to confirm inclusion of the cloud provider changes.
I am moving this to M1. The upstream PR is already out for review, so we are not blocked on this one.
As per @divyenpatel's comment on vmware/kubernetes#133, kubernetes#45201 has fixed this problem by making the vSphere cloud provider return the same internal and external IP. The fix has been cherry-picked for Kubernetes 1.6 only. We need to try out this fix on 1.6.
Kubernetes 1.5.* versions don't have the IP fix, so api.clustername.skydns.local won't be accessible for those deployments.
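For reference, the essence of the kubernetes#45201 fix is that the cloud provider reports the node's IP under both address types. Here is a minimal sketch of that behavior; the function name, signature, and IP-discovery are illustrative, not the actual upstream code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeAddresses sketches the post-fix behavior: the provider returns
// the same IP as both NodeInternalIP and NodeExternalIP, so consumers
// that read either type resolve to a reachable address. The ip
// parameter stands in for whatever address the provider discovers
// for the VM.
func nodeAddresses(ip string) []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ip},
		{Type: v1.NodeExternalIP, Address: ip},
	}
}

func main() {
	for _, addr := range nodeAddresses("10.0.0.5") {
		fmt.Printf("%s: %s\n", addr.Type, addr.Address)
	}
}
```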
This issue was moved to kubernetes/kops#2744
Currently, we are not able to connect externally to a Kubernetes cluster launched by Kops on vSphere. This is because the DNS name Kops uses for external connections to the cluster is mapped to the IP address of the Docker interface.
The dns-controller in Kops updates DNS records in CoreDNS. It watches all Kubernetes nodes and pods to keep the records current. It sets api.clustername to the external IP reported in the master's node addresses, and api.internal.clustername to the IP of the apiserver pod. Since the kubelet is not running in standalone mode, it assigns the internal IP address reported by the vSphere CloudProvider to the apiserver pod. (Note: the apiserver pod uses the host network.) The vSphere CloudProvider should map the internal and external IP addresses appropriately to resolve this issue.
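To make the failure mode concrete, here is a rough sketch of the kind of address selection described above (an illustration of the logic, not the actual dns-controller source): the record for api.clustername is taken from the master's NodeExternalIP, so if the cloud provider reports the Docker bridge address under that type, an unreachable IP gets published.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// externalIP illustrates how a controller like Kops' dns-controller can
// pick the address to publish for api.clustername: it scans the node's
// reported addresses for the NodeExternalIP type. If the cloud provider
// maps that type to the wrong interface (e.g. docker0), the DNS record
// points at an address that is not reachable from outside the node.
func externalIP(node *v1.Node) (string, bool) {
	for _, addr := range node.Status.Addresses {
		if addr.Type == v1.NodeExternalIP {
			return addr.Address, true
		}
	}
	return "", false
}

func main() {
	master := &v1.Node{
		Status: v1.NodeStatus{
			Addresses: []v1.NodeAddress{
				// 172.17.0.1 is the default docker0 bridge address,
				// standing in here for the misreported external IP.
				{Type: v1.NodeExternalIP, Address: "172.17.0.1"},
				{Type: v1.NodeInternalIP, Address: "10.0.0.5"},
			},
		},
	}
	if ip, ok := externalIP(master); ok {
		fmt.Println("api.clustername ->", ip)
	}
}
```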
Without this fix, we need to run e2e tests and kubectl commands by logging into the master node. With this fix, we can run them externally.
Created an issue for the vSphere CloudProvider here: https://github.com/vmware/kubernetes/issues/133