contiv / netplugin

Container networking for various use cases
Apache License 2.0

If we're deleting kube-dns, let's do it properly and remove dangling kube-dns k8s resources #1118

Closed vhosakot closed 6 years ago

vhosakot commented 6 years ago

kube-dns is pod-less, endpoint-less, backend-less, useless and dangling in contiv/netplugin's vagrant dev setup. This is a bit confusing and makes the user think that contiv uses kube-dns when it does not.

[vagrant@k8master ~]$ kubectl describe service kube-dns -n=kube-system  | grep -i end
Endpoints:         <none>
Endpoints:         <none>

[vagrant@k8master ~]$ kubectl get services -n=kube-system | grep -i dns
kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   10h

[vagrant@k8master ~]$ kubectl get serviceaccounts -n=kube-system | grep -i dns
kube-dns                             1         10h

[vagrant@k8master ~]$ kubectl get clusterrolebindings -n=kube-system | grep -i dns
system:kube-dns                                        10h

[vagrant@k8master ~]$ kubectl get endpoints -n=kube-system | grep -i dns
kube-dns                  <none>              10h

[vagrant@k8master ~]$ kubectl get secrets -n=kube-system | grep -i dns
kube-dns-token-8nqvc                             kubernetes.io/service-account-token   3         10h
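For reference, the per-kind greps above can be collapsed into two commands. This is just a convenience sketch, not from the PR itself; note that clusterrolebindings are cluster-scoped, so the `-n=kube-system` flag on that listing has no effect:

```shell
# Namespaced kube-dns leftovers in one query
kubectl get service,serviceaccount,endpoints,secret -n kube-system | grep -i dns

# Clusterrolebindings are cluster-scoped, so no namespace flag is needed
kubectl get clusterrolebindings | grep -i dns
```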

Contiv uses its own custom DNS in k8s:

https://github.com/contiv/netplugin/blob/master/docs/dns.md

The k8s dev confirmed that if we're using an external/custom DNS in k8s and deleting the kube-dns deployment, we need to delete all the remaining dangling, pod-less kube-dns resources (service, serviceaccount, clusterrolebinding, and endpoints) too, so that they do not conflict with contiv's external/custom DNS in any way.
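The cleanup described above can be sketched as follows. This is an illustrative sequence based on the resource names listed in this issue, not necessarily the exact commands used in the PR:

```shell
# Remove the dangling kube-dns resources left behind after the deployment is gone.
# Deleting the service also removes its endpoints object, and deleting the
# serviceaccount garbage-collects its token secret (kube-dns-token-*).
kubectl delete service kube-dns -n kube-system
kubectl delete serviceaccount kube-dns -n kube-system
kubectl delete clusterrolebinding system:kube-dns
```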

Signed-off-by: Vikram Hosakote vhosakot@cisco.com

vhosakot commented 6 years ago

build PR

vhosakot commented 6 years ago

Green gate with no dangling kube-dns!

vhosakot commented 6 years ago

Having dangling, pod-less resources is just bad design lol. The k8s dev confirmed this.

The dangling kube-dns service's port 53 conflicts with the NodePort 53 in the host network. Meenakshi saw this issue too:

[vagrant@k8master ~]$ kubectl describe service kube-dns -n=kube-system | grep 'IP:\|Target'
IP:                10.96.0.10
TargetPort:        53/UDP
TargetPort:        53/TCP

[vagrant@k8master ~]$ kubectl run -i --tty busybox --image=busybox
/ # cat /etc/resolv.conf | grep nameserver
nameserver 10.96.0.10

/ # nslookup localhost
Server:    10.96.0.10
Address 1: 10.96.0.10
Name:      localhost
Address 1: ::1 localhost
Address 2: 127.0.0.1 localhost

README.md has already been updated in this PR as well: https://github.com/contiv/netplugin/blob/master/install/k8s/README.md#using-contiv.