Open itsmetommy opened 5 years ago
I had this same issue. I was able to get around it by using the CoreDNS config in the original Kubernetes The Hard Way:
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
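Before testing name resolution, it can help to confirm the deployment actually came up. A quick sketch (the label selector and Service name assume the manifest above; adjust if yours differs):

```shell
# Check that the CoreDNS pods are Running in kube-system
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Check that the kube-dns Service exists and has the expected ClusterIP
kubectl get svc -n kube-system kube-dns
```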
I was then unable to get the correct response from nslookup on the busybox pod though:
% kubectl exec -ti busybox -- nslookup kubernetes
Server: 10.32.0.10
Address 1: 10.32.0.10
nslookup: can't resolve 'kubernetes'
command terminated with exit code 1
I'm still investigating that issue.
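For anyone hitting the same symptom, a couple of checks that can help narrow it down (assuming the CoreDNS deployment above):

```shell
# Confirm the pod is actually pointed at the cluster DNS Service
# (expect "nameserver 10.32.0.10" given the Service address above)
kubectl exec -ti busybox -- cat /etc/resolv.conf

# Look for errors or timeouts in the CoreDNS logs
kubectl logs -n kube-system -l k8s-app=kube-dns
```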
It looks like my issue was due to two route tables getting created in my VPC, with the subnet associated with (and the kubernetes tag applied to) the inactive route table. Once I moved the association and added the kubernetes tag to the correct route table, everything worked as expected.
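For reference, duplicate or mis-tagged route tables can be spotted from the CLI. A sketch (the VPC ID placeholder is mine; substitute your own):

```shell
# List every route table in the VPC along with its tags and subnet
# associations, to spot a duplicate or mis-tagged table
aws ec2 describe-route-tables \
  --filters Name=vpc-id,Values=vpc-xxxx \
  --query 'RouteTables[].{Id:RouteTableId,Tags:Tags,Assoc:Associations}'
```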
I had the exact same issue as @alexclarkofficial ...not sure how two different route tables got created, since in https://github.com/prabhatsharma/kubernetes-the-hard-way-aws/blob/master/docs/03-compute-resources.md it looks like only one is specified to be created. Maybe I accidentally ran the command twice with slightly different params or something.
Anyway, all good now!
kubectl exec -it busybox -- nslookup kubernetes
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
I'm having a problem getting DNS to work. It looks like the apiserver times out, but I can't figure out why. Any help is appreciated.
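Not sure of the root cause here, but a couple of generic checks that can help separate a DNS problem from an apiserver/routing problem (run from a machine with working kubectl access):

```shell
# Basic apiserver health check; a timeout here suggests a routing or
# security-group issue rather than a DNS one
kubectl get --raw /healthz

# Errors in the CoreDNS logs about reaching the apiserver also often
# point at routing (e.g. the wrong route table, as above)
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
```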