GhostPratt opened 2 years ago
I'm having the same problem here.
kube-system/coredns-8494f9c688-rjvmg:coredns
plugin/kubernetes: Get "https://10.32.0.1:443/version?timeout=32s": dial tcp 10.32.0.1:443: i/o timeout
I didn't understand the Kube API server service CIDR routing logic (10.32.0.0/24 in this case).
In my case, I have an on-premises environment. I don't have any network firewall rules, and all hosts can talk to each other. All worker nodes have local Linux routes for the pod network (10.200.0.0/24).
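For reference, in an on-premises setup like this, Kubernetes the Hard Way expects every host to carry a static route to each other node's pod subnet. A minimal sketch of what that looks like; the worker IP and pod CIDR below are hypothetical, substitute your own:

```shell
# Hypothetical topology: worker-1 (192.168.1.12) owns pod subnet 10.200.1.0/24.
# On every OTHER node, add a route sending that subnet to worker-1:
sudo ip route add 10.200.1.0/24 via 192.168.1.12

# Verify the route is installed:
ip route show 10.200.1.0/24
```

If a route like this is missing on the controller or on any worker, pod-to-pod (and pod-to-apiserver return) traffic for that subnet is silently dropped.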
@GhostPratt @danilo-lopes Did you ever wind up figuring this out?
@mguttsait I don't have this cluster anymore; I'd have to set it up again.
I have two PCs (not VMs), one as the controller node and one as the worker node; both get IP addresses from the DHCP server on the home ADSL router. I tried kube-dns and coredns. In both cases the DNS pods try to dial the service clusterIP of the API server (10.32.0.1:443) and get an i/o timeout; they never succeed. Still digging for solutions.
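One thing worth checking when this timeout appears: the clusterIP 10.32.0.1 is virtual, and only works on a node where kube-proxy has installed NAT rules translating it to the API server's real address. A diagnostic sketch, assuming kube-proxy in iptables mode and the Hard Way default API server port 6443 (substitute your controller's real IP for the placeholder):

```shell
# Does kube-proxy have a NAT rule for the kubernetes Service clusterIP?
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.32.0.1

# Compare reachability: the API server's real address vs. the virtual clusterIP.
nc -vz -w 3 <controller-ip> 6443   # real address -- should succeed
nc -vz -w 3 10.32.0.1 443          # virtual clusterIP -- fails if NAT rules are missing
```

If the direct connection works but the clusterIP times out, the problem is on the node (kube-proxy not running, or its rules not installed), not in the network between the hosts.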
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
```

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
```
but change the versions in the file (one place)
Running kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml pulls and applies the configuration successfully.
The coredns pods, though, are failing to come up at all, falling into a CrashLoopBackOff and restarting indefinitely. Pulling back the logs, I'm getting:
plugin/kubernetes: Get "https://10.32.0.1:443/version?timeout=32s": dial tcp 10.32.0.1:443: i/o timeout
plugin/kubernetes: Get "https://10.32.0.1:443/version?timeout=32s": dial tcp 10.32.0.1:443: i/o timeout
Other times there have been no logs whatsoever. Any assistance would be greatly appreciated.
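When a CrashLoopBackOff pod produces no logs, the previous container's logs and the pod events usually explain the restart. A sketch of the usual checks (the pod name is a placeholder; take it from `kubectl get pods -n kube-system`):

```shell
# Logs from the container instance that crashed, not the current one:
kubectl -n kube-system logs <coredns-pod-name> --previous

# Pod events (image pull errors, probe failures, OOM kills, etc.):
kubectl -n kube-system describe pod <coredns-pod-name>

# Confirm the "kubernetes" Service actually has the API server as an endpoint:
kubectl get endpoints kubernetes
```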