phagunbaya closed this issue 7 years ago
iptables
[root@wyml01 Falkonry-k8-installer]# iptables-save
# Generated by iptables-save v1.4.21 on Fri Mar 3 13:23:40 2017
*nat
:PREROUTING ACCEPT [2:156]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-SAGRE6MUSU7ISKH2 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-SAGRE6MUSU7ISKH2 -s 10.160.20.150/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-SAGRE6MUSU7ISKH2 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-SAGRE6MUSU7ISKH2 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.160.20.150:6443
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-SAGRE6MUSU7ISKH2 --mask 255.255.255.255 --rsource -j KUBE-SEP-SAGRE6MUSU7ISKH2
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-SAGRE6MUSU7ISKH2
COMMIT
# Completed on Fri Mar 3 13:23:40 2017
# Generated by iptables-save v1.4.21 on Fri Mar 3 13:23:40 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8:452]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]<@p dst -j ACCEPT
COMMIT
# Completed on Fri Mar 3 13:23:40 2017
I suspect you're hitting issue https://github.com/kubernetes/kubeadm/issues/196. You can verify that this is the root cause by manually editing /etc/kubernetes/manifests/kube-apiserver.yaml
on the master and changing the liveness probe:
livenessProbe:
  failureThreshold: 8
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 443 # was 6443
    scheme: HTTPS
@phagunbaya If you do try the above, I would also kill/restart kubelet for it to take effect faster. When I hit this problem myself, kubelet's exponential backoff was making it take forever to try to restart the kube-apiserver pod.
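For readers trying this, the restart sequence might look like the following (a sketch assuming a systemd-managed kubelet and the Docker runtime; the container name filter is illustrative):

```shell
# Restart kubelet so it re-reads the static pod manifests right away
sudo systemctl restart kubelet

# Optionally remove the crashed apiserver container so kubelet recreates it
# immediately instead of waiting out the exponential backoff
sudo docker ps -a -q --filter name=kube-apiserver | xargs -r sudo docker rm -f
```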
Did you try flushing your iptables rules and restarting the kubelet service?
@msavlani Flushing the iptables rules did not help. @pipejakob Thanks! That resolved it.
Also, killing the DNS pod seems to resolve this for me...
I am not entirely sure this has to do with #196; I think there is a race condition elsewhere. I've just hit this in something I'm working on at the moment. I will update if I figure out what causes it, as I seem to have a way of reproducing it reliably.
I set up a single-machine Kubernetes cluster for development and faced the same problem, but modifying the port does not solve it.
Hi @TracyBin, how did you solve this problem in the end?
@jeffchanjunwei It is an iptables problem. Please try the following command:
iptables -P FORWARD ACCEPT
If the command solves your problem, please tell me.
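To check whether this applies to you, you can inspect the chain's current default policy before and after the change (a sketch; requires root):

```shell
# Print only the policy line for the FORWARD chain
sudo iptables -S FORWARD | head -n 1   # "-P FORWARD DROP" indicates the problem

# Switch the default policy to ACCEPT, then re-check
sudo iptables -P FORWARD ACCEPT
sudo iptables -S FORWARD | head -n 1
```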
@TracyBin It doesn't work. The kubedns-amd64:1.9 image still cannot start. Errors as follows:
kubectl describe pod kubedns
<invalid> <invalid> 1 {kubelet k8sminion1} spec.containers{kubedns} Warning Unhealthy Readiness probe failed: Get http://10.233.124.95:8081/readiness: dial tcp 10.233.124.95:8081: getsockopt: connection refused
docker logs kubedns-amd
E0425 02:28:03.129272 1 reflector.go:199] pkg/dns/dns.go:148: Failed to list api.Service: Get https://10.233.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.233.0.1:443: i/o timeout
E0425 02:28:03.234570 1 reflector.go:199] pkg/dns/dns.go:145: Failed to list api.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.233.0.1:443: i/o timeout
@jeffchanjunwei did you solve this problem?
@pineking Yes. It was a network issue that caused the problem.
I got the same issue; my kubedns log:
[root@k8s ~]# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
I0516 07:38:31.041503 1 dns.go:42] version: v1.6.0-alpha.0.680+3872cb93abf948-dirty
I0516 07:38:31.042564 1 server.go:107] Using https://10.254.0.1:443 for kubernetes master, kubernetes API:
I've tried a lot, but none of them worked.
I have found the solution to my problem:
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"a55267932d501b9fbd6d73e5ded47d79b5763ce5", GitTreeState:"clean", BuildDate:"2017-04-14T13:36:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"a55267932d501b9fbd6d73e5ded47d79b5763ce5", GitTreeState:"clean", BuildDate:"2017-04-14T13:36:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
1. First, we should make sure IP forwarding is enabled in the Linux kernel on every node. Just execute the command: sysctl -w net.ipv4.conf.all.forwarding=1
2. Secondly, if your Docker version is >= 1.13, the default FORWARD chain policy is DROP; you should set the default policy of the FORWARD chain to ACCEPT: $ sudo iptables -P FORWARD ACCEPT
3. Then the kube-proxy configuration must pass in:
--cluster-cidr=
P.S. --cluster-cidr string: the CIDR range of pods in the cluster. It is used to bridge traffic coming from outside of the cluster. If not provided, no off-cluster bridging will be performed. Refer to this: https://github.com/kubernetes/kubernetes/issues/36835
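Put together, the three steps above might look like this on each node (a sketch; 10.244.0.0/16 is only a placeholder pod CIDR, substitute your own):

```shell
# 1. Enable IP forwarding in the kernel (add to /etc/sysctl.conf to persist)
sudo sysctl -w net.ipv4.conf.all.forwarding=1

# 2. Docker >= 1.13 changes the default FORWARD policy to DROP; undo that
sudo iptables -P FORWARD ACCEPT

# 3. Start kube-proxy with the cluster's pod CIDR, for example:
#    kube-proxy --cluster-cidr=10.244.0.0/16 ...
```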
Closing this as fixed with v1.6
This is still here on 1.7.3 with Ubuntu 16.04. The exact same problem. I have been trying all the possible solutions, from disabling AppArmor to changing the ports and making sure nothing blocks it... It still doesn't work.
I tried it on a completely fresh droplet from DigitalOcean and it's still the same. Doesn't look like a configuration problem from my side. I just ran the commands as they are in https://medium.com/@SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929
@mhsabbagh, I have the exact same version as yours: 1 master, 3 nodes. The dashboard was set up on node 2 automatically when applying dashboard.yaml, and the dashboard error looks the same as the others.
Using HTTP port: 8443 Using in-cluster config to connect to apiserver Using service account token for csrf signing No request provided. Skipping authorization header Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
I have been searching but still cannot find a solution. I can telnet to 10.96.0.1 on port 443 from the master and any of the nodes.
Are we sure it has been fixed in v1.6?
I also had this problem on Kubernetes v1.7.4, and after I restarted Docker, it was fixed.
Also hitting this on a fairly frequent basis with Kubernetes 1.7 on top of Docker 1.12.6.
Running iptables -P FORWARD ACCEPT
didn't resolve the issue.
@BenHall please open a new issue with relevant details.
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
The routing problem can be solved by flushing iptables.
Thanks @frankruizhi for the info. Worked for me!! (Used docker version >1.13)
I got the same problem when I use kubeadm to init a k8s v1.8 cluster with one master and one node.
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl get pod -n kube-system
NAME                            READY   STATUS             RESTARTS   AGE
etcd-redis                      1/1     Running            0          6h
kube-apiserver-redis            1/1     Running            0          6h
kube-controller-manager-redis   1/1     Running            0          6h
kube-dns-545bc4bfd4-zqv6j       2/3     CrashLoopBackOff   146        6h
kube-flannel-ds-8cphc           1/1     Running            0          6h
kube-flannel-ds-dqsbr           1/1     Running            7          6h
kube-proxy-fjhlf                1/1     Running            0          6h
kube-proxy-j5pwk                1/1     Running            0          6h
kube-scheduler-redis            1/1     Running            0          6h
kubectl logs kube-dns-545bc4bfd4-zqv6j -n kube-system -c kubedns --previous=true
I1015 13:25:06.436183 1 dns.go:48] version: 1.14.4-2-g5584e04
I1015 13:25:06.436763 1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I1015 13:25:06.436807 1 server.go:113] FLAG: --alsologtostderr="false"
I1015 13:25:06.436818 1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I1015 13:25:06.436824 1 server.go:113] FLAG: --config-map=""
I1015 13:25:06.436826 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I1015 13:25:06.436829 1 server.go:113] FLAG: --config-period="10s"
I1015 13:25:06.436833 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I1015 13:25:06.436835 1 server.go:113] FLAG: --dns-port="10053"
I1015 13:25:06.436843 1 server.go:113] FLAG: --domain="cluster.local."
I1015 13:25:06.436848 1 server.go:113] FLAG: --federations=""
I1015 13:25:06.436851 1 server.go:113] FLAG: --healthz-port="8081"
I1015 13:25:06.436854 1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I1015 13:25:06.436856 1 server.go:113] FLAG: --kube-master-url=""
I1015 13:25:06.436861 1 server.go:113] FLAG: --kubecfg-file=""
I1015 13:25:06.436864 1 server.go:113] FLAG: --log-backtrace-at=":0"
I1015 13:25:06.436868 1 server.go:113] FLAG: --log-dir=""
I1015 13:25:06.436874 1 server.go:113] FLAG: --log-flush-frequency="5s"
I1015 13:25:06.436876 1 server.go:113] FLAG: --logtostderr="true"
I1015 13:25:06.436886 1 server.go:113] FLAG: --nameservers=""
I1015 13:25:06.436888 1 server.go:113] FLAG: --stderrthreshold="2"
I1015 13:25:06.436891 1 server.go:113] FLAG: --v="2"
I1015 13:25:06.436893 1 server.go:113] FLAG: --version="false"
I1015 13:25:06.436898 1 server.go:113] FLAG: --vmodule=""
I1015 13:25:06.436994 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I1015 13:25:06.437258 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I1015 13:25:06.437275 1 dns.go:147] Starting endpointsController
I1015 13:25:06.437284 1 dns.go:150] Starting serviceController
I1015 13:25:06.437361 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I1015 13:25:06.437368 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I1015 13:25:06.937453 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
[the "Waiting for services and endpoints to be initialized from apiserver..." line repeats every 500 ms]
E1015 13:25:36.437852 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1015 13:25:36.437865 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[the "Waiting..." line keeps repeating through 13:26:05]
kubectl logs kube-dns-545bc4bfd4-zqv6j -n kube-system -c dnsmasq --previous=true
I1015 13:25:06.494999 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I1015 13:25:06.495117 1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I1015 13:25:06.503314 1 nanny.go:111]
I1015 13:25:06.503330 1 nanny.go:108] dnsmasq[14]: started, version 2.78-security-prerelease cachesize 1000
W1015 13:25:06.503339 1 nanny.go:112] Got EOF from stdout
I1015 13:25:06.503343 1 nanny.go:108] dnsmasq[14]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I1015 13:25:06.503348 1 nanny.go:108] dnsmasq[14]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I1015 13:25:06.503352 1 nanny.go:108] dnsmasq[14]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I1015 13:25:06.503355 1 nanny.go:108] dnsmasq[14]: using nameserver 127.0.0.1#10053 for domain cluster.local
I1015 13:25:06.503358 1 nanny.go:108] dnsmasq[14]: reading /etc/resolv.conf
I1015 13:25:06.503361 1 nanny.go:108] dnsmasq[14]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I1015 13:25:06.503364 1 nanny.go:108] dnsmasq[14]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I1015 13:25:06.503402 1 nanny.go:108] dnsmasq[14]: using nameserver 127.0.0.1#10053 for domain cluster.local
I1015 13:25:06.503405 1 nanny.go:108] dnsmasq[14]: using nameserver 100.100.2.138#53
I1015 13:25:06.503409 1 nanny.go:108] dnsmasq[14]: using nameserver 100.100.2.136#53
I1015 13:25:06.503412 1 nanny.go:108] dnsmasq[14]: read /etc/hosts - 7 addresses
-k
--cache-size=1000
--log-facility=-
--server=/cluster.local/127.0.0.1#10053
--server=/in-addr.arpa/127.0.0.1#10053
--server=/ip6.arpa/127.0.0.1#10053
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Sun, 15 Oct 2017 22:59:16 +0800
Finished: Sun, 15 Oct 2017 23:01:26 +0800
Ready: False
Restart Count: 93
Requests:
cpu: 150m
memory: 20Mi
Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4vgnh (ro)
sidecar:
Container ID: docker://ea065ef8d50b4a0e870782d2dcaa8b38ccefe395677c331944c9760d8f432663
Image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
Image ID: docker://sha256:fed89e8b4248a788655d528d96fe644aff012879c782784cd486ff6894ef89f6
Port: 10054/TCP
Args:
--v=2
--logtostderr
--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
State: Running
Started: Sun, 15 Oct 2017 15:56:51 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 10m
memory: 20Mi
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Environment:
Normal Killing 57m (x82 over 7h) kubelet, docker1 Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 22m (x418 over 7h) kubelet, docker1 Readiness probe failed: Get http://10.96.1.2:8081/readiness: dial tcp 10.96.1.2:8081: getsockopt: connection refused
Warning FailedSync 12m (x2708 over 7h) kubelet, docker1 Error syncing pod
Warning BackOff 2m (x1634 over 7h) kubelet, docker1 Back-off restarting failed container
@WanChengHu Which version and network plugin did you deploy with? Did you use kubeadm?
You could try the following method: modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and configure Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false". Then reload the daemon, and restart the docker and kubelet services.
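The change described above might look like this (a sketch; the path follows kubeadm's default systemd layout, and your existing Environment line may already carry other flags that should be kept):

```shell
# In /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, edit the line:
#   Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

# Then reload systemd and restart the services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
```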
@TracyBin flannel 0.9.0, yes I use kubeadm @frankruizhi let me try
@frankruizhi doesn't work
@WanChengHu Does the pod bind to an IP? Most of the time the reason is iptables or the flannel network.
systemctl disable firewalld
systemctl stop firewalld
iptables -P FORWARD ACCEPT
I have exactly the same problem - had it on K8s v1.8.0 and today I upgrade to v1.8.1 - still the same.
OS: CoreOS 1465.8.0 (with included Docker 1.12.6) K8s: v1.8.1 Etcd: 3.0.17 Flannel: 0.7.1
@TracyBin My k8s cluster is installed on Aliyun, and they tell me that they don't support k8s 1.8. firewalld is disabled, and iptables allows all inbound and outbound traffic.
@WanChengHu Add me on QQ and I'll take a look: 641555100
I am also having this problem. Spent over 2 hours on gitter trying to resolve it with someone. No luck.
@dl00 and others, if you do think you've found an issue, please open a new issue with actual details from your environment. Please don't comment on closed ones.
Running iptables -P FORWARD ACCEPT
on master and nodes solved the problem for me.
I was running flannel.
Which pod network is preferred/works out of the box? I'm running into these same issues, but I have no clue how to fix them. I picked kube-router, btw.
Robert, I don't know how mature kube-router is, have you tried Weave Net?
I set up a k8s cluster using VirtualBox: 1 kube-master, 2 kube-workers.
When googling, there are lots of similar issues; many tickets show as closed, and I tried a lot, but no luck. I tried "sudo iptables -P FORWARD ACCEPT" and "sudo iptables --flush", but neither works for me.
The root cause should be in kube-dns, flannel, or kube-proxy. Can anyone tell exactly what is wrong with them? :-)
kube-dns has 3 components/containers: kubedns, dnsmasq, and sidecar.
```
kube-system   kube-dns-598d7bf7d4-dzbn8   2/3   CrashLoopBackOff   43   10h
kube-system   kube-dns-598d7bf7d4-v99tk   2/3   CrashLoopBackOff   45   10h
kube-system   kube-flannel-ds-mvrt5       1/1   Running            8    20h
kube-system   kube-flannel-ds-vt2w6       1/1   Running            5    20h
kube-system   kube-flannel-ds-xrsq8       1/1   Running            5    20h
kube-system   kube-proxy-jrw6f            1/1   Running            5    21h
kube-system   kube-proxy-mt6mz            1/1   Running            8    21h
kube-system   kube-proxy-wwd95            1/1   Running            5    21h
```
Try using kubectl exec to check each container:
(1) kubedns = always down, with errors in the log.
```
Waiting for services and endpoints to be initialized from apiserver...
reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
```
(2) dnsmasq = ok, but it seems the default /etc/resolv.conf might have an issue. Why does it use my HOST machine's DNS settings? Shouldn't it use "nameserver 10.96.0.10"?
```
dnsmasq[12]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
dnsmasq[12]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
dnsmasq[12]: using nameserver 127.0.0.1#10053 for domain cluster.local
dnsmasq[12]: reading /etc/resolv.conf
dnsmasq[12]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
dnsmasq[12]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
dnsmasq[12]: using nameserver 127.0.0.1#10053 for domain cluster.local
dnsmasq[12]: using nameserver 10.158.54.11#53
dnsmasq[12]: using nameserver 10.158.54.12#53
dnsmasq[12]: using nameserver 10.158.57.11#53
dnsmasq[12]: read /etc/hosts - 7 addresses

/ # cat /etc/resolv.conf
nameserver 10.158.54.11
nameserver 10.158.54.12
nameserver 10.158.57.11
search nokia.com china.nsn-net.net

/ # netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State
tcp        0      0 0.0.0.0:53       0.0.0.0:*        LISTEN
tcp        0      0 :::10053         :::*             LISTEN
tcp        0      0 :::10054         :::*             LISTEN
tcp        0      0 :::10055         :::*             LISTEN
tcp        0      0 :::53            :::*             LISTEN
udp        0      0 0.0.0.0:53       0.0.0.0:*
udp        0      0 0.0.0.0:14494    0.0.0.0:*
udp        0      0 0.0.0.0:42680    0.0.0.0:*
udp        0      0 0.0.0.0:61748    0.0.0.0:*
udp        0      0 :::10053         :::*
udp        0      0 :::53            :::*
```
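On the resolv.conf question above: as far as I understand, kube-dns runs with dnsPolicy: Default, so dnsmasq deliberately inherits the node's /etc/resolv.conf for upstream queries (forwarding to 10.96.0.10 would loop back to itself); it is ordinary pods, with dnsPolicy ClusterFirst, that should see "nameserver 10.96.0.10". A small sketch (the classify helper is hypothetical, just for illustration) that labels each nameserver as cluster DNS or upstream, using the default service CIDR 10.96.0.0/12:

```python
import ipaddress

SERVICE_CIDR = ipaddress.ip_network("10.96.0.0/12")  # kubeadm default --service-cidr

def classify(resolv_conf_text):
    """Label each nameserver line as 'cluster' (inside the service CIDR) or 'upstream'."""
    result = {}
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == "nameserver":
            ip = ipaddress.ip_address(parts[1])
            result[parts[1]] = "cluster" if ip in SERVICE_CIDR else "upstream"
    return result

# The resolv.conf from the dnsmasq container above: all upstream, which is expected.
print(classify("nameserver 10.158.54.11\nnameserver 10.158.54.12\nnameserver 10.158.57.11"))
```

So the dnsmasq output shown here looks normal; the real failure is the kubedns container's timeout reaching 10.96.0.1.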
(3) sidecar = ok, with failures on dnsProbe; this seems NOT a big issue.
```
dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:45259->127.0.0.1:53: read: connection refused
server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:45932->127.0.0.1:53: read: connection refused

~ $ netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State
tcp        0      0 0.0.0.0:53       0.0.0.0:*        LISTEN
tcp        0      0 :::10054         :::*             LISTEN
tcp        0      0 :::53            :::*             LISTEN
udp        0      0 0.0.0.0:39749    0.0.0.0:*
udp        0      0 0.0.0.0:45937    0.0.0.0:*
udp        0      0 0.0.0.0:44462    0.0.0.0:*
udp        0      0 0.0.0.0:18938    0.0.0.0:*
udp        0      0 0.0.0.0:53       0.0.0.0:*
udp        0      0 0.0.0.0:20040    0.0.0.0:*
udp        0      0 :::53            :::*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
```
10.96.0.1:443 is the cluster IP of the kubernetes service, which lives in the "default" namespace. Can kube-dns, in namespace "kube-system", access this service in namespace "default"? I suspect the problem might be here.
```
$ kubectl describe service kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1    # --service-cidr 10.96.0.0/12
Port:              https 443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.56.101:6443
Session Affinity:  ClientIP
Events:            <none>
```
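On the namespace question: cluster IPs are reachable from any namespace; namespaces only scope the DNS names (e.g. kubernetes.default.svc.cluster.local), so kube-dns in kube-system can talk to 10.96.0.1 directly, and that is not the problem. A quick sanity check that the cluster IP really is the first usable address of the configured --service-cidr (values taken from the describe output above):

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.96.0.0/12")   # from kubeadm's --service-cidr
api_cluster_ip = ipaddress.ip_address("10.96.0.1")    # IP shown by kubectl describe

# The kubernetes service is always allocated the first usable address of the range.
assert api_cluster_ip in service_cidr
print(service_cidr.network_address + 1)  # → 10.96.0.1
```

If the timeout persists even though the IP is consistent, the drop is happening in iptables/forwarding on the node, not in service addressing.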
@xiangpengzhao We had an issue where it was a timing-related bug with iptables. Our solution was to upgrade to the latest CNI plugin (in our case Weave).
Same problem here with K8S 1.10.5 and weave 2.3.0.
The problem is solved temporarily thanks to lastboy1228 (https://github.com/kubernetes/kubeadm/issues/193#issuecomment-330060848)
@pineking yes. It is the cause of network that results into the problem.
Hi, How did you solve the problem? I encounter the same issue too.
kubectl delete svc kubernetes
For flannel network add-on to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init
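Worth double-checking alongside that flag: the pod CIDR must not overlap the service CIDR or the node network, or routing breaks in confusing ways. A quick sketch with Python's ipaddress module (the node network here is a hypothetical VirtualBox host-only range, matching the 192.168.56.101 endpoint seen earlier):

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")      # flannel default, via --pod-network-cidr
service_cidr = ipaddress.ip_network("10.96.0.0/12")   # kubeadm default --service-cidr
node_net = ipaddress.ip_network("192.168.56.0/24")    # example VirtualBox host-only network

# All three ranges must be disjoint for routing to work.
for other in (service_cidr, node_net):
    print(pod_cidr.overlaps(other))  # prints False twice
```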
```
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
```
The routing problem can be solved by flushing iptables.
You may need to execute the commands below first to ensure that the default policies are ACCEPT, to avoid being kicked out of your machine when using ssh.
```
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
```
And then you can safely flush your rules:
```
iptables -F
```
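Putting those steps together, a sketch of the safe flush sequence; the RUN=echo dry-run switch is my own convention so you can preview the commands before running them for real as root:

```shell
#!/bin/sh
# Set RUN=echo to preview the commands; leave RUN empty to execute (requires root).
RUN="${RUN:-}"

safe_flush() {
  # Open the default policies first so flushing cannot lock out your ssh session.
  $RUN iptables -P INPUT ACCEPT
  $RUN iptables -P FORWARD ACCEPT
  $RUN iptables -P OUTPUT ACCEPT
  # Now the rules can be flushed safely, including the nat table kube-proxy writes to.
  $RUN iptables -F
  $RUN iptables -t nat -F
}

RUN=echo
safe_flush
```

After the real flush, restarting kubelet and docker lets kube-proxy regenerate its chains.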
If you are using Rancher, you can go to Kubernetes > Infrastructure Stacks, search for the kubernetes pod, and restart it.
@WanChengHu Add me on QQ and I'll take a look: 641555100
Has this problem been solved? I ran into it too. Thanks!
kubedns logs:
```
go/tools/cache/reflector.go:94: Failed to list v1.Service: Get https://10.9.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.9.0.1:443: i/o timeout
E1015 13:25:36.437865 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.9.0.1:443: i/o timeout
I1015 13:25:36.937466 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I1015 13:25:37.437493 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserv
```
kube-apiserver logs