sonyafenge opened this issue 4 years ago
@Hong-Chang I'm able to deploy coredns from the root environment after the above commit. However, @sonyafenge seems to be hitting issues. I'll keep this issue open - can you please investigate?
Thanks
Assigning to Hong-Chang to investigate further whether this is something specific to Sonya's setup. I've had no issues deploying from the root account.
root@fw0000359:~/VARK# ./cluster/kubectl.sh get po -n kube-system -o wide
NAME HASHKEY READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6b9cf74864-cxs5w 2865346723032974159 1/1 Running 0 136m 10.244.0.2 arktos-master <none> <none>
coredns-6b9cf74864-wmdl8 188961164902717807 1/1 Running 0 136m 10.244.0.3 arktos-master <none> <none>
etcd-arktos-master 7560597975935384117 1/1 Running 0 135m 10.245.1.9 arktos-master <none> <none>
kube-apiserver-arktos-master 6854790288953405362 1/1 Running 0 135m 10.245.1.9 arktos-master <none> <none>
kube-controller-manager-arktos-master 8616494171255771005 1/1 Running 0 135m 10.245.1.9 arktos-master <none> <none>
kube-flannel-ds-amd64-8lld9 274934553196256743 1/1 Running 0 136m 10.245.1.124 ip-10-245-1-124 <none> <none>
kube-flannel-ds-amd64-cgvp5 9009903684335923824 1/1 Running 0 136m 10.245.1.178 ip-10-245-1-178 <none> <none>
kube-flannel-ds-amd64-f4l44 4404326152526426238 1/1 Running 0 136m 10.245.1.29 ip-10-245-1-29 <none> <none>
kube-flannel-ds-amd64-fq4c4 6503479497548150307 1/1 Running 0 132m 10.245.1.59 ip-10-245-1-59 <none> <none>
kube-flannel-ds-amd64-jr2tf 1772573833947929466 1/1 Running 0 136m 10.245.1.9 arktos-master <none> <none>
kube-flannel-ds-amd64-s8db2 2733138535560429348 1/1 Running 0 136m 10.245.1.207 ip-10-245-1-207 <none> <none>
kube-proxy-6vmqx 4213153926646826862 1/1 Running 0 132m 10.245.1.59 ip-10-245-1-59 <none> <none>
kube-proxy-7qjzt 201354609747942369 1/1 Running 0 136m 10.245.1.29 ip-10-245-1-29 <none> <none>
kube-proxy-d7c8b 672087166253384014 1/1 Running 0 136m 10.245.1.178 ip-10-245-1-178 <none> <none>
kube-proxy-hvfk4 3466868413094614373 1/1 Running 0 136m 10.245.1.9 arktos-master <none> <none>
kube-proxy-kd57j 6662184434312622413 1/1 Running 0 136m 10.245.1.124 ip-10-245-1-124 <none> <none>
kube-proxy-z4tlz 7080281636753842534 1/1 Running 0 136m 10.245.1.207 ip-10-245-1-207 <none> <none>
kube-scheduler-arktos-master 6534717491455958475 1/1 Running 0 136m 10.245.1.9 arktos-master <none> <none>
workload-controller-manager-arktos-master 760903382302989469 1/1 Running 0 136m 10.245.1.9 arktos-master <none> <none>
root@fw0000359:~/VARK#
What happened:
Vinay suspects it is broken by a change we made to networking that makes node join depend on CNI (not the case with upstream, but that is a guess at this point). The same code with upstream 1.15 starts coreDNS just fine.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
The cluster fails to initialize because coredns keeps restarting.
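A quick way to capture why coredns keeps restarting (a generic sketch using the repo's kubectl wrapper; the k8s-app=kube-dns label and the placeholder pod name are assumptions based on the upstream coredns manifest, not confirmed for Arktos):
# List the coredns pods and their restart counts (label assumed from upstream manifests)
./cluster/kubectl.sh -n kube-system get pods -l k8s-app=kube-dns -o wide
# Show events for a restarting pod (replace the placeholder with the actual pod name)
./cluster/kubectl.sh -n kube-system describe pod coredns-xxxxxxxxxx-xxxxx
# Logs from the previous (crashed) container instance
./cluster/kubectl.sh -n kube-system logs coredns-xxxxxxxxxx-xxxxx --previous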
Anything else we need to know?: https://github.com/futurewei-cloud/arktos/pull/127
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g.: cat /etc/os-release):
- Kernel (e.g. uname -a):
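To narrow down the suspicion above that node join now depends on CNI, a few generic checks can show whether the network plugin came up before coredns was scheduled (standard kubectl and node-level paths; the flannel pod name is taken from the listing earlier in this issue, nothing Arktos-specific is assumed):
# Confirm all nodes reached Ready; a node stuck NotReady usually points at CNI
./cluster/kubectl.sh get nodes -o wide
# Check the flannel pod on the master for CNI setup errors
./cluster/kubectl.sh -n kube-system logs kube-flannel-ds-amd64-jr2tf
# On the node itself, verify the CNI config was written
ls /etc/cni/net.d/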