mnothic closed this issue 5 years ago
Are you sure the api server was up? We have actually seen some flakiness from aks that looks like that but usually it's back pretty quickly
Yes, our AKS clusters are working fine with the HTTP application routing add-on disabled and our custom ingress and ExternalDNS working perfectly.
$ kubectl get pods --all-namespaces
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-56c5c48c4d-fjsps 1/1 Running 1 14d
nginx-ingress-controller-56c5c48c4d-q5qpg 1/1 Running 1 14d
coredns-754f947b4-pfvch 1/1 Running 0 47h
coredns-754f947b4-tjtsp 1/1 Running 0 47h
coredns-autoscaler-6fcdb7d64-ckp6r 1/1 Running 0 47h
external-dns-6c96464564-l5gr7 1/1 Running 0 14d
heapster-5fb7488d97-mpggq 2/2 Running 0 13d
kube-proxy-976np 1/1 Running 1 46h
kube-proxy-ffd5c 1/1 Running 0 13d
kube-proxy-k88zr 1/1 Running 0 13d
kube-proxy-nhgmg 1/1 Running 0 13d
kube-proxy-vff6f 1/1 Running 0 46h
kube-svc-redirect-49bwd 2/2 Running 0 14d
kube-svc-redirect-bpdrl 2/2 Running 0 14d
kube-svc-redirect-qsxlx 2/2 Running 0 46h
kube-svc-redirect-rpktl 2/2 Running 0 14d
kube-svc-redirect-v29zt 2/2 Running 2 46h
kubernetes-dashboard-847bb4ddc6-gnxbm 1/1 Running 0 47h
metrics-server-7b97f9cd9-cj9dt 1/1 Running 1 47h
omsagent-4sb86 0/1 CrashLoopBackOff 3000 13d
omsagent-hwgsk 0/1 CrashLoopBackOff 431 46h
omsagent-j27x9 1/1 Running 471 46h
omsagent-jh7lx 1/1 Running 2559 13d
omsagent-n65gs 1/1 Running 2553 13d
omsagent-rs-6c9ffdd68-p5zj2 0/1 CrashLoopBackOff 3261 13d
tunnelfront-ffd8dc4f8-xgnmr 1/1 Running 0 47h
NOTE: I'm not worried about omsagent, that thing is always crashing :D
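For anyone hitting the same omsagent CrashLoopBackOff, a quick way to inspect it (a generic sketch; substitute your own pod name from the listing above, the `omsagent-rs-...` name here is just an example):

```shell
# Show the log from the previous (crashed) container instance
kubectl logs -n kube-system omsagent-rs-6c9ffdd68-p5zj2 --previous

# Show recent events for the pod (image pulls, probe failures, OOMKills, etc.)
kubectl describe pod -n kube-system omsagent-rs-6c9ffdd68-p5zj2
```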
having same problem with EKS
It was the CNI. I had to change the network plugin from kubenet to azure and it works fine now.
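For reference, the network plugin is chosen at cluster creation time and (at least at the time of this issue) could not be switched in place, so fixing this generally means recreating the cluster. A minimal sketch, assuming a resource group and cluster name of your own (`myResourceGroup` and `myAKSCluster` are placeholders):

```shell
# Create an AKS cluster using Azure CNI instead of the default kubenet
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure
```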
/close
@feiskyer: Closing this issue.