cilium / cilium-etcd-operator

Operator to manage Cilium's etcd cluster
Apache License 2.0

cilium-etcd-operator may take a long time to deploy etcd pods after bootstrap #21

Closed joestringer closed 5 years ago

joestringer commented 5 years ago

Seems like when cilium-etcd-operator decides to freshly bootstrap the cluster, it may take several minutes before etcd-operator starts doing anything. Around 21:25 in the logs below, cilium-etcd-operator decided to restart the etcd cluster, but the first subsequent log from etcd-operator is at 21:28, which is around the time that etcd pods began appearing in the cluster.

cilium-etcd-operator logs:

time="2018-12-21T21:25:36Z" level=info msg="No running etcd pod found. Bootstrapping from scratch..."
time="2018-12-21T21:25:36Z" level=info msg="Deploying Cilium etcd cluster CR..."
time="2018-12-21T21:25:36Z" level=info msg=Done
time="2018-12-21T21:25:36Z" level=info msg="Sleeping for 5m0s to allow cluster to come up..."

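The `Sleeping for 5m0s` line above is a fixed delay: once cilium-etcd-operator deploys the CR, it waits the full interval regardless of when the pods actually appear. A minimal sketch of the alternative, polling with a deadline instead of a fixed sleep (the `pods_running` predicate and the interval values here are hypothetical, not taken from cilium-etcd-operator's code):

```python
import time

def wait_for_pods(pods_running, timeout=300.0, interval=2.0):
    """Poll pods_running() until it returns True or the deadline passes.

    Returns True as soon as the predicate succeeds, so a cluster that
    comes up in 30 seconds is not forced to wait out the full 5 minutes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if pods_running():
            return True
        time.sleep(interval)
    return False

# Example: a predicate that succeeds on its third poll.
calls = {"n": 0}
def pods_running():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_pods(pods_running, timeout=10.0, interval=0.01))  # True
```

A poll loop like this only helps with the sleep itself, of course; it does nothing for the case below where etcd-operator cannot reach the apiserver at all.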
etcd-operator logs:

time="2018-12-21T21:12:21Z" level=info msg="Finish reconciling" cluster-name=cilium-etcd pkg=cluster
E1221 21:28:11.122672       1 streamwatcher.go:109] Unable to decode an event from the watch stream: read tcp 10.10.1.56:49026->10.96.0.1:443: read: connection timed out
time="2018-12-21T21:28:11Z" level=error msg="fail to poll pods: failed to list running pods: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Detcd%2Cetcd_cluster%3Dcilium-etcd: read tcp 10.10.1.56:49026->10.96.0.1:443: read: connection timed out" cluster-name=cilium-etcd pkg=cluster
E1221 21:28:11.122667       1 leaderelection.go:234] error retrieving resource lock kube-system/etcd-operator: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/etcd-operator: read tcp 10.10.1.56:49026->10.96.0.1:443: read: connection timed out
time="2018-12-21T21:28:11Z" level=info msg="cluster is deleted by user" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="creating cluster with Spec:" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="{" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    \"size\": 3," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    \"repository\": \"quay.io/coreos/etcd\"," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    \"version\": \"3.3.10\"," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    \"pod\": {" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="        \"labels\": {" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="            \"io.cilium/app\": \"etcd-operator\"" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="        }," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="        \"resources\": {}," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="        \"busyboxImage\": \"docker.io/library/busybox:1.28.0-glibc\"" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    }," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    \"TLS\": {" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="        \"static\": {" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="            \"member\": {" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="                \"peerSecret\": \"cilium-etcd-peer-tls\"," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="                \"serverSecret\": \"cilium-etcd-server-tls\"" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="            }," cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="            \"operatorSecret\": \"cilium-etcd-client-tls\"" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="        }" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="    }" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="}" cluster-name=cilium-etcd pkg=cluster
time="2018-12-21T21:28:11Z" level=info msg="cluster created with seed member (cilium-etcd-x8f4r6plr7)" cluster-name=cilium-etcd pkg=cluster
...

Observed by running the Cilium `Tests upgrade and downgrade from a Cilium stable image to master` ginkgo test locally, with one terminal running `watch kubectl get nodes,svc,pods --all-namespaces -o wide` and another digging through logs to see what the different components were doing.
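For reference, the gaps can be computed directly from the timestamps in the logs above: cilium-etcd-operator deployed the CR at 21:25:36, etcd-operator's last healthy log ("Finish reconciling") is at 21:12:21, and its first log after the gap is at 21:28:11:

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%SZ"
cr_deployed  = datetime.strptime("2018-12-21T21:25:36Z", fmt)  # CR deployed
last_healthy = datetime.strptime("2018-12-21T21:12:21Z", fmt)  # last "Finish reconciling"
reconnect    = datetime.strptime("2018-12-21T21:28:11Z", fmt)  # first log after the gap

print(reconnect - cr_deployed)   # 0:02:35 from CR creation until etcd-operator reacts
print(reconnect - last_healthy)  # 0:15:50 of etcd-operator silence
```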

tgraf commented 5 years ago

Do you have access to the `kubectl describe pod` output for the etcd-operator pod? Maybe the pod was not scheduled because of memory pressure. Can you see when etcd-operator actually started? I've noticed some delay at times, but never 5 minutes.

aanm commented 5 years ago

@joestringer the `E1221 21:28:11.122672 1 streamwatcher.go:109] Unable to decode an event from the watch stream: read tcp 10.10.1.56:49026->10.96.0.1:443: read: connection timed out` error is worrisome, as it means the connection between etcd-operator and kube-apiserver was terminated. In other words, etcd-operator had no connectivity to kube-apiserver between 21:12:21 and 21:28:11, which is why it didn't start working on the cluster.