openshift-metal3 / dev-scripts

Scripts to automate development/test setup for OpenShift integration with https://github.com/metal3-io/
Apache License 2.0

Both masters and workers move to "NotReady" state after ~20 hours of running a cluster (Virt) #499

Closed · gklein closed 5 years ago

gklein commented 5 years ago

Describe the bug
Both masters and workers move to "NotReady" state after ~20 hours of running a cluster (Virt).

To Reproduce
Install a cluster (Virt) with 3 masters and 2 workers, 1 extra disk per machine, then wait ~20 hours.
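
For reference, this is roughly the dev-scripts configuration such a deployment implies. The variable names below (NUM_WORKERS, VM_EXTRADISKS) are assumptions based on dev-scripts conventions and have not been verified against the commit under test:

# hypothetical excerpt from config_$USER.sh
export NUM_WORKERS=2        # dev-scripts deploys 3 masters by default
export VM_EXTRADISKS=true   # attach one extra disk to each VM

Then deploy with make as usual and leave the cluster running for ~20 hours.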

Expected/observed behavior

$ git rev-parse HEAD
312491eac86f2e4acdbec3e86feeb871e9724e80
$ oc get clusterversion
NAME      VERSION                        AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.ci-2019-04-17-133604   True        False         24h     Cluster version is 4.0.0-0.ci-2019-04-17-133604
$ oc get nodes
NAME       STATUS     ROLES    AGE   VERSION
master-0   NotReady   master   24h   v1.13.4+d4ce02c1d
master-1   NotReady   master   24h   v1.13.4+d4ce02c1d
master-2   NotReady   master   24h   v1.13.4+d4ce02c1d
worker-0   NotReady   worker   23h   v1.13.4+d4ce02c1d
worker-1   NotReady   worker   23h   v1.13.4+d4ce02c1d
worker-2   NotReady   worker   23h   v1.13.4+d4ce02c1d
$ oc get csr -A
No resources found.
$ oc get clusteroperators
NAME                                 VERSION                        AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                       4.0.0-0.ci-2019-04-17-133604   True        True          False     24h
cloud-credential                     4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
cluster-autoscaler                   4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
console                              4.0.0-0.ci-2019-04-17-133604   True        True          True      24h
dns                                  4.0.0-0.ci-2019-04-17-133604   False       False         False     120m
image-registry                       4.0.0-0.ci-2019-04-17-133604   False       True          False     169m
ingress                              4.0.0-0.ci-2019-04-17-133604   False       True          False     120m
kube-apiserver                       4.0.0-0.ci-2019-04-17-133604   True        True          False     24h
kube-controller-manager              4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
kube-scheduler                       4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
machine-api                          4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
machine-config                       4.0.0-0.ci-2019-04-17-133604   False       False         True      5h9m
marketplace                          4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
monitoring                           4.0.0-0.ci-2019-04-17-133604   False       True          True      5h27m
network                              4.0.0-0.ci-2019-04-17-133604   True        True          False     24h
node-tuning                          4.0.0-0.ci-2019-04-17-133604   False       False         False     120m
openshift-apiserver                  4.0.0-0.ci-2019-04-17-133604   False       False         False     120m
openshift-controller-manager         4.0.0-0.ci-2019-04-17-133604   False       False         False     117m
openshift-samples                    4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
operator-lifecycle-manager           4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
operator-lifecycle-manager-catalog   4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
service-ca                           4.0.0-0.ci-2019-04-17-133604   True        True          False     24h
service-catalog-apiserver            4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
service-catalog-controller-manager   4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
storage                              4.0.0-0.ci-2019-04-17-133604   True        False         False     24h
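
The empty oc get csr output above is the telling detail: the kubelet logs below show CSR creation itself failing with Unauthorized, so there are no pending requests to approve. For comparison, when CSRs do pile up, the standard (not dev-scripts-specific) way to list and approve them is:

$ oc get csr
$ oc get csr -o name | xargs oc adm certificate approve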

Logs from one of the masters include:

May 02 04:08:24 master-2 hyperkube[1288]: I0502 04:08:24.105153    1288 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
May 02 06:45:11 master-2 hyperkube[1288]: E0502 06:45:11.840309    1288 kubelet_node_status.go:380] Error updating node status, will retry: error getting node "master-2": Unauthorized
May 02 06:45:11 master-2 hyperkube[1288]: E0502 06:45:11.840691    1288 kubelet_node_status.go:367] Unable to update node status: update node status exceeds retry count
May 02 07:05:13 master-2 hyperkube[1288]: E0502 07:05:13.457815    1288 certificate_manager.go:384] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Unauthorized
# systemctl --no-pager -l status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-05-01 08:58:22 UTC; 22h ago
  Process: 1286 ExecStartPre=/bin/rm -f /var/lib/kubelet/cpu_manager_state (code=exited, status=0/SUCCESS)
  Process: 1284 ExecStartPre=/bin/mkdir --parents /etc/kubernetes/manifests (code=exited, status=0/SUCCESS)
 Main PID: 1288 (hyperkube)
    Tasks: 25 (limit: 26213)
   Memory: 201.3M
      CPU: 2h 59min 55.866s
   CGroup: /system.slice/kubelet.service
           └─1288 /usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --rotate-certificates --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --allow-privileged --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_version=4.1,node.openshift.io/os_id=rhcos --minimum-container-ttl-duration=6m0s --client-ca-file=/etc/kubernetes/ca.crt --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --anonymous-auth=false --register-with-taints=node-role.kubernetes.io/master=:NoSchedule

May 02 06:58:35 master-2 hyperkube[1288]: E0502 06:58:35.253069    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Unauthorized
May 02 06:58:36 master-2 hyperkube[1288]: E0502 06:58:36.054093    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 02 06:58:36 master-2 hyperkube[1288]: E0502 06:58:36.252001    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Unauthorized
May 02 06:58:36 master-2 hyperkube[1288]: E0502 06:58:36.651216    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Unauthorized
May 02 06:58:37 master-2 hyperkube[1288]: E0502 06:58:37.056398    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 02 06:58:37 master-2 hyperkube[1288]: E0502 06:58:37.253800    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Unauthorized
May 02 06:58:37 master-2 hyperkube[1288]: E0502 06:58:37.652523    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Unauthorized
May 02 06:58:38 master-2 hyperkube[1288]: E0502 06:58:38.058442    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 02 06:58:38 master-2 hyperkube[1288]: E0502 06:58:38.257304    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Unauthorized
May 02 06:58:38 master-2 hyperkube[1288]: E0502 06:58:38.654058    1288 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Unauthorized
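
The repeated Unauthorized errors are consistent with an expired kubelet client certificate that could not be rotated. A quick check on an affected node, assuming the default kubelet cert-dir used on RHCOS (verify the path on your image):

# openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem

If notAfter is in the past, the node can no longer authenticate to the API server, which matches the "cannot create certificate signing request: Unauthorized" loop above.
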
$ sudo systemctl status fix_certs.timer
● fix_certs.timer - /home/kni/dev-scripts/./fix_certs.sh
   Loaded: loaded (/run/systemd/system/fix_certs.timer; static; vendor preset: disabled)
  Drop-In: /run/systemd/system/fix_certs.timer.d
           └─50-Description.conf, 50-OnActiveSec.conf, 50-OnUnitActiveSec.conf
   Active: active (waiting) since Wed 2019-05-01 12:05:37 IDT; 22h ago

May 01 12:05:37 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
$ sudo journalctl -u fix_certs.timer
-- Logs begin at Wed 2019-04-24 14:53:40 IDT, end at Thu 2019-05-02 10:49:52 IDT. --
Apr 24 16:35:12 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
Apr 28 09:21:31 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
Apr 28 09:40:26 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
Apr 29 10:55:57 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
Apr 29 11:50:41 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
Apr 29 14:04:23 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
Apr 29 14:23:13 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
Apr 30 09:29:02 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
Apr 30 10:02:00 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
Apr 30 14:16:28 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
Apr 30 15:09:40 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
Apr 30 16:21:39 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
May 01 10:39:24 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
May 01 11:39:48 titan96.lab.eng.tlv2.redhat.com systemd[1]: Stopped /home/kni/dev-scripts/./fix_certs.sh.
May 01 12:05:37 titan96.lab.eng.tlv2.redhat.com systemd[1]: Started /home/kni/dev-scripts/./fix_certs.sh.
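
The /run/systemd/system location and the 50-*.conf drop-ins above indicate a transient unit, i.e. a timer created with systemd-run rather than a static unit file. A minimal sketch of how such a timer can be started (the 300-second intervals are illustrative, not the values dev-scripts actually uses):

$ sudo systemd-run --on-active=300 --on-unit-active=300 /home/kni/dev-scripts/fix_certs.sh

The Started/Stopped pairs in the journal presumably correspond to redeploys tearing down and recreating the transient timer.
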
chunfuwen commented 5 years ago

This is my cluster status after running 3 days:

[chwen@dell-pem630-02 dev-scripts]$ oc get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-0   Ready    master   3d8h   v1.13.4+d4ce02c1d
master-1   Ready    master   3d8h   v1.13.4+d4ce02c1d
master-2   Ready    master   3d8h   v1.13.4+d4ce02c1d
worker-0   Ready    worker   3d7h   v1.13.4+d4ce02c1d
[chwen@dell-pem630-02 dev-scripts]$ oc get clusterversion
NAME      VERSION                        AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.ci-2019-04-17-133604   True        False         3d7h    Cluster version is 4.0.0-0.ci-2019-04-17-133604
[chwen@dell-pem630-02 dev-scripts]$ git rev-parse HEAD
42c5d2c5ab59b224284591fedef4d767e66f7b11

gklein commented 5 years ago
# tail /var/log/pods/f6d7dfce-6bef-11e9-962d-8e7be8ca41be/service-serving-cert-signer-controller/0.log
2019-05-01T09:03:44.402467180+00:00 stderr F I0501 09:03:44.402387       1 option.go:47] ServiceServingCertController: handling add openshift-kube-controller-manager/kube-controller-manager
2019-05-01T09:03:44.403563838+00:00 stderr F I0501 09:03:44.403437       1 secret_creating_controller.go:124] generating new cert for openshift-kube-controller-manager/kube-controller-manager
2019-05-01T09:03:44.786892411+00:00 stderr F I0501 09:03:44.786808       1 option.go:47] ServiceServingCertUpdateController: handling add openshift-kube-controller-manager/serving-cert
2019-05-01T09:03:44.798663197+00:00 stderr F I0501 09:03:44.797637       1 option.go:55] ServiceServingCertController: handling update openshift-kube-controller-manager/kube-controller-manager
2019-05-01T09:03:47.671165392+00:00 stderr F I0501 09:03:47.671072       1 option.go:47] ServiceServingCertUpdateController: handling add openshift-kube-scheduler/serving-cert-2
2019-05-01T09:03:56.669922353+00:00 stderr F I0501 09:03:56.669147       1 option.go:47] ServiceServingCertUpdateController: handling add openshift-kube-scheduler/serving-cert-3
2019-05-01T09:04:02.789258907+00:00 stderr F I0501 09:04:02.788728       1 option.go:47] ServiceServingCertUpdateController: handling add openshift-kube-controller-manager/serving-cert-1
2019-05-01T09:04:04.273433651+00:00 stderr F I0501 09:04:04.273353       1 option.go:47] ServiceServingCertUpdateController: handling add openshift-kube-scheduler/serving-cert-4
2019-05-01T09:04:49.073812935+00:00 stderr F I0501 09:04:49.071769       1 option.go:47] ServiceServingCertController: handling add openshift-operator-lifecycle-manager/olm-operators
2019-05-01T09:05:14.988553805+00:00 stderr F I0501 09:05:14.987335       1 option.go:47] ServiceServingCertController: handling add openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com
2019-05-01T09:05:15.406175430+00:00 stderr F I0501 09:05:15.406129       1 option.go:47] ServiceServingCertController: handling add openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com
2019-05-01T09:05:46.093478154+00:00 stderr F I0501 09:05:46.092299       1 leaderelection.go:249] failed to renew lease openshift-service-ca/openshift-service-serving-cert-signer-serving-ca-lock: failed to tryAcquireOrRenew context deadline exceeded
2019-05-01T09:05:46.093617452+00:00 stderr F F0501 09:05:46.093468       1 leaderelection.go:65] leaderelection lost
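
The last two lines matter: losing the leader lease is logged at F (fatal) level, so the signer process exits and its pod is restarted. To see whether the signer has been crash-looping around the time the nodes went NotReady, something like this would help (namespace taken from the lease name in the log above):

$ oc -n openshift-service-ca get pods
$ oc -n openshift-service-ca get events --sort-by=.lastTimestamp | tail -5
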
russellb commented 5 years ago

We’ve now rebased to the 4.1.0 RCs. Please update if you still see this.

gklein commented 5 years ago

I wasn't able to reproduce this issue more than once. I'll reopen it if it appears again.