kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

kube-controller-manager leader election triggers DetachVolume.Detach #104169

Closed · henro001 closed this issue 3 years ago

henro001 commented 3 years ago

What happened:

When the kube-controller-manager process is terminated on the leader node, a new leader election starts. This triggers DetachVolume.Detach for all persistent volumes in the cluster. In our case we use the Cinder CSI driver: the volumes are detached in Cinder, but the VolumeAttachment objects are not deleted or updated, so manual intervention is required to clean up the StatefulSets/VolumeAttachments.
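One way to picture why a fresh leader would start detaching everything (a minimal, hypothetical sketch of attach/detach-style reconciliation — not the actual Kubernetes controller code): a reconciler detaches any volume that is present in its "actual state of world" but absent from its "desired state of world", so a newly elected leader that reconciles against a not-yet-populated desired state sees every attachment as orphaned.

```python
# Minimal sketch of an attach/detach-style reconciler (illustrative only,
# not the real Kubernetes controller): any volume attached in the
# "actual state of world" but missing from the "desired state of world"
# is scheduled for detach.
def volumes_to_detach(actual: set, desired: set) -> list:
    """Return the attachments the reconciler would detach, sorted."""
    return sorted(actual - desired)

attached = {"pvc-a@red-n0", "pvc-b@red-n1", "pvc-c@red-n2"}

# Steady state: desired matches actual, nothing is detached.
assert volumes_to_detach(attached, attached) == []

# A newly elected leader whose desired-state view is still empty would
# treat every attachment as orphaned and detach them all.
print(volumes_to_detach(attached, set()))
# → ['pvc-a@red-n0', 'pvc-b@red-n1', 'pvc-c@red-n2']
```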

What you expected to happen:

The volumes should not be detached.

How to reproduce it (as minimally and precisely as possible):

Terminate kube-controller-manager on the leader node.

```shell
kill -s SIGHUP $(pidof kube-controller-manager)
```
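To confirm the kill actually caused a re-election, one can compare the `holderIdentity` on the `kube-system/kube-controller-manager` Lease before and after. An offline sketch of that comparison (the "before" identity below is a made-up example; the "after" value is taken from the logs in this thread):

```python
# Compare kube-controller-manager Lease holder identities to confirm that
# a new leader was elected. These are plain strings as they would appear
# in lease.spec.holderIdentity; the "before" value is hypothetical.
def leader_changed(before: str, after: str) -> bool:
    return before != after

before = "green-m2_9d0cfc45-0000-4a1b-8888-aaaaaaaaaaaa"  # hypothetical old holder
after = "green-m4_3c8f3404-f0c0-4789-8610-342c381665ac"   # holder seen in the logs
print(leader_changed(before, after))  # → True
```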

Anything else we need to know?:

Cinder CSI driver enabled.

Environment:

k8s-ci-robot commented 3 years ago

@henro001: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
henro001 commented 3 years ago

Logs from the newly elected kube-controller-manager leader:

```
I0805 15:31:30.000521 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="green-m4_3c8f3404-f0c0-4789-8610-342c381665ac became leader"
I0805 15:31:30.000559 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="green-m4_3c8f3404-f0c0-4789-8610-342c381665ac became leader"
I0805 15:31:31.061637 1 request.go:645] Throttling request took 1.04247782s, request: GET:https://127.0.0.1:6443/apis/operators.coreos.com/v1alpha1?timeout=32s
W0805 15:31:32.012528 1 plugins.go:105] WARNING: openstack built-in cloud provider is now deprecated. Please use 'external' cloud provider for openstack: https://github.com/kubernetes/cloud-provider-openstack
I0805 15:31:32.637490 1 shared_informer.go:240] Waiting for caches to sync for tokens
I0805 15:31:32.653973 1 controllermanager.go:549] Started "cronjob"
I0805 15:31:32.654015 1 cronjob_controller.go:96] Starting CronJob Manager
I0805 15:31:32.694087 1 controllermanager.go:549] Started "csrcleaner"
I0805 15:31:32.694333 1 cleaner.go:83] Starting CSR cleaner controller
I0805 15:31:32.719857 1 controllermanager.go:549] Started "ttl"
I0805 15:31:32.720014 1 ttl_controller.go:118] Starting TTL controller
I0805 15:31:32.720032 1 shared_informer.go:240] Waiting for caches to sync for TTL
I0805 15:31:32.735285 1 controllermanager.go:549] Started "tokencleaner"
I0805 15:31:32.735383 1 tokencleaner.go:118] Starting token cleaner controller
I0805 15:31:32.735398 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
I0805 15:31:32.737939 1 shared_informer.go:247] Caches are synced for tokens
I0805 15:31:32.744837 1 controllermanager.go:549] Started "endpoint"
I0805 15:31:32.744992 1 endpoints_controller.go:184] Starting endpoint controller
I0805 15:31:32.745068 1 shared_informer.go:240] Waiting for caches to sync for endpoint
I0805 15:31:32.772842 1 controllermanager.go:549] Started "namespace"
I0805 15:31:32.773099 1 namespace_controller.go:200] Starting namespace controller
I0805 15:31:32.773217 1 shared_informer.go:240] Waiting for caches to sync for namespace
I0805 15:31:32.783918 1 controllermanager.go:549] Started "job"
I0805 15:31:32.783964 1 job_controller.go:148] Starting job controller
I0805 15:31:32.784110 1 shared_informer.go:240] Waiting for caches to sync for job
I0805 15:31:32.796515 1 controllermanager.go:549] Started "statefulset"
I0805 15:31:32.796775 1 stateful_set.go:146] Starting stateful set controller
I0805 15:31:32.796788 1 shared_informer.go:240] Waiting for caches to sync for stateful set
I0805 15:31:32.807770 1 controllermanager.go:549] Started "persistentvolume-expander"
I0805 15:31:32.807954 1 expand_controller.go:303] Starting expand controller
I0805 15:31:32.807987 1 shared_informer.go:240] Waiting for caches to sync for expand
I0805 15:31:32.818392 1 controllermanager.go:549] Started "pv-protection"
I0805 15:31:32.818415 1 pv_protection_controller.go:83] Starting PV protection controller
I0805 15:31:32.818428 1 core.go:240] Will not configure cloud provider routes for allocate-node-cidrs: true, configure-cloud-routes: false.
W0805 15:31:32.818437 1 controllermanager.go:541] Skipping "route"
I0805 15:31:32.818429 1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0805 15:31:32.828680 1 controllermanager.go:549] Started "clusterrole-aggregation"
I0805 15:31:32.828866 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0805 15:31:32.828880 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0805 15:31:32.836702 1 shared_informer.go:247] Caches are synced for token_cleaner
I0805 15:31:32.943004 1 controllermanager.go:549] Started "endpointslice"
I0805 15:31:32.943110 1 endpointslice_controller.go:237] Starting endpoint slice controller
I0805 15:31:32.943120 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0805 15:31:33.093247 1 controllermanager.go:549] Started "deployment"
I0805 15:31:33.093315 1 deployment_controller.go:153] Starting deployment controller
I0805 15:31:33.093324 1 shared_informer.go:240] Waiting for caches to sync for deployment
I0805 15:31:33.392578 1 controllermanager.go:549] Started "disruption"
I0805 15:31:33.392741 1 disruption.go:331] Starting disruption controller
I0805 15:31:33.392760 1 shared_informer.go:240] Waiting for caches to sync for disruption
I0805 15:31:33.542415 1 controllermanager.go:549] Started "csrapproving"
W0805 15:31:33.542547 1 controllermanager.go:541] Skipping "ephemeral-volume"
I0805 15:31:33.542741 1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0805 15:31:33.542783 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I0805 15:31:33.693670 1 controllermanager.go:549] Started "endpointslicemirroring"
I0805 15:31:33.693762 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0805 15:31:33.693771 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0805 15:31:33.842627 1 controllermanager.go:549] Started "podgc"
I0805 15:31:33.842709 1 gc_controller.go:89] Starting GC controller
I0805 15:31:33.842719 1 shared_informer.go:240] Waiting for caches to sync for GC
I0805 15:31:33.992629 1 node_lifecycle_controller.go:380] Sending events to api server.
I0805 15:31:33.992872 1 taint_manager.go:163] Sending events to api server.
I0805 15:31:33.992959 1 node_lifecycle_controller.go:508] Controller will reconcile labels.
I0805 15:31:33.992999 1 controllermanager.go:549] Started "nodelifecycle"
I0805 15:31:33.993050 1 node_lifecycle_controller.go:542] Starting node controller
I0805 15:31:33.993057 1 shared_informer.go:240] Waiting for caches to sync for taint
E0805 15:31:34.142431 1 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers
W0805 15:31:34.142565 1 controllermanager.go:541] Skipping "service"
I0805 15:31:34.442309 1 controllermanager.go:549] Started "garbagecollector"
I0805 15:31:34.442550 1 garbagecollector.go:128] Starting garbage collector controller
I0805 15:31:34.442575 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0805 15:31:34.442613 1 graph_builder.go:282] GraphBuilder running
I0805 15:31:34.593874 1 controllermanager.go:549] Started "pvc-protection"
W0805 15:31:34.593912 1 controllermanager.go:541] Skipping "ttl-after-finished"
I0805 15:31:34.593956 1 pvc_protection_controller.go:110] Starting PVC protection controller
I0805 15:31:34.593970 1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0805 15:31:34.743145 1 controllermanager.go:549] Started "attachdetach"
I0805 15:31:34.743240 1 attach_detach_controller.go:322] Starting attach detach controller
I0805 15:31:34.743250 1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0805 15:31:34.893349 1 controllermanager.go:549] Started "replicationcontroller"
I0805 15:31:34.893460 1 replica_set.go:182] Starting replicationcontroller controller
I0805 15:31:34.893478 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0805 15:31:35.491571 1 controllermanager.go:549] Started "horizontalpodautoscaling"
I0805 15:31:35.491608 1 horizontal.go:169] Starting HPA controller
I0805 15:31:35.491621 1 shared_informer.go:240] Waiting for caches to sync for HPA
I0805 15:31:35.642196 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
I0805 15:31:35.642221 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0805 15:31:35.642245 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/ssl/ca.crt::/etc/kubernetes/ssl/ca.key
I0805 15:31:35.642817 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
I0805 15:31:35.642835 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0805 15:31:35.642859 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/ssl/ca.crt::/etc/kubernetes/ssl/ca.key
I0805 15:31:35.643440 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I0805 15:31:35.643459 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0805 15:31:35.643482 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/ssl/ca.crt::/etc/kubernetes/ssl/ca.key
I0805 15:31:35.643957 1 controllermanager.go:549] Started "csrsigning"
I0805 15:31:35.644134 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I0805 15:31:35.644159 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0805 15:31:35.644180 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/ssl/ca.crt::/etc/kubernetes/ssl/ca.key
I0805 15:31:35.793160 1 node_lifecycle_controller.go:77] Sending events to api server
I0805 15:31:35.793225 1 controllermanager.go:549] Started "cloud-node-lifecycle"
I0805 15:31:37.944021 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0805 15:31:37.944081 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0805 15:31:37.944105 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0805 15:31:37.944185 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deletebackuprequests.velero.io
I0805 15:31:37.944236 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for backupstoragelocations.velero.io
I0805 15:31:37.944260 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for backups.velero.io
I0805 15:31:37.944287 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for configs.config.gatekeeper.sh
I0805 15:31:37.944440 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0805 15:31:37.944490 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0805 15:31:37.944527 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0805 15:31:37.944691 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for alertmanagers.monitoring.coreos.com
I0805 15:31:37.944735 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
I0805 15:31:37.944776 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0805 15:31:37.944816 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for prometheusrules.monitoring.coreos.com
I0805 15:31:37.944842 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podvolumebackups.velero.io
I0805 15:31:37.944932 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for prometheuses.monitoring.coreos.com
I0805 15:31:37.944977 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
I0805 15:31:37.945018 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0805 15:31:37.945065 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0805 15:31:37.945154 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for apmservers.apm.k8s.elastic.co
I0805 15:31:37.945204 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for issuers.cert-manager.io
I0805 15:31:37.945228 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for probes.monitoring.coreos.com
I0805 15:31:37.945257 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for restores.velero.io
I0805 15:31:37.945313 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rabbitmqclusters.rabbitmq.com
I0805 15:31:37.945342 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0805 15:31:37.945438 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for certificaterequests.cert-manager.io
I0805 15:31:37.945559 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for elasticsearches.elasticsearch.k8s.elastic.co
I0805 15:31:37.945622 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podvolumerestores.velero.io
I0805 15:31:37.945649 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for beats.beat.k8s.elastic.co
I0805 15:31:37.945687 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0805 15:31:37.945719 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0805 15:31:37.945760 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
I0805 15:31:37.945782 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for schedules.velero.io
I0805 15:31:37.945803 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for constrainttemplatepodstatuses.status.gatekeeper.sh
I0805 15:31:37.945915 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0805 15:31:37.945954 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for certificates.cert-manager.io
I0805 15:31:37.945978 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
I0805 15:31:37.946003 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for downloadrequests.velero.io
W0805 15:31:37.946046 1 shared_informer.go:494] resyncPeriod 69695759954252 is smaller than resyncCheckPeriod 83782943109866 and the informer has already started. Changing it to 83782943109866
I0805 15:31:37.946278 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0805 15:31:37.946384 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0805 15:31:37.946418 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for thanosrulers.monitoring.coreos.com
I0805 15:31:37.946461 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
I0805 15:31:37.946497 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for etcdrestores.etcd.database.coreos.com
I0805 15:31:37.946523 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for orders.acme.cert-manager.io
I0805 15:31:37.946543 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for kibanas.kibana.k8s.elastic.co
I0805 15:31:37.946568 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for volumesnapshotlocations.velero.io
I0805 15:31:37.946596 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for resticrepositories.velero.io
I0805 15:31:37.946621 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for constraintpodstatuses.status.gatekeeper.sh
I0805 15:31:37.946645 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0805 15:31:37.946692 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podmonitors.monitoring.coreos.com
I0805 15:31:37.946728 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for alertmanagerconfigs.monitoring.coreos.com
I0805 15:31:37.946760 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
I0805 15:31:37.946814 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for servicemonitors.monitoring.coreos.com
I0805 15:31:37.947120 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0805 15:31:37.947193 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0805 15:31:37.947220 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0805 15:31:37.947247 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serverstatusrequests.velero.io
I0805 15:31:37.947270 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0805 15:31:37.947291 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0805 15:31:37.947317 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for challenges.acme.cert-manager.io
I0805 15:31:37.947351 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for etcdbackups.etcd.database.coreos.com
I0805 15:31:37.947364 1 controllermanager.go:549] Started "resourcequota"
I0805 15:31:37.947644 1 resource_quota_controller.go:272] Starting resource quota controller
I0805 15:31:37.947657 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0805 15:31:37.947677 1 resource_quota_monitor.go:303] QuotaMonitor running
I0805 15:31:37.971038 1 controllermanager.go:549] Started "daemonset"
I0805 15:31:37.971118 1 daemon_controller.go:285] Starting daemon sets controller
I0805 15:31:37.971150 1 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0805 15:31:37.981158 1 node_ipam_controller.go:91] Sending events to api server.
I0805 15:31:41.092185 1 request.go:645] Throttling request took 3.048118224s, request: GET:https://127.0.0.1:6443/apis/templates.gatekeeper.sh/v1beta1?timeout=32s
I0805 15:31:47.991218 1 range_allocator.go:82] Sending events to api server.
I0805 15:31:47.991527 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0805 15:31:47.991633 1 controllermanager.go:549] Started "nodeipam"
W0805 15:31:47.991654 1 controllermanager.go:541] Skipping "root-ca-cert-publisher"
I0805 15:31:47.991823 1 node_ipam_controller.go:159] Starting ipam controller
I0805 15:31:47.991844 1 shared_informer.go:240] Waiting for caches to sync for node
I0805 15:31:48.003653 1 controllermanager.go:549] Started "serviceaccount"
I0805 15:31:48.003839 1 serviceaccounts_controller.go:117] Starting service account controller
I0805 15:31:48.003857 1 shared_informer.go:240] Waiting for caches to sync for service account
I0805 15:31:48.014038 1 controllermanager.go:549] Started "replicaset"
I0805 15:31:48.014068 1 replica_set.go:182] Starting replicaset controller
I0805 15:31:48.014291 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0805 15:31:48.025115 1 controllermanager.go:549] Started "bootstrapsigner"
I0805 15:31:48.025339 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
I0805 15:31:48.035360 1 controllermanager.go:549] Started "persistentvolume-binder"
I0805 15:31:48.039154 1 pv_controller_base.go:303] Starting persistent volume controller
I0805 15:31:48.039248 1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0805 15:31:48.043803 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0805 15:31:48.103886 1 shared_informer.go:247] Caches are synced for service account
I0805 15:31:48.108160 1 shared_informer.go:247] Caches are synced for expand
W0805 15:31:48.118233 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="green-m0" does not exist
W0805 15:31:48.118384 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="green-m1" does not exist
I0805 15:31:48.118554 1 shared_informer.go:247] Caches are synced for PV protection
W0805 15:31:48.118647 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="green-m2" does not exist
W0805 15:31:48.118665 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="green-m3" does not exist
W0805 15:31:48.118799 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="green-m4" does not exist
W0805 15:31:48.119139 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="red-n0" does not exist
W0805 15:31:48.119251 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="red-n1" does not exist
W0805 15:31:48.119367 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="red-n2" does not exist
W0805 15:31:48.119382 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="red-n3" does not exist
W0805 15:31:48.119559 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="red-n4" does not exist
W0805 15:31:48.119951 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="red-n5" does not exist
I0805 15:31:48.120161 1 shared_informer.go:247] Caches are synced for TTL
I0805 15:31:48.131525 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0805 15:31:48.142274 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0805 15:31:48.143676 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0805 15:31:48.144972 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0805 15:31:48.145829 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0805 15:31:48.147660 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0805 15:31:48.171276 1 shared_informer.go:247] Caches are synced for daemon sets
I0805 15:31:48.173375 1 shared_informer.go:247] Caches are synced for namespace
I0805 15:31:48.184758 1 shared_informer.go:247] Caches are synced for job
I0805 15:31:48.192216 1 shared_informer.go:247] Caches are synced for HPA
I0805 15:31:48.192353 1 shared_informer.go:247] Caches are synced for node
I0805 15:31:48.192736 1 range_allocator.go:172] Starting range CIDR allocator
I0805 15:31:48.192785 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0805 15:31:48.192818 1 shared_informer.go:247] Caches are synced for cidrallocator
I0805 15:31:48.192879 1 shared_informer.go:247] Caches are synced for disruption
I0805 15:31:48.192896 1 disruption.go:339] Sending events to api server.
I0805 15:31:48.193151 1 shared_informer.go:247] Caches are synced for taint
I0805 15:31:48.193268 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: Toronto:^@:nova
I0805 15:31:48.193273 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0805 15:31:48.193348 1 shared_informer.go:247] Caches are synced for deployment
W0805 15:31:48.193449 1 node_lifecycle_controller.go:1044] Missing timestamp for Node green-m3. Assuming now as a timestamp.
I0805 15:31:48.193959 1 event.go:291] "Event occurred" object="red-n0" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node red-n0 event: Registered Node red-n0 in Controller"
I0805 15:31:48.193981 1 event.go:291] "Event occurred" object="red-n5" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node red-n5 event: Registered Node red-n5 in Controller"
I0805 15:31:48.193992 1 event.go:291] "Event occurred" object="red-n1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node red-n1 event: Registered Node red-n1 in Controller"
I0805 15:31:48.194002 1 event.go:291] "Event occurred" object="red-n2" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node red-n2 event: Registered Node red-n2 in Controller"
I0805 15:31:48.194043 1 event.go:291] "Event occurred" object="red-n3" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node red-n3 event: Registered Node red-n3 in Controller"
I0805 15:31:48.194073 1 event.go:291] "Event occurred" object="green-m3" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node green-m3 event: Registered Node green-m3 in Controller"
I0805 15:31:48.194087 1 event.go:291] "Event occurred" object="green-m4" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node green-m4 event: Registered Node green-m4 in Controller"
W0805 15:31:48.194426 1 node_lifecycle_controller.go:1044] Missing timestamp for Node green-m4. Assuming now as a timestamp.
W0805 15:31:48.194520 1 node_lifecycle_controller.go:1044] Missing timestamp for Node red-n0. Assuming now as a timestamp.
W0805 15:31:48.194600 1 node_lifecycle_controller.go:1044] Missing timestamp for Node red-n1. Assuming now as a timestamp.
W0805 15:31:48.194671 1 node_lifecycle_controller.go:1044] Missing timestamp for Node red-n2. Assuming now as a timestamp.
W0805 15:31:48.194747 1 node_lifecycle_controller.go:1044] Missing timestamp for Node red-n3. Assuming now as a timestamp.
W0805 15:31:48.194784 1 node_lifecycle_controller.go:1044] Missing timestamp for Node red-n5. Assuming now as a timestamp.
W0805 15:31:48.194848 1 node_lifecycle_controller.go:1044] Missing timestamp for Node green-m2. Assuming now as a timestamp.
I0805 15:31:48.194869 1 event.go:291] "Event occurred" object="green-m1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node green-m1 event: Registered Node green-m1 in Controller"
I0805 15:31:48.194887 1 event.go:291] "Event occurred" object="green-m2" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node green-m2 event: Registered Node green-m2 in Controller"
I0805 15:31:48.194900 1 event.go:291] "Event occurred" object="red-n4" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node red-n4 event: Registered Node red-n4 in Controller"
W0805 15:31:48.194926 1 node_lifecycle_controller.go:1044] Missing timestamp for Node green-m1. Assuming now as a timestamp.
W0805 15:31:48.194964 1 node_lifecycle_controller.go:1044] Missing timestamp for Node red-n4. Assuming now as a timestamp.
W0805 15:31:48.195039 1 node_lifecycle_controller.go:1044] Missing timestamp for Node green-m0. Assuming now as a timestamp.
I0805 15:31:48.195126 1 node_lifecycle_controller.go:1245] Controller detected that zone Toronto:^@:nova is now in state Normal.
I0805 15:31:48.195818 1 event.go:291] "Event occurred" object="green-m0" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node green-m0 event: Registered Node green-m0 in Controller"
I0805 15:31:48.202917 1 shared_informer.go:247] Caches are synced for stateful set
I0805 15:31:48.202970 1 shared_informer.go:247] Caches are synced for ReplicationController
I0805 15:31:48.203026 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0805 15:31:48.203714 1 shared_informer.go:247] Caches are synced for PVC protection
I0805 15:31:48.214530 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0805 15:31:48.226319 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0805 15:31:48.242810 1 shared_informer.go:247] Caches are synced for GC
I0805 15:31:48.243204 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0805 15:31:48.243293 1 shared_informer.go:247] Caches are synced for attach detach
W0805 15:31:48.243683 1 plugins.go:731] WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release
I0805 15:31:48.251808 1 shared_informer.go:247] Caches are synced for endpoint
I0805 15:31:48.339383 1 shared_informer.go:247] Caches are synced for persistent volume
I0805 15:31:48.483884 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e4bc8692-fb0d-49a4-99bb-73ec32b43c8c" (UniqueName: "kubernetes.io/cinder/ac0322b4-6606-4c3a-a03b-9e2384830c26") on node "red-n2"
I0805 15:31:48.483957 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-33770124-075e-446a-b6ec-c8ce7c17fe81" (UniqueName: "kubernetes.io/cinder/1b19b480-19ec-4b47-8bea-e6604244102f") on node "red-n1"
I0805 15:31:48.484024 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-dd4e1fad-45bb-4f2e-b007-83a6c1032b2f" (UniqueName: "kubernetes.io/cinder/00fe1e88-26f3-451e-adb4-c2240a0a51d1") on node "red-n0"
I0805 15:31:48.484103 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-8c39bc5d-bd5d-4071-a716-0c56f3accb83" (UniqueName: "kubernetes.io/cinder/8af1a7d1-6735-4b2f-9387-96cf8947d48a") on node "red-n2"
I0805 15:31:48.484164 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-0885a4e9-ffb5-4460-abf3-da0a7f587216" (UniqueName: "kubernetes.io/cinder/5c627c61-3c0e-4bfe-9ffd-66b6a3b315c7") on node "red-n0"
I0805 15:31:48.484192 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f69af7eb-be1b-4826-bff9-5145bb0aa13f" (UniqueName: "kubernetes.io/cinder/9853e7f1-ef9d-4018-b4cc-4201bd87e990") on node "red-n1"
I0805 15:31:48.484214 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f046f3c0-570b-49b2-b8a7-68a6dc8245b2" (UniqueName: "kubernetes.io/cinder/1f1ebf03-910c-40c4-9dad-3d5b134cdfdd") on node "red-n2"
I0805 15:31:48.484236 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-4c8768e9-3b37-4eae-a649-f915947b658d" (UniqueName: "kubernetes.io/cinder/dac208fe-173e-417d-888d-6cda26791120") on node "red-n1"
I0805 15:31:48.484261 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-4560ec09-195a-4ab7-ac04-778ab19a75ad" (UniqueName: "kubernetes.io/cinder/121eeb99-3127-4ae4-9090-28fed86df1e2") on node "red-n2"
I0805 15:31:48.484291 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c9d660b4-0490-4f78-87b1-b8f4c343c178" (UniqueName: "kubernetes.io/cinder/d65f35eb-aa19-4383-b0aa-76767df121a2") on node "red-n1"
I0805 15:31:48.484313 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e5d1be1e-9652-4b30-92e1-e117d3228e98" (UniqueName: "kubernetes.io/cinder/b810a410-fd5e-4994-8624-554d7c34dde9") on node "red-n0"
I0805 15:31:48.484335 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-bb9beb33-041d-4082-8576-4b9354daa825" (UniqueName: "kubernetes.io/cinder/99e6acf6-8d34-467d-b764-5812137dd417") on node "red-n2"
I0805 15:31:48.484364 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-1978b0bc-cf14-4517-9508-adb10086a1e6" (UniqueName: "kubernetes.io/cinder/9211797d-e8d3-432a-82ee-9c73c0f34d63") on node "red-n0"
I0805 15:31:48.484388 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f9953cd9-b1b9-48f9-8bce-7d3d94c31c76" (UniqueName: "kubernetes.io/cinder/a92b3315-d051-4107-8b37-f60ed3aa84d4") on node "red-n1"
I0805 15:31:48.484410 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-6a23160f-fd86-4783-a8c6-79b54eebde2a" (UniqueName: "kubernetes.io/cinder/4dbc5d60-64fe-43ea-9883-19b5f8fe6e9e") on node "red-n1"
I0805 15:31:48.484433 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d22e4a75-54c0-4216-9591-08cfe584dac1" (UniqueName: "kubernetes.io/cinder/d151f75d-c488-4f5d-b1f1-fd46259861fa") on node "red-n1"
I0805 15:31:48.484468 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c2f4f6fd-6477-4385-99bc-49a0c22b3d28" (UniqueName: "kubernetes.io/cinder/8eb4a15c-b4da-4d29-8516-47c5b5256f6e") on node "red-n2"
I0805 15:31:48.484494 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-0c94465f-9c61-4fde-80b2-f15218b630c1" (UniqueName: "kubernetes.io/cinder/111da4b9-9903-415b-b1ad-80b6b4629822") on node "red-n0"
I0805 15:31:48.484521 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-32abd9dd-c9a5-4580-879b-3bdd1ae46e78" (UniqueName: "kubernetes.io/cinder/cffc97d0-a691-4931-a4fa-0963dd0ec430") on node "red-n2"
I0805 15:31:48.493140 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-e5d1be1e-9652-4b30-92e1-e117d3228e98" (UniqueName: "kubernetes.io/cinder/b810a410-fd5e-4994-8624-554d7c34dde9") on node "red-n0"
I0805 15:31:48.493295 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-4c8768e9-3b37-4eae-a649-f915947b658d" (UniqueName: "kubernetes.io/cinder/dac208fe-173e-417d-888d-6cda26791120") on node "red-n1"
I0805 15:31:48.493569 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-6a23160f-fd86-4783-a8c6-79b54eebde2a" (UniqueName: "kubernetes.io/cinder/4dbc5d60-64fe-43ea-9883-19b5f8fe6e9e") on node "red-n1"
I0805 15:31:48.493973 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-33770124-075e-446a-b6ec-c8ce7c17fe81" (UniqueName: "kubernetes.io/cinder/1b19b480-19ec-4b47-8bea-e6604244102f") on node "red-n1"
I0805 15:31:48.494192 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-4560ec09-195a-4ab7-ac04-778ab19a75ad" (UniqueName: "kubernetes.io/cinder/121eeb99-3127-4ae4-9090-28fed86df1e2") on node "red-n2"
I0805 15:31:48.494462 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-0c94465f-9c61-4fde-80b2-f15218b630c1" (UniqueName: "kubernetes.io/cinder/111da4b9-9903-415b-b1ad-80b6b4629822") on node "red-n0"
I0805 15:31:48.494510
```
1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-bb9beb33-041d-4082-8576-4b9354daa825" (UniqueName: "kubernetes.io/cinder/99e6acf6-8d34-467d-b764-5812137dd417") on node "red-n2" I0805 15:31:48.494909 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-c2f4f6fd-6477-4385-99bc-49a0c22b3d28" (UniqueName: "kubernetes.io/cinder/8eb4a15c-b4da-4d29-8516-47c5b5256f6e") on node "red-n2" I0805 15:31:48.495028 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-d22e4a75-54c0-4216-9591-08cfe584dac1" (UniqueName: "kubernetes.io/cinder/d151f75d-c488-4f5d-b1f1-fd46259861fa") on node "red-n1" I0805 15:31:48.495136 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-c9d660b4-0490-4f78-87b1-b8f4c343c178" (UniqueName: "kubernetes.io/cinder/d65f35eb-aa19-4383-b0aa-76767df121a2") on node "red-n1" I0805 15:31:48.495343 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-e4bc8692-fb0d-49a4-99bb-73ec32b43c8c" (UniqueName: "kubernetes.io/cinder/ac0322b4-6606-4c3a-a03b-9e2384830c26") on node "red-n2" I0805 15:31:48.495380 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-dd4e1fad-45bb-4f2e-b007-83a6c1032b2f" (UniqueName: "kubernetes.io/cinder/00fe1e88-26f3-451e-adb4-c2240a0a51d1") on node "red-n0" I0805 15:31:48.495645 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-8c39bc5d-bd5d-4071-a716-0c56f3accb83" (UniqueName: "kubernetes.io/cinder/8af1a7d1-6735-4b2f-9387-96cf8947d48a") on node "red-n2" I0805 15:31:48.495813 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-0885a4e9-ffb5-4460-abf3-da0a7f587216" (UniqueName: "kubernetes.io/cinder/5c627c61-3c0e-4bfe-9ffd-66b6a3b315c7") on node "red-n0" I0805 15:31:48.496016 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-1978b0bc-cf14-4517-9508-adb10086a1e6" (UniqueName: 
"kubernetes.io/cinder/9211797d-e8d3-432a-82ee-9c73c0f34d63") on node "red-n0" I0805 15:31:48.496232 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-32abd9dd-c9a5-4580-879b-3bdd1ae46e78" (UniqueName: "kubernetes.io/cinder/cffc97d0-a691-4931-a4fa-0963dd0ec430") on node "red-n2" I0805 15:31:48.496428 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-f69af7eb-be1b-4826-bff9-5145bb0aa13f" (UniqueName: "kubernetes.io/cinder/9853e7f1-ef9d-4018-b4cc-4201bd87e990") on node "red-n1" I0805 15:31:48.496633 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-f046f3c0-570b-49b2-b8a7-68a6dc8245b2" (UniqueName: "kubernetes.io/cinder/1f1ebf03-910c-40c4-9dad-3d5b134cdfdd") on node "red-n2" I0805 15:31:48.496853 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-f9953cd9-b1b9-48f9-8bce-7d3d94c31c76" (UniqueName: "kubernetes.io/cinder/a92b3315-d051-4107-8b37-f60ed3aa84d4") on node "red-n1" I0805 15:31:48.543922 1 shared_informer.go:247] Caches are synced for resource quota I0805 15:31:48.547821 1 shared_informer.go:247] Caches are synced for resource quota I0805 15:31:50.051638 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0805 15:31:50.142816 1 shared_informer.go:247] Caches are synced for garbage collector I0805 15:31:50.142869 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0805 15:31:50.151785 1 shared_informer.go:247] Caches are synced for garbage collector I0805 15:31:50.718740 1 attacher.go:404] detached volume "121eeb99-3127-4ae4-9090-28fed86df1e2" from node "red-n2" I0805 15:31:50.718796 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-4560ec09-195a-4ab7-ac04-778ab19a75ad" (UniqueName: "kubernetes.io/cinder/121eeb99-3127-4ae4-9090-28fed86df1e2") on node "red-n2" I0805 15:31:50.738637 1 attacher.go:404] detached volume "9211797d-e8d3-432a-82ee-9c73c0f34d63" from node "red-n0" I0805 15:31:50.738684 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-1978b0bc-cf14-4517-9508-adb10086a1e6" (UniqueName: "kubernetes.io/cinder/9211797d-e8d3-432a-82ee-9c73c0f34d63") on node "red-n0" I0805 15:31:50.809453 1 attacher.go:404] detached volume "1b19b480-19ec-4b47-8bea-e6604244102f" from node "red-n1" I0805 15:31:50.809492 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-33770124-075e-446a-b6ec-c8ce7c17fe81" (UniqueName: "kubernetes.io/cinder/1b19b480-19ec-4b47-8bea-e6604244102f") on node "red-n1" I0805 15:31:51.905901 1 request.go:645] Throttling request took 1.000792099s, request: GET:https://127.0.0.1:6443/apis/batch/v1beta1/namespaces/kafka1/cronjobs/test-generator I0805 15:31:55.768841 1 attacher.go:404] detached volume "111da4b9-9903-415b-b1ad-80b6b4629822" from node "red-n0" I0805 15:31:55.769017 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-0c94465f-9c61-4fde-80b2-f15218b630c1" (UniqueName: "kubernetes.io/cinder/111da4b9-9903-415b-b1ad-80b6b4629822") on node "red-n0" I0805 15:31:55.772763 1 attacher.go:404] detached volume "d151f75d-c488-4f5d-b1f1-fd46259861fa" from node "red-n1" I0805 15:31:55.773184 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-d22e4a75-54c0-4216-9591-08cfe584dac1" (UniqueName: "kubernetes.io/cinder/d151f75d-c488-4f5d-b1f1-fd46259861fa") on node 
"red-n1" I0805 15:31:55.773494 1 attacher.go:404] detached volume "8af1a7d1-6735-4b2f-9387-96cf8947d48a" from node "red-n2" I0805 15:31:55.773522 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-8c39bc5d-bd5d-4071-a716-0c56f3accb83" (UniqueName: "kubernetes.io/cinder/8af1a7d1-6735-4b2f-9387-96cf8947d48a") on node "red-n2" I0805 15:31:55.780058 1 attacher.go:404] detached volume "8eb4a15c-b4da-4d29-8516-47c5b5256f6e" from node "red-n2" I0805 15:31:55.780090 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-c2f4f6fd-6477-4385-99bc-49a0c22b3d28" (UniqueName: "kubernetes.io/cinder/8eb4a15c-b4da-4d29-8516-47c5b5256f6e") on node "red-n2" I0805 15:31:55.783036 1 attacher.go:404] detached volume "a92b3315-d051-4107-8b37-f60ed3aa84d4" from node "red-n1" I0805 15:31:55.783080 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-f9953cd9-b1b9-48f9-8bce-7d3d94c31c76" (UniqueName: "kubernetes.io/cinder/a92b3315-d051-4107-8b37-f60ed3aa84d4") on node "red-n1" I0805 15:31:55.786115 1 attacher.go:404] detached volume "ac0322b4-6606-4c3a-a03b-9e2384830c26" from node "red-n2" I0805 15:31:55.786152 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-e4bc8692-fb0d-49a4-99bb-73ec32b43c8c" (UniqueName: "kubernetes.io/cinder/ac0322b4-6606-4c3a-a03b-9e2384830c26") on node "red-n2" I0805 15:31:55.787643 1 attacher.go:404] detached volume "9853e7f1-ef9d-4018-b4cc-4201bd87e990" from node "red-n1" I0805 15:31:55.787674 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-f69af7eb-be1b-4826-bff9-5145bb0aa13f" (UniqueName: "kubernetes.io/cinder/9853e7f1-ef9d-4018-b4cc-4201bd87e990") on node "red-n1" I0805 15:31:55.788001 1 attacher.go:404] detached volume "4dbc5d60-64fe-43ea-9883-19b5f8fe6e9e" from node "red-n1" I0805 15:31:55.788027 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-6a23160f-fd86-4783-a8c6-79b54eebde2a" (UniqueName: 
"kubernetes.io/cinder/4dbc5d60-64fe-43ea-9883-19b5f8fe6e9e") on node "red-n1" I0805 15:31:55.793032 1 attacher.go:404] detached volume "1f1ebf03-910c-40c4-9dad-3d5b134cdfdd" from node "red-n2" I0805 15:31:55.793058 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-f046f3c0-570b-49b2-b8a7-68a6dc8245b2" (UniqueName: "kubernetes.io/cinder/1f1ebf03-910c-40c4-9dad-3d5b134cdfdd") on node "red-n2" I0805 15:31:55.803471 1 attacher.go:404] detached volume "d65f35eb-aa19-4383-b0aa-76767df121a2" from node "red-n1" I0805 15:31:55.803503 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-c9d660b4-0490-4f78-87b1-b8f4c343c178" (UniqueName: "kubernetes.io/cinder/d65f35eb-aa19-4383-b0aa-76767df121a2") on node "red-n1" I0805 15:31:55.824283 1 attacher.go:404] detached volume "00fe1e88-26f3-451e-adb4-c2240a0a51d1" from node "red-n0" I0805 15:31:55.824326 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-dd4e1fad-45bb-4f2e-b007-83a6c1032b2f" (UniqueName: "kubernetes.io/cinder/00fe1e88-26f3-451e-adb4-c2240a0a51d1") on node "red-n0" I0805 15:31:55.829964 1 attacher.go:404] detached volume "dac208fe-173e-417d-888d-6cda26791120" from node "red-n1" I0805 15:31:55.829997 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-4c8768e9-3b37-4eae-a649-f915947b658d" (UniqueName: "kubernetes.io/cinder/dac208fe-173e-417d-888d-6cda26791120") on node "red-n1" I0805 15:31:55.836800 1 attacher.go:404] detached volume "cffc97d0-a691-4931-a4fa-0963dd0ec430" from node "red-n2" I0805 15:31:55.836830 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-32abd9dd-c9a5-4580-879b-3bdd1ae46e78" (UniqueName: "kubernetes.io/cinder/cffc97d0-a691-4931-a4fa-0963dd0ec430") on node "red-n2" I0805 15:31:55.843135 1 attacher.go:404] detached volume "99e6acf6-8d34-467d-b764-5812137dd417" from node "red-n2" I0805 15:31:55.843168 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume 
"pvc-bb9beb33-041d-4082-8576-4b9354daa825" (UniqueName: "kubernetes.io/cinder/99e6acf6-8d34-467d-b764-5812137dd417") on node "red-n2" I0805 15:31:55.910827 1 attacher.go:404] detached volume "b810a410-fd5e-4994-8624-554d7c34dde9" from node "red-n0" I0805 15:31:55.910877 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-e5d1be1e-9652-4b30-92e1-e117d3228e98" (UniqueName: "kubernetes.io/cinder/b810a410-fd5e-4994-8624-554d7c34dde9") on node "red-n0" I0805 15:31:55.935881 1 attacher.go:404] detached volume "5c627c61-3c0e-4bfe-9ffd-66b6a3b315c7" from node "red-n0" I0805 15:31:55.935915 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-0885a4e9-ffb5-4460-abf3-da0a7f587216" (UniqueName: "kubernetes.io/cinder/5c627c61-3c0e-4bfe-9ffd-66b6a3b315c7") on node "red-n0" I0805 15:32:02.896841 1 event.go:291] "Event occurred" object="kafka/test-generator" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job test-generator-1628177520" I0805 15:32:02.905975 1 cronjob_controller.go:190] Unable to update status for kafka/test-generator (rv = 278737996): Operation cannot be fulfilled on cronjobs.batch "test-generator": the object has been modified; please apply your changes to the latest version and try again I0805 15:32:02.921272 1 event.go:291] "Event occurred" object="kafka1/test-generator" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job test-generator-1628177520" I0805 15:32:02.932081 1 event.go:291] "Event occurred" object="kafka/test-generator-1628177520" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-generator-1628177520-bc8qv" I0805 15:32:02.949496 1 cronjob_controller.go:190] Unable to update status for kafka1/test-generator (rv = 278737998): Operation cannot be fulfilled on cronjobs.batch "test-generator": the object has been modified; please apply your changes to the 
latest version and try again I0805 15:32:02.958784 1 event.go:291] "Event occurred" object="kafka1/test-generator-1628177520" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-generator-1628177520-p5cbf" I0805 15:32:04.948823 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-2003b813-73f8-42b2-b9f6-71c72290e53d" (UniqueName: "kubernetes.io/cinder/99990775-76a6-44e7-b543-54d136faaa4f") on node "green-m3" I0805 15:32:04.948888 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5cae518d-e8d8-4a0b-ad4c-004518e50995" (UniqueName: "kubernetes.io/cinder/dd18d4fd-c5db-4f88-bbd1-d28b34d077ce") on node "green-m3" I0805 15:32:04.948968 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e4c7f449-bc22-4d14-ab1e-173be832855e" (UniqueName: "kubernetes.io/cinder/de7dea41-6f35-44fc-9c5a-023df7a8f2b6") on node "green-m3" I0805 15:32:04.948990 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-70619716-e08e-4d26-8d09-a8d84766694c" (UniqueName: "kubernetes.io/cinder/1fcae8f0-0dfb-4918-a534-a0be52f75d95") on node "green-m3" I0805 15:32:04.954207 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-2003b813-73f8-42b2-b9f6-71c72290e53d" (UniqueName: "kubernetes.io/cinder/99990775-76a6-44e7-b543-54d136faaa4f") on node "green-m3" I0805 15:32:04.954253 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-5cae518d-e8d8-4a0b-ad4c-004518e50995" (UniqueName: "kubernetes.io/cinder/dd18d4fd-c5db-4f88-bbd1-d28b34d077ce") on node "green-m3" I0805 15:32:04.954444 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-e4c7f449-bc22-4d14-ab1e-173be832855e" (UniqueName: "kubernetes.io/cinder/de7dea41-6f35-44fc-9c5a-023df7a8f2b6") on node "green-m3" I0805 15:32:04.957793 1 operation_generator.go:1400] Verified volume is safe to detach for volume 
"pvc-70619716-e08e-4d26-8d09-a8d84766694c" (UniqueName: "kubernetes.io/cinder/1fcae8f0-0dfb-4918-a534-a0be52f75d95") on node "green-m3" I0805 15:32:05.550846 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-077d8890-e6f2-41ee-addc-1a181f3f4401" (UniqueName: "kubernetes.io/cinder/d5e3ab6b-4e06-4016-b7b4-59a63130c6df") on node "green-m0" I0805 15:32:05.550964 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e27fae65-4772-4022-b026-11da61f5edb9" (UniqueName: "kubernetes.io/cinder/c486875e-58de-4fbf-a772-ce5fd181bbe9") on node "green-m0" I0805 15:32:05.551038 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-291507ef-3bfb-4e34-a8d8-746cfc2c919c" (UniqueName: "kubernetes.io/cinder/718bfbdc-3f14-495e-8aaa-d0149885af88") on node "green-m0" I0805 15:32:05.556690 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-291507ef-3bfb-4e34-a8d8-746cfc2c919c" (UniqueName: "kubernetes.io/cinder/718bfbdc-3f14-495e-8aaa-d0149885af88") on node "green-m0" I0805 15:32:05.556980 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-077d8890-e6f2-41ee-addc-1a181f3f4401" (UniqueName: "kubernetes.io/cinder/d5e3ab6b-4e06-4016-b7b4-59a63130c6df") on node "green-m0" I0805 15:32:05.557518 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-e27fae65-4772-4022-b026-11da61f5edb9" (UniqueName: "kubernetes.io/cinder/c486875e-58de-4fbf-a772-ce5fd181bbe9") on node "green-m0" I0805 15:32:06.981581 1 attacher.go:404] detached volume "de7dea41-6f35-44fc-9c5a-023df7a8f2b6" from node "green-m3" I0805 15:32:06.981620 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-e4c7f449-bc22-4d14-ab1e-173be832855e" (UniqueName: "kubernetes.io/cinder/de7dea41-6f35-44fc-9c5a-023df7a8f2b6") on node "green-m3" I0805 15:32:07.540083 1 attacher.go:404] detached volume "718bfbdc-3f14-495e-8aaa-d0149885af88" from node "green-m0" I0805 
15:32:07.540124 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-291507ef-3bfb-4e34-a8d8-746cfc2c919c" (UniqueName: "kubernetes.io/cinder/718bfbdc-3f14-495e-8aaa-d0149885af88") on node "green-m0" I0805 15:32:07.813082 1 attacher.go:404] detached volume "d5e3ab6b-4e06-4016-b7b4-59a63130c6df" from node "green-m0" I0805 15:32:07.813117 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-077d8890-e6f2-41ee-addc-1a181f3f4401" (UniqueName: "kubernetes.io/cinder/d5e3ab6b-4e06-4016-b7b4-59a63130c6df") on node "green-m0" I0805 15:32:08.233792 1 attacher.go:404] detached volume "dd18d4fd-c5db-4f88-bbd1-d28b34d077ce" from node "green-m3" I0805 15:32:08.233827 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-5cae518d-e8d8-4a0b-ad4c-004518e50995" (UniqueName: "kubernetes.io/cinder/dd18d4fd-c5db-4f88-bbd1-d28b34d077ce") on node "green-m3" W0805 15:32:08.257463 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kafka-dr/zookeeper-headless", retrying. 
Error: EndpointSlice informer cache is out of date I0805 15:32:08.497621 1 attacher.go:404] detached volume "99990775-76a6-44e7-b543-54d136faaa4f" from node "green-m3" I0805 15:32:08.497657 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-2003b813-73f8-42b2-b9f6-71c72290e53d" (UniqueName: "kubernetes.io/cinder/99990775-76a6-44e7-b543-54d136faaa4f") on node "green-m3" I0805 15:32:09.405312 1 attacher.go:404] detached volume "c486875e-58de-4fbf-a772-ce5fd181bbe9" from node "green-m0" I0805 15:32:09.405349 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-e27fae65-4772-4022-b026-11da61f5edb9" (UniqueName: "kubernetes.io/cinder/c486875e-58de-4fbf-a772-ce5fd181bbe9") on node "green-m0" I0805 15:32:10.076252 1 attacher.go:404] detached volume "1fcae8f0-0dfb-4918-a534-a0be52f75d95" from node "green-m3" I0805 15:32:10.076296 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-70619716-e08e-4d26-8d09-a8d84766694c" (UniqueName: "kubernetes.io/cinder/1fcae8f0-0dfb-4918-a534-a0be52f75d95") on node "green-m3" I0805 15:32:19.594280 1 request.go:645] Throttling request took 1.041877847s, request: GET:https://127.0.0.1:6443/apis/apm.k8s.elastic.co/v1?timeout=32s I0805 15:32:33.339955 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5e2b8bfc-9698-4b66-b4f9-e2d9181df37e" (UniqueName: "kubernetes.io/cinder/0e432a93-3a92-4ce6-a8a9-0c2d557b24bf") on node "green-m4" I0805 15:32:33.344646 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-5e2b8bfc-9698-4b66-b4f9-e2d9181df37e" (UniqueName: "kubernetes.io/cinder/0e432a93-3a92-4ce6-a8a9-0c2d557b24bf") on node "green-m4" I0805 15:32:35.493115 1 attacher.go:404] detached volume "0e432a93-3a92-4ce6-a8a9-0c2d557b24bf" from node "green-m4" I0805 15:32:35.493225 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-5e2b8bfc-9698-4b66-b4f9-e2d9181df37e" (UniqueName: 
"kubernetes.io/cinder/0e432a93-3a92-4ce6-a8a9-0c2d557b24bf") on node "green-m4" I0805 15:32:37.355917 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-4afbd02f-ed8e-4afb-aa31-20b106490d36" (UniqueName: "kubernetes.io/cinder/1fdb5440-26a9-4253-901d-a5511d60fe47") on node "green-m1" I0805 15:32:37.355966 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d175067a-82d3-411e-b169-4a3e359c09d1" (UniqueName: "kubernetes.io/cinder/a6963ad8-fe77-4340-9e84-a2922ed5f8e5") on node "green-m1" I0805 15:32:37.355989 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e5046237-1e02-4f93-b867-eda7a90901d8" (UniqueName: "kubernetes.io/cinder/8a6c2e6d-5d46-4866-b27d-c055369d6538") on node "green-m1" I0805 15:32:37.356011 1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-cd9adeb6-2345-4673-a6cf-e37826bace6b" (UniqueName: "kubernetes.io/cinder/c1e6bbde-8442-4af5-b41f-4a088dc222cb") on node "green-m1" I0805 15:32:37.360030 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-e5046237-1e02-4f93-b867-eda7a90901d8" (UniqueName: "kubernetes.io/cinder/8a6c2e6d-5d46-4866-b27d-c055369d6538") on node "green-m1" I0805 15:32:37.361014 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-cd9adeb6-2345-4673-a6cf-e37826bace6b" (UniqueName: "kubernetes.io/cinder/c1e6bbde-8442-4af5-b41f-4a088dc222cb") on node "green-m1" I0805 15:32:37.361080 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-4afbd02f-ed8e-4afb-aa31-20b106490d36" (UniqueName: "kubernetes.io/cinder/1fdb5440-26a9-4253-901d-a5511d60fe47") on node "green-m1" I0805 15:32:37.364698 1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-d175067a-82d3-411e-b169-4a3e359c09d1" (UniqueName: "kubernetes.io/cinder/a6963ad8-fe77-4340-9e84-a2922ed5f8e5") on node "green-m1"
I0805 15:32:39.414328 1 attacher.go:404] detached volume "c1e6bbde-8442-4af5-b41f-4a088dc222cb" from node "green-m1" I0805 15:32:39.414381 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-cd9adeb6-2345-4673-a6cf-e37826bace6b" (UniqueName: "kubernetes.io/cinder/c1e6bbde-8442-4af5-b41f-4a088dc222cb") on node "green-m1" I0805 15:32:40.945820 1 attacher.go:404] detached volume "1fdb5440-26a9-4253-901d-a5511d60fe47" from node "green-m1" I0805 15:32:40.945857 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-4afbd02f-ed8e-4afb-aa31-20b106490d36" (UniqueName: "kubernetes.io/cinder/1fdb5440-26a9-4253-901d-a5511d60fe47") on node "green-m1" I0805 15:32:40.981374 1 attacher.go:404] detached volume "8a6c2e6d-5d46-4866-b27d-c055369d6538" from node "green-m1" I0805 15:32:40.981405 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-e5046237-1e02-4f93-b867-eda7a90901d8" (UniqueName: "kubernetes.io/cinder/8a6c2e6d-5d46-4866-b27d-c055369d6538") on node "green-m1" I0805 15:32:42.550119 1 attacher.go:404] detached volume "a6963ad8-fe77-4340-9e84-a2922ed5f8e5" from node "green-m1" I0805 15:32:42.550161 1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-d175067a-82d3-411e-b169-4a3e359c09d1" (UniqueName: "kubernetes.io/cinder/a6963ad8-fe77-4340-9e84-a2922ed5f8e5") on node "green-m1"
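The logs above show the attach/detach controller detaching every in-use volume, while (per the report) the corresponding VolumeAttachment objects are left behind. A minimal sketch of how one might inspect and clean up those stale objects, assuming `kubectl` access to the affected cluster; the attachment name in the delete command is a placeholder, not one from this cluster:

```shell
# List VolumeAttachment objects with the PV, node, and attachment status
# they reference, to spot entries whose volumes Cinder reports as detached.
kubectl get volumeattachments \
  -o custom-columns=NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached

# Delete a stale attachment once you have confirmed the backing volume is
# already detached in Cinder ("csi-0123abcd" is a hypothetical name).
kubectl delete volumeattachment csi-0123abcd
```

Deleting the object only removes the stale API record; it does not itself trigger a detach in the storage backend.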

neolit123 commented 3 years ago

/sig storage

henro001 commented 3 years ago

I would like to follow up on this.

henro001 commented 3 years ago

Looks like this is a known issue. Upgrading from 1.20.7 to 1.20.10 resolved it: https://github.com/kubernetes/kubernetes/pull/101737
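Since the resolution here was a patch upgrade, a quick sketch of how one might confirm the kube-controller-manager actually running the fixed version, assuming a kubeadm-style control plane where the component runs as static pods labeled `component=kube-controller-manager`:

```shell
# Report client and server versions; the server should show v1.20.10 or later.
kubectl version --short

# Print each kube-controller-manager pod and the image tag it is running.
kubectl -n kube-system get pods -l component=kube-controller-manager \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```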