Closed: zzguang520 closed this issue 3 years ago
/triage support
Hey @zzguang520 thank you for opening this issue! I'm going to close this as a duplicate of https://github.com/kubernetes/minikube/issues/9322, where a fix for this is being tracked. We currently have a PR open as well which should resolve the issue: https://github.com/kubernetes/minikube/pull/9577
Steps to reproduce the issue:
1. minikube start --vm=true --driver=none --kubernetes-version=v1.19.2
2. minikube version
3. minikube addons enable ingress
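For anyone reproducing this, the addon state and the ingress controller pod can be inspected after step 3. This is a general sketch, not part of the original report; on this minikube/Kubernetes version the controller ran in kube-system, and the label selector shown is an assumption that may differ across releases:

```shell
# Check whether the ingress addon reports as enabled
minikube addons list

# Look for the ingress controller pod (namespace and label selector
# are assumptions; they vary between minikube releases)
kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx

# If the pod is not Running, inspect its logs for the failure
kubectl -n kube-system logs -l app.kubernetes.io/name=ingress-nginx --tail=50
```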
Full output of failed command:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

* ==> dmesg <==
* -f, --facility restrict output to defined facilities
* -H, --human human readable output
* -k, --kernel display kernel messages
* -L, --color colorize messages
* -l, --level restrict output to defined levels
* -n, --console-level set level of messages printed to console
* -P, --nopager do not pipe output into a pager
* -r, --raw print the raw message buffer
* -S, --syslog force to use syslog(2) rather than /dev/kmsg
* -s, --buffer-size buffer size to query the kernel ring buffer
* -T, --ctime show human readable timestamp (could be
* inaccurate if you have used SUSPEND/RESUME)
* -t, --notime don't print messages timestamp
* -u, --userspace display userspace messages
* -w, --follow wait for new messages
* -x, --decode decode facility and level to readable string
*
* -h, --help display this help and exit
* -V, --version output version information and exit
*
* Supported log facilities:
* kern - kernel messages
* user - random user-level messages
* mail - mail system
* daemon - system daemons
* auth - security/authorization messages
* syslog - messages generated internally by syslogd
* lpr - line printer subsystem
* news - network news subsystem
*
* Supported log levels (priorities):
* emerg - system is unusable
* alert - action must be taken immediately
* crit - critical conditions
* err - error conditions
* warn - warning conditions
* notice - normal but significant condition
* info - informational
* debug - debug-level messages
*
*
* For more details see dmesg(1).
*
* ==> etcd [07809560c5db] <==
* 2020-09-22 08:34:28.435935 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" " with result "range_response_count:0 size:4" took too long (101.169491ms) to execute
* 2020-09-22 08:34:29.067917 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::extension-apiserver-authentication-reader\" " with result "range_response_count:0 size:5" took too long (139.718623ms) to execute
* 2020-09-22 08:34:34.000774 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" " with result "range_response_count:1 size:199" took too long (108.025665ms) to execute
* 2020-09-22 08:34:40.756038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:34:47.791511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:34:49.442019 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (123.232482ms) to execute
* 2020-09-22 08:34:57.791567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:34:59.130812 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:612" took too long (158.965364ms) to execute
* 2020-09-22 08:35:07.791536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:35:09.464982 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (167.972379ms) to execute
* 2020-09-22 08:35:09.465073 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (145.668414ms) to execute
* 2020-09-22 08:35:17.428061 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (108.914788ms) to execute
* 2020-09-22 08:35:17.791703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:35:27.791627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:35:31.422122 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (103.495748ms) to execute
* 2020-09-22 08:35:37.791798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:35:47.791680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:35:57.791600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:36:07.791720 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:36:09.495033 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (176.537814ms) to execute
* 2020-09-22 08:36:17.791716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:36:19.468230 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (150.030664ms) to execute
* 2020-09-22 08:36:27.791572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:36:37.791730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:36:41.914548 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (145.138013ms) to execute
* 2020-09-22 08:36:47.791580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:36:57.858633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:37:07.882508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:37:17.791666 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:37:26.203791 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (166.643241ms) to execute
* 2020-09-22 08:37:26.203846 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (137.642762ms) to execute
* 2020-09-22 08:37:26.204059 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (167.879667ms) to execute
* 2020-09-22 08:37:27.791593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:37:37.795351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:37:47.791686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:37:57.791714 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:38:07.791682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:38:17.791700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:38:27.791849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:38:37.791523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:38:47.791449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:38:57.791793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:39:07.791666 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:39:17.791522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:39:19.446749 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (128.079564ms) to execute
* 2020-09-22 08:39:27.791595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:39:37.791634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:39:47.791692 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:39:49.457466 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (158.194756ms) to execute
* 2020-09-22 08:39:49.457576 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (139.433529ms) to execute
* 2020-09-22 08:39:57.791706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:40:07.791679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:40:17.791695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:40:27.791686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:40:37.791600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:40:47.791632 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:40:57.791609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:41:07.791492 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:41:17.791513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-09-22 08:41:27.791656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
* 16:41:36 up 42 days, 22:05, 3 users, load average: 0.42, 0.52, 0.49
* Linux oc2542575527.ibm.com 3.10.0-1062.18.1.el7.x86_64 #1 SMP Wed Feb 12 14:08:31 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Red Hat Enterprise Linux Workstation 7.7 (Maipo)"
*
* ==> kube-apiserver [53a7d57e73ec] <==
* I0922 08:34:25.703852 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0922 08:34:25.703865 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
* I0922 08:34:25.703897 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0922 08:34:25.703943 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0922 08:34:25.704135 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
* E0922 08:34:25.704725 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/9.110.168.156, ResourceVersion: 0, AdditionalErrorMsg:
* I0922 08:34:25.803426 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0922 08:34:25.803462 1 shared_informer.go:247] Caches are synced for crd-autoregister
* I0922 08:34:25.803471 1 cache.go:39] Caches are synced for autoregister controller
* I0922 08:34:25.803508 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0922 08:34:25.804026 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
* I0922 08:34:26.702448 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0922 08:34:26.702480 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0922 08:34:26.707737 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0922 08:34:26.770443 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0922 08:34:26.770467 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0922 08:34:28.800773 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0922 08:34:29.071641 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0922 08:34:29.431598 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [9.110.168.156]
* I0922 08:34:29.432805 1 controller.go:606] quota admission added evaluator for: endpoints
* I0922 08:34:29.470521 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0922 08:34:30.128458 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0922 08:34:31.198769 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0922 08:34:31.324624 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0922 08:34:37.243069 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0922 08:34:37.296105 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* I0922 08:34:37.834531 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0922 08:34:54.799516 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:34:54.799576 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:34:54.799593 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:35:26.794609 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:35:26.795214 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:35:26.795242 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:36:11.351713 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:36:11.351772 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:36:11.351787 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:36:42.227429 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:36:42.227479 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:36:42.227495 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:37:23.688179 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:37:23.688248 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:37:23.688266 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:37:55.290241 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:37:55.290299 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:37:55.290316 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:38:36.460220 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:38:36.461301 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:38:36.461322 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:39:06.983204 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:39:06.983262 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:39:06.983277 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:39:42.859157 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:39:42.859219 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:39:42.859242 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:40:25.594981 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:40:25.595037 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:40:25.595054 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0922 08:41:01.466385 1 client.go:360] parsed scheme: "passthrough"
* I0922 08:41:01.466942 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0922 08:41:01.466960 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-controller-manager [c540fc1c7b70] <==
* I0922 08:34:36.491894 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
* I0922 08:34:36.491918 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
* I0922 08:34:36.741645 1 controllermanager.go:549] Started "pv-protection"
* I0922 08:34:36.741717 1 pv_protection_controller.go:83] Starting PV protection controller
* I0922 08:34:36.741734 1 shared_informer.go:240] Waiting for caches to sync for PV protection
* I0922 08:34:36.891515 1 controllermanager.go:549] Started "csrcleaner"
* I0922 08:34:36.891571 1 core.go:240] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
* W0922 08:34:36.891584 1 controllermanager.go:541] Skipping "route"
* I0922 08:34:36.891618 1 cleaner.go:83] Starting CSR cleaner controller
* I0922 08:34:37.041943 1 node_lifecycle_controller.go:77] Sending events to api server
* E0922 08:34:37.042007 1 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
* W0922 08:34:37.042032 1 controllermanager.go:541] Skipping "cloud-node-lifecycle"
* I0922 08:34:37.042581 1 shared_informer.go:240] Waiting for caches to sync for resource quota
* W0922 08:34:37.054106 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="oc2542575527.ibm.com" does not exist
* I0922 08:34:37.091829 1 shared_informer.go:247] Caches are synced for service account
* I0922 08:34:37.091993 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
* I0922 08:34:37.111316 1 shared_informer.go:247] Caches are synced for bootstrap_signer
* I0922 08:34:37.112737 1 shared_informer.go:247] Caches are synced for namespace
* I0922 08:34:37.119764 1 shared_informer.go:247] Caches are synced for expand
* I0922 08:34:37.141833 1 shared_informer.go:247] Caches are synced for PV protection
* I0922 08:34:37.141896 1 shared_informer.go:247] Caches are synced for TTL
* I0922 08:34:37.158344 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
* I0922 08:34:37.177304 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
* I0922 08:34:37.177622 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
* I0922 08:34:37.177824 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
* I0922 08:34:37.178210 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
* E0922 08:34:37.213151 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* I0922 08:34:37.239702 1 shared_informer.go:247] Caches are synced for disruption
* I0922 08:34:37.239719 1 disruption.go:339] Sending events to api server.
* I0922 08:34:37.240436 1 shared_informer.go:247] Caches are synced for HPA
* I0922 08:34:37.240992 1 shared_informer.go:247] Caches are synced for deployment
* I0922 08:34:37.241615 1 shared_informer.go:247] Caches are synced for ReplicationController
* I0922 08:34:37.241858 1 shared_informer.go:247] Caches are synced for GC
* I0922 08:34:37.242039 1 shared_informer.go:247] Caches are synced for job
* I0922 08:34:37.242265 1 shared_informer.go:247] Caches are synced for persistent volume
* I0922 08:34:37.249823 1 shared_informer.go:247] Caches are synced for ReplicaSet
* I0922 08:34:37.254851 1 shared_informer.go:247] Caches are synced for PVC protection
* I0922 08:34:37.262383 1 shared_informer.go:247] Caches are synced for attach detach
* I0922 08:34:37.264271 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
* I0922 08:34:37.287166 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-j7bdn"
* I0922 08:34:37.290924 1 shared_informer.go:247] Caches are synced for stateful set
* I0922 08:34:37.292168 1 shared_informer.go:247] Caches are synced for daemon sets
* I0922 08:34:37.302947 1 shared_informer.go:247] Caches are synced for resource quota
* I0922 08:34:37.314934 1 shared_informer.go:247] Caches are synced for taint
* I0922 08:34:37.315001 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0922 08:34:37.315013 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
* W0922 08:34:37.315090 1 node_lifecycle_controller.go:1044] Missing timestamp for Node oc2542575527.ibm.com. Assuming now as a timestamp.
* I0922 08:34:37.315133 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I0922 08:34:37.315192 1 event.go:291] "Event occurred" object="oc2542575527.ibm.com" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node oc2542575527.ibm.com event: Registered Node oc2542575527.ibm.com in Controller"
* I0922 08:34:37.342022 1 shared_informer.go:247] Caches are synced for endpoint_slice
* I0922 08:34:37.343215 1 shared_informer.go:247] Caches are synced for resource quota
* I0922 08:34:37.368509 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jg4m9"
* I0922 08:34:37.391874 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
* I0922 08:34:37.392258 1 shared_informer.go:247] Caches are synced for endpoint
* I0922 08:34:37.397071 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
* E0922 08:34:37.500308 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4ba37db0-c119-4f35-8fd8-02f2d50ef215", ResourceVersion:"223", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736360471, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00091e000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00091e020)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00091e040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000d7e940), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00091e060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00091e080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", 
"--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00091e0c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0001c0d20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006eb9a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00029e620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", 
TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000e02338)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0006eb9f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* I0922 08:34:37.697290 1 shared_informer.go:247] Caches are synced for garbage collector
* I0922 08:34:37.740598 1 shared_informer.go:247] Caches are synced for garbage collector
* I0922 08:34:37.740626 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0922 08:34:52.315935 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
*
* ==> kube-proxy [8f52419b9c64] <==
* I0922 08:34:40.681888 1 node.go:136] Successfully retrieved node IP: 9.110.168.156
* I0922 08:34:40.681970 1 server_others.go:111] kube-proxy node IP is an IPv4 address (9.110.168.156), assume IPv4 operation
* W0922 08:34:40.799379 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I0922 08:34:40.799653 1 server_others.go:186] Using iptables Proxier.
* W0922 08:34:40.799671 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0922 08:34:40.799678 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0922 08:34:40.800110 1 server.go:650] Version: v1.19.2
* I0922 08:34:40.801202 1 conntrack.go:52] Setting nf_conntrack_max to 262144
* I0922 08:34:40.801591 1 config.go:315] Starting service config controller
* I0922 08:34:40.801619 1 config.go:224] Starting endpoint slice config controller
* I0922 08:34:40.801660 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I0922 08:34:40.801624 1 shared_informer.go:240] Waiting for caches to sync for service config
* I0922 08:34:40.901815 1 shared_informer.go:247] Caches are synced for service config
* I0922 08:34:40.901850 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [eb0b1c0f5ce8] <==
* I0922 08:34:21.934845 1 registry.go:173] Registering SelectorSpread plugin
* I0922 08:34:21.934891 1 registry.go:173] Registering SelectorSpread plugin
* I0922 08:34:22.170991 1 serving.go:331] Generated self-signed cert in-memory
* W0922 08:34:25.726414 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0922 08:34:25.726501 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0922 08:34:25.726540 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0922 08:34:25.726584 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0922 08:34:25.737750 1 registry.go:173] Registering SelectorSpread plugin
* I0922 08:34:25.737764 1 registry.go:173] Registering SelectorSpread plugin
* I0922 08:34:25.740147 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0922 08:34:25.740195 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0922 08:34:25.740861 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I0922 08:34:25.741157 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0922 08:34:25.741814 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0922 08:34:25.742632 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0922 08:34:25.742642 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0922 08:34:25.742642 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0922 08:34:25.742679 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0922 08:34:25.742757 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0922 08:34:25.742806 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0922 08:34:25.742840 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0922 08:34:25.742869 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0922 08:34:25.742879 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E0922 08:34:25.742870 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E0922 08:34:25.742908 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0922 08:34:25.742987 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E0922 08:34:26.567514 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0922 08:34:26.606419 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0922 08:34:26.679672 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E0922 08:34:26.863864 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0922 08:34:26.956130 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E0922 08:34:26.988423 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E0922 08:34:27.027919 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0922 08:34:27.080650 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0922 08:34:27.196696 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0922 08:34:27.217053 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0922 08:34:27.224971 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0922 08:34:27.286717 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0922 08:34:27.337947 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0922 08:34:29.000053 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I0922 08:34:33.440329 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Tue 2020-08-11 02:19:54 CST, end at Tue 2020-09-22 16:41:36 CST. --
* Sep 22 16:40:57 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:57.827384 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:40:57 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:57.827446 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:40:57 oc2542575527.ibm.com kubelet[32008]: E0922 16:40:57.827477 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:40:59 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:59.830444 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:40:59 oc2542575527.ibm.com kubelet[32008]: W0922 16:40:59.830483 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:40:59 oc2542575527.ibm.com kubelet[32008]: E0922 16:40:59.830510 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:01 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:01.833544 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:01 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:01.833584 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:01 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:01.833612 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:03 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:03.831930 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:03 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:03.831982 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:03 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:03.832068 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:05 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:05.831233 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:05 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:05.831278 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:05 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:05.831308 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:07 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:07.831378 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:07 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:07.831432 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:07 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:07.831460 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:09 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:09.828086 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:09 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:09.828124 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:09 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:09.828148 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:11 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:11.834723 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:11 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:11.834772 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:11 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:11.834802 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:13 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:13.825937 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:13 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:13.825977 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:13 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:13.826007 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:15 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:15.838347 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:15 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:15.838388 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:15 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:15.838426 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:17 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:17.825661 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:17 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:17.825700 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:17 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:17.825727 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:19 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:19.832538 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:19 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:19.832580 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:19 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:19.832608 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:21 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:21.827497 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:21 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:21.827551 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:21 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:21.827582 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:23 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:23.835765 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:23 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:23.835818 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:23 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:23.835858 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:25 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:25.826185 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:25 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:25.826227 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:25 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:25.826255 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:27 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:27.832780 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:27 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:27.832827 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:27 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:27.832858 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:29 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:29.827069 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:29 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:29.827129 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:29 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:29.827189 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:31 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:31.831257 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:31 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:31.831284 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:31 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:31.831313 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:33 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:33.822068 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:33 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:33.822100 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:33 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:33.822120 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
* Sep 22 16:41:35 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:35.831895 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:35 oc2542575527.ibm.com kubelet[32008]: W0922 16:41:35.831933 32008 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b52849dd-220f-455d-aec9-85463ff71db8/volumes" does not exist
* Sep 22 16:41:35 oc2542575527.ibm.com kubelet[32008]: E0922 16:41:35.831961 32008 kubelet_volumes.go:154] orphaned pod "b52849dd-220f-455d-aec9-85463ff71db8" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
*
* ==> storage-provisioner [b4c38d2bad28] <==
* I0922 08:34:58.964710 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
* I0922 08:34:58.971041 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I0922 08:34:58.971109 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be0e2d96-5fda-44c6-8c2c-56da1934baf9", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' oc2542575527.ibm.com_dc5554f4-9b91-49c1-9643-d8ed71255bee became leader
* I0922 08:34:58.971180 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_oc2542575527.ibm.com_dc5554f4-9b91-49c1-9643-d8ed71255bee!
* I0922 08:34:59.071487 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_oc2542575527.ibm.com_dc5554f4-9b91-49c1-9643-d8ed71255bee!
```