kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

File copy to VM isn't working anymore on Windows #6565

Closed · sraillard closed this issue 4 years ago

sraillard commented 4 years ago

The exact command to reproduce the issue: minikube start

The full output of the command that failed:

No error was reported while minikube was starting. I wasn't able to get verbose/debug log output even when adding --v=7.
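For reference, a sketch of how verbose output is usually obtained from minikube (assumption: with the glog-style flags minikube uses, `-v` alone raises verbosity but the extra output still goes to log files unless it is also mirrored to stderr):

```shell
# Raise log verbosity AND mirror logs to the terminal; with -v=7 alone,
# the verbose output may only land in minikube's log files.
minikube start --alsologtostderr -v=7

# The collected cluster logs shown below come from:
minikube logs
```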

The output of the minikube logs command:

  • ==> Docker <==
  • -- Logs begin at Sun 2020-02-09 15:22:03 UTC, end at Sun 2020-02-09 16:22:02 UTC. --
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.070778300Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.070785000Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.070791200Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.070890400Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.070956400Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071262000Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071297200Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071341400Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071366200Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071375700Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071382500Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071388900Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071395700Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071402500Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071409200Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071415600Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071447700Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071456500Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071463500Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071471100Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071602300Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071658800Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.071691000Z" level=info msg="containerd successfully booted in 0.012932s"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.075872100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.075886200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.075896500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.075901900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.076483100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.076493700Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.076501500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.076506600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102168000Z" level=warning msg="Your kernel does not support cgroup blkio weight"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102198300Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102205500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102209600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102213300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102216800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.102320900Z" level=info msg="Loading containers: start."
  • Feb 09 15:22:15 minikube dockerd[2446]: time="2020-02-09T15:22:15.201520800Z" level=info msg="Loading containers: done."
  • Feb 09 15:22:14 minikube dockerd[2446]: time="2020-02-09T15:22:14.748699386Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
  • Feb 09 15:22:14 minikube dockerd[2446]: time="2020-02-09T15:22:14.748784786Z" level=info msg="Daemon has completed initialization"
  • Feb 09 15:22:14 minikube dockerd[2446]: time="2020-02-09T15:22:14.778764786Z" level=info msg="API listen on /var/run/docker.sock"
  • Feb 09 15:22:14 minikube systemd[1]: Started Docker Application Container Engine.
  • Feb 09 15:22:14 minikube dockerd[2446]: time="2020-02-09T15:22:14.778848486Z" level=info msg="API listen on [::]:2376"
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.100724410Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bbc3d97848af3255e127c03812852285de598b42070b4c9beb5efd0a2f3d7bca/shim.sock" debug=false pid=4055
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.106189577Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bc046d7669fe473893cd4ff2642ce75d3b3b30927a286850e2b64782e5f37b3c/shim.sock" debug=false pid=4069
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.119607549Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5fa4b650ae8ebda642fa076f1d32460ad805b6edc21f8604182d3a99cbdee4c0/shim.sock" debug=false pid=4089
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.124728724Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/16756ddc58412ac11ebe458c09767ebd2dde3d057d5f26bb7b207f4ff4713a3c/shim.sock" debug=false pid=4099
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.303385861Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/22d8af4f8b0024e98da348de02d9989f95315d69f87e84394bb39e2386b905fb/shim.sock" debug=false pid=4334
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.329697819Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/022bcbf4051107ac7cb0f3221b9863e6741b3c8a73fa66e3e212619f52c109db/shim.sock" debug=false pid=4367
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.332848142Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9c2e529b78d8735ce23e8d79379dea483a84f0fff8aabd371e1aecfa16c11504/shim.sock" debug=false pid=4376
  • Feb 09 15:23:18 minikube dockerd[2446]: time="2020-02-09T15:23:18.340200062Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a1faeec091127bad2a535e24688a22b9ff1c308aea19ec85a007247b26220188/shim.sock" debug=false pid=4396
  • Feb 09 15:23:44 minikube dockerd[2446]: time="2020-02-09T15:23:44.934584667Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c20ecdf378a05322fcbd85b850a842790c1b0a358281edaa2f64cfb70c44b03a/shim.sock" debug=false pid=5068
  • Feb 09 15:23:45 minikube dockerd[2446]: time="2020-02-09T15:23:45.105277818Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/02c3afd9bb285a2cb0af8c5211acd573f3130e345c597f10ec7242679d05fd6a/shim.sock" debug=false pid=5114
  • Feb 09 15:23:46 minikube dockerd[2446]: time="2020-02-09T15:23:46.123084899Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/da500db7aad4bfdc9a2f37c74df75383e2effa5f321ea97760fa1dfd73293715/shim.sock" debug=false pid=5242
  • Feb 09 15:23:46 minikube dockerd[2446]: time="2020-02-09T15:23:46.281781063Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/57946b280cca1e3a0ba89ff6d15e5fb0cacd00d846279341935ef9d18710266a/shim.sock" debug=false pid=5282
  • Feb 09 15:23:48 minikube dockerd[2446]: time="2020-02-09T15:23:48.015340971Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e6614a19f9a159f6dca03fdf4690cb9bde4c11dc74c959aec58f7df37ddde3ac/shim.sock" debug=false pid=5355
  • Feb 09 15:23:48 minikube dockerd[2446]: time="2020-02-09T15:23:48.019409757Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f238a2dd6f24245006c1b87d3869ee3c972e82c573065907f8aac69fd3b15eb/shim.sock" debug=false pid=5364
  • Feb 09 15:23:48 minikube dockerd[2446]: time="2020-02-09T15:23:48.279627640Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b694277a7d053b96e0cf21dfdc8e73c3c47902cf0082248271787803e093d97c/shim.sock" debug=false pid=5483
  • Feb 09 15:23:48 minikube dockerd[2446]: time="2020-02-09T15:23:48.289742004Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3f923512a9f8332a90b628a1da8f6d9b30ee6cd082d4625c2ef9b9aef69de11d/shim.sock" debug=false pid=5502
  • ==> container status <==
  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
  • 3f923512a9f83 70f311871ae12 2 minutes ago Running coredns 0 e6614a19f9a15
  • b694277a7d053 70f311871ae12 2 minutes ago Running coredns 0 5f238a2dd6f24
  • 57946b280cca1 4689081edb103 2 minutes ago Running storage-provisioner 0 da500db7aad4b
  • 02c3afd9bb285 cba2a99699bdf 2 minutes ago Running kube-proxy 0 c20ecdf378a05
  • a1faeec091127 303ce5db0e90d 3 minutes ago Running etcd 0 16756ddc58412
  • 9c2e529b78d87 da5fd66c4068c 3 minutes ago Running kube-controller-manager 0 5fa4b650ae8eb
  • 022bcbf405110 41ef50a5f06a7 3 minutes ago Running kube-apiserver 0 bc046d7669fe4
  • 22d8af4f8b002 f52d4c527ef2f 3 minutes ago Running kube-scheduler 0 bbc3d97848af3
  • ==> coredns ["3f923512a9f8"] <==
  • .:53
  • [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
  • CoreDNS-1.6.5
  • linux/amd64, go1.13.4, c2fd1b2
  • ==> coredns ["b694277a7d05"] <==
  • .:53
  • [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
  • CoreDNS-1.6.5
  • linux/amd64, go1.13.4, c2fd1b2
  • ==> dmesg <==
  • [Feb 9 15:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
  • [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
  • [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
  • [ +0.046941] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
  • [ +0.000001] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
  • [ +0.000044] #2 #3
  • [ +0.022317] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
  • [ +0.006675] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, this clock source is slow. Consider trying other clock sources
  • [Feb 9 15:22] Unstable clock detected, switching default tracing clock to "global"
  • If you want to keep using the local clock, then add:
  • "trace_clock=local"
  • on the kernel command line
  • [ +0.000039] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
  • [ +0.408454] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
  • [ +0.680971] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
  • [ +0.002245] systemd-fstab-generator[1249]: Ignoring "noauto" for root device
  • [ +0.002567] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
  • [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
  • [ +1.472188] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
  • [ +0.174629] vboxguest: loading out-of-tree module taints kernel.
  • [ +0.002674] vboxguest: PCI device not found, probably running on physical hardware.
  • [ +8.053422] systemd-fstab-generator[2390]: Ignoring "noauto" for root device
  • [ +0.981549] systemd-fstab-generator[2418]: Ignoring "noauto" for root device
  • [ +40.516646] systemd-fstab-generator[3337]: Ignoring "noauto" for root device
  • [ +0.877860] systemd-fstab-generator[3561]: Ignoring "noauto" for root device
  • [Feb 9 15:23] kauditd_printk_skb: 65 callbacks suppressed
  • [ +6.712206] systemd-fstab-generator[4798]: Ignoring "noauto" for root device
  • [ +21.325389] kauditd_printk_skb: 32 callbacks suppressed
  • [ +7.219938] kauditd_printk_skb: 44 callbacks suppressed
  • [Feb 9 15:24] NFSD: Unable to end grace period: -110
  • ==> kernel <==
  • 15:26:32 up 4 min, 0 users, load average: 0.09, 0.17, 0.08
  • Linux minikube 4.19.88 #1 SMP Tue Feb 4 22:25:03 PST 2020 x86_64 GNU/Linux
  • PRETTY_NAME="Buildroot 2019.02.8"
  • ==> kube-apiserver ["022bcbf40511"] <==
  • W0209 15:23:19.922216 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
  • W0209 15:23:19.928317 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
  • W0209 15:23:19.939588 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
  • W0209 15:23:19.941700 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
  • W0209 15:23:19.949731 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
  • W0209 15:23:19.967312 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
  • W0209 15:23:19.967330 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
  • I0209 15:23:19.973790 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
  • I0209 15:23:19.973815 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
  • I0209 15:23:19.974828 1 client.go:361] parsed scheme: "endpoint"
  • I0209 15:23:19.974855 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0209 15:23:19.979591 1 client.go:361] parsed scheme: "endpoint"
  • I0209 15:23:19.979620 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0209 15:23:21.083733 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
  • I0209 15:23:21.083770 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
  • I0209 15:23:21.083785 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
  • I0209 15:23:21.084157 1 secure_serving.go:178] Serving securely on [::]:8443
  • I0209 15:23:21.084190 1 controller.go:81] Starting OpenAPI AggregationController
  • I0209 15:23:21.084219 1 tlsconfig.go:219] Starting DynamicServingCertificateController
  • I0209 15:23:21.084531 1 crd_finalizer.go:263] Starting CRDFinalizer
  • I0209 15:23:21.084627 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
  • I0209 15:23:21.084669 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
  • I0209 15:23:21.084671 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
  • I0209 15:23:21.084652 1 customresource_discovery_controller.go:208] Starting DiscoveryController
  • I0209 15:23:21.084656 1 naming_controller.go:288] Starting NamingConditionController
  • I0209 15:23:21.084660 1 establishing_controller.go:73] Starting EstablishingController
  • I0209 15:23:21.084665 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
  • I0209 15:23:21.084981 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
  • I0209 15:23:21.085022 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
  • I0209 15:23:21.085089 1 available_controller.go:386] Starting AvailableConditionController
  • I0209 15:23:21.085121 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
  • I0209 15:23:21.085171 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
  • I0209 15:23:21.085211 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
  • I0209 15:23:21.084638 1 controller.go:85] Starting OpenAPI controller
  • I0209 15:23:21.086075 1 autoregister_controller.go:140] Starting autoregister controller
  • I0209 15:23:21.086084 1 cache.go:32] Waiting for caches to sync for autoregister controller
  • E0209 15:23:21.091691 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.18.109.141, ResourceVersion: 0, AdditionalErrorMsg:
  • I0209 15:23:21.109117 1 crdregistration_controller.go:111] Starting crd-autoregister controller
  • I0209 15:23:21.109126 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
  • I0209 15:23:21.185737 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
  • I0209 15:23:21.186181 1 cache.go:39] Caches are synced for AvailableConditionController controller
  • I0209 15:23:21.186236 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
  • I0209 15:23:21.193358 1 cache.go:39] Caches are synced for autoregister controller
  • I0209 15:23:21.210070 1 shared_informer.go:204] Caches are synced for crd-autoregister
  • I0209 15:23:22.083884 1 controller.go:107] OpenAPI AggregationController: Processing item
  • I0209 15:23:22.083908 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
  • I0209 15:23:22.083968 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
  • I0209 15:23:22.086972 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
  • I0209 15:23:22.090176 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
  • I0209 15:23:22.090198 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
  • I0209 15:23:22.349721 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
  • I0209 15:23:22.377055 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
  • W0209 15:23:22.473430 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.18.109.141]
  • I0209 15:23:22.473762 1 controller.go:606] quota admission added evaluator for: endpoints
  • I0209 15:23:23.225992 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
  • I0209 15:23:23.958883 1 controller.go:606] quota admission added evaluator for: serviceaccounts
  • I0209 15:23:23.967803 1 controller.go:606] quota admission added evaluator for: deployments.apps
  • I0209 15:23:24.235451 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
  • I0209 15:23:32.309375 1 controller.go:606] quota admission added evaluator for: replicasets.apps
  • I0209 15:23:32.348526 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
  • ==> kube-controller-manager ["9c2e529b78d8"] <==
  • I0209 15:23:30.499920 1 resource_quota_monitor.go:303] QuotaMonitor running
  • I0209 15:23:30.507894 1 controllermanager.go:533] Started "deployment"
  • W0209 15:23:30.507919 1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
  • I0209 15:23:30.507949 1 deployment_controller.go:152] Starting deployment controller
  • I0209 15:23:30.507953 1 shared_informer.go:197] Waiting for caches to sync for deployment
  • I0209 15:23:31.199195 1 controllermanager.go:533] Started "horizontalpodautoscaling"
  • I0209 15:23:31.199279 1 horizontal.go:156] Starting HPA controller
  • I0209 15:23:31.199288 1 shared_informer.go:197] Waiting for caches to sync for HPA
  • I0209 15:23:31.449726 1 controllermanager.go:533] Started "ttl"
  • I0209 15:23:31.449808 1 ttl_controller.go:116] Starting TTL controller
  • I0209 15:23:31.449818 1 shared_informer.go:197] Waiting for caches to sync for TTL
  • I0209 15:23:31.698448 1 controllermanager.go:533] Started "bootstrapsigner"
  • I0209 15:23:31.698489 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
  • I0209 15:23:31.952827 1 controllermanager.go:533] Started "serviceaccount"
  • W0209 15:23:31.952953 1 controllermanager.go:525] Skipping "nodeipam"
  • I0209 15:23:31.953498 1 serviceaccounts_controller.go:116] Starting service account controller
  • I0209 15:23:31.953561 1 shared_informer.go:197] Waiting for caches to sync for service account
  • I0209 15:23:31.953685 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
  • W0209 15:23:31.960056 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
  • I0209 15:23:31.998629 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
  • I0209 15:23:31.998633 1 shared_informer.go:204] Caches are synced for bootstrap_signer
  • I0209 15:23:31.999164 1 shared_informer.go:204] Caches are synced for expand
  • I0209 15:23:32.030788 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
  • I0209 15:23:32.048893 1 shared_informer.go:204] Caches are synced for PV protection
  • I0209 15:23:32.050507 1 shared_informer.go:204] Caches are synced for TTL
  • I0209 15:23:32.053741 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
  • I0209 15:23:32.252423 1 shared_informer.go:197] Waiting for caches to sync for resource quota
  • I0209 15:23:32.267774 1 shared_informer.go:204] Caches are synced for PVC protection
  • I0209 15:23:32.298870 1 shared_informer.go:204] Caches are synced for taint
  • I0209 15:23:32.298870 1 shared_informer.go:204] Caches are synced for ReplicaSet
  • I0209 15:23:32.298939 1 taint_manager.go:186] Starting NoExecuteTaintManager
  • I0209 15:23:32.298958 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
  • W0209 15:23:32.299011 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
  • I0209 15:23:32.299015 1 shared_informer.go:204] Caches are synced for attach detach
  • I0209 15:23:32.299075 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
  • I0209 15:23:32.299177 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"0bd76262-05b0-4350-be09-cb6fe95d5917", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
  • I0209 15:23:32.299255 1 shared_informer.go:204] Caches are synced for endpoint
  • I0209 15:23:32.301874 1 shared_informer.go:204] Caches are synced for disruption
  • I0209 15:23:32.301888 1 disruption.go:338] Sending events to api server.
  • I0209 15:23:32.303116 1 shared_informer.go:204] Caches are synced for ReplicationController
  • I0209 15:23:32.308185 1 shared_informer.go:204] Caches are synced for deployment
  • I0209 15:23:32.308215 1 shared_informer.go:204] Caches are synced for job
  • I0209 15:23:32.311175 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"329e20c5-e656-4310-b781-1aba8093fb09", APIVersion:"apps/v1", ResourceVersion:"180", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
  • I0209 15:23:32.321513 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"1c98f2f1-7906-4c98-8fd0-51f1ef943e02", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-wprg2
  • I0209 15:23:32.329100 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"1c98f2f1-7906-4c98-8fd0-51f1ef943e02", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-2cfq5
  • I0209 15:23:32.346845 1 shared_informer.go:204] Caches are synced for daemon sets
  • I0209 15:23:32.348899 1 shared_informer.go:204] Caches are synced for stateful set
  • I0209 15:23:32.349483 1 shared_informer.go:204] Caches are synced for GC
  • I0209 15:23:32.351331 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"7610c387-048f-471f-bb42-e89e44c143c0", APIVersion:"apps/v1", ResourceVersion:"185", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-tkmbk
  • I0209 15:23:32.357681 1 shared_informer.go:204] Caches are synced for persistent volume
  • E0209 15:23:32.364751 1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"7610c387-048f-471f-bb42-e89e44c143c0", ResourceVersion:"185", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716858604, loc:(time.Location)(0x6b971e0)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(v1.LabelSelector)(0xc00046c880), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(nil), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(0xc00094f3c0), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc00046ca00), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), 
ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc00046cbe0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(v1.EnvVarSource)(0xc00046d0c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(nil), StartupProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc000ba0730), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(int64)(0xc000528dd8), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(bool)(nil), SecurityContext:(v1.PodSecurityContext)(0xc000efe960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(int32)(nil), DNSConfig:(v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(string)(nil), 
EnableServiceLinks:(bool)(nil), PreemptionPolicy:(v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(v1.RollingUpdateDaemonSet)(0xc00000e6c0)}, MinReadySeconds:0, RevisionHistoryLimit:(int32)(0xc000528e58)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
  • I0209 15:23:32.399450 1 shared_informer.go:204] Caches are synced for HPA
  • I0209 15:23:32.552906 1 shared_informer.go:204] Caches are synced for resource quota
  • I0209 15:23:32.552931 1 shared_informer.go:204] Caches are synced for namespace
  • I0209 15:23:32.553917 1 shared_informer.go:204] Caches are synced for service account
  • I0209 15:23:32.553990 1 shared_informer.go:204] Caches are synced for garbage collector
  • I0209 15:23:32.597549 1 shared_informer.go:204] Caches are synced for garbage collector
  • I0209 15:23:32.597595 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
  • I0209 15:23:32.600066 1 shared_informer.go:204] Caches are synced for resource quota
  • I0209 15:23:47.300428 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
  • ==> kube-proxy ["02c3afd9bb28"] <==
  • W0209 15:23:45.210525 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
  • I0209 15:23:45.214518 1 node.go:135] Successfully retrieved node IP: 172.18.109.141
  • I0209 15:23:45.214545 1 server_others.go:145] Using iptables Proxier.
  • W0209 15:23:45.214634 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
  • I0209 15:23:45.214787 1 server.go:571] Version: v1.17.2
  • I0209 15:23:45.214994 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
  • I0209 15:23:45.215013 1 conntrack.go:52] Setting nf_conntrack_max to 131072
  • I0209 15:23:45.215440 1 conntrack.go:83] Setting conntrack hashsize to 32768
  • I0209 15:23:45.221130 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
  • I0209 15:23:45.221190 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
  • I0209 15:23:45.221287 1 config.go:313] Starting service config controller
  • I0209 15:23:45.221293 1 shared_informer.go:197] Waiting for caches to sync for service config
  • I0209 15:23:45.221314 1 config.go:131] Starting endpoints config controller
  • I0209 15:23:45.221335 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
  • I0209 15:23:45.321702 1 shared_informer.go:204] Caches are synced for endpoints config
  • I0209 15:23:45.321705 1 shared_informer.go:204] Caches are synced for service config
  • ==> kube-scheduler ["22d8af4f8b00"] <==
  • I0209 15:23:18.670592 1 serving.go:312] Generated self-signed cert in-memory
  • W0209 15:23:18.772671 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
  • W0209 15:23:18.772931 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
  • W0209 15:23:21.112588 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
  • W0209 15:23:21.112729 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
  • W0209 15:23:21.112783 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
  • W0209 15:23:21.112845 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
  • W0209 15:23:21.120507 1 authorization.go:47] Authorization is disabled
  • W0209 15:23:21.120550 1 authentication.go:92] Authentication is disabled
  • I0209 15:23:21.120569 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
  • I0209 15:23:21.121504 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  • I0209 15:23:21.121514 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  • I0209 15:23:21.121674 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
  • I0209 15:23:21.121719 1 tlsconfig.go:219] Starting DynamicServingCertificateController
  • E0209 15:23:21.123202 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0209 15:23:21.123205 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
  • E0209 15:23:21.123257 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • E0209 15:23:21.123279 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0209 15:23:21.123292 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0209 15:23:21.123335 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0209 15:23:21.123356 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • E0209 15:23:21.123406 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • E0209 15:23:21.123412 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
  • E0209 15:23:21.123445 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0209 15:23:21.123479 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0209 15:23:21.123508 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • E0209 15:23:22.123944 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0209 15:23:22.125658 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
  • E0209 15:23:22.126717 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • E0209 15:23:22.127905 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0209 15:23:22.129170 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0209 15:23:22.130373 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0209 15:23:22.131346 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • E0209 15:23:22.132586 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • E0209 15:23:22.133856 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
  • E0209 15:23:22.134929 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0209 15:23:22.135943 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0209 15:23:22.136970 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • I0209 15:23:23.221808 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  • I0209 15:23:23.221910 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
  • I0209 15:23:23.227348 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
  • E0209 15:23:32.336961 1 factory.go:494] pod is already present in the activeQ
  • ==> kubelet <==
  • -- Logs begin at Sun 2020-02-09 15:22:03 UTC, end at Sun 2020-02-09 16:22:02 UTC. --
  • Feb 09 15:23:24 minikube kubelet[4807]: W0209 15:23:24.059376 4807 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.060087 4807 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.068436 4807 docker_service.go:260] Docker Info: &{ID:WGJF:QQZY:GQVU:4QER:IZUQ:5XPQ:D4JK:DXTO:LBGI:575C:U3YU:3SMB Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2020-02-09T15:23:24.060737784Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.19.88 OperatingSystem:Buildroot 2019.02.8 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00072e070 NCPU:4 MemTotal:4131684352 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:19.03.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b34a5c8af56e510852c35414db4c1f4fa6172339 Expected:b34a5c8af56e510852c35414db4c1f4fa6172339} RuncCommit:{ID:d736ef14f0288d6993a1845745d6756cfc9ddd5a Expected:d736ef14f0288d6993a1845745d6756cfc9ddd5a} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.068498 4807 docker_service.go:273] Setting cgroupDriver to cgroupfs
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075778 4807 remote_runtime.go:59] parsed scheme: ""
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075802 4807 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075818 4807 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075823 4807 clientconn.go:577] ClientConn switching balancer to "pick_first"
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075854 4807 remote_image.go:50] parsed scheme: ""
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075860 4807 remote_image.go:50] scheme "" not registered, fallback to default scheme
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075866 4807 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
  • Feb 09 15:23:24 minikube kubelet[4807]: I0209 15:23:24.075870 4807 clientconn.go:577] ClientConn switching balancer to "pick_first"
  • Feb 09 15:23:44 minikube kubelet[4807]: E0209 15:23:44.383561 4807 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
  • Feb 09 15:23:44 minikube kubelet[4807]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.392129 4807 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.5, apiVersion: 1.40.0
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.400992 4807 server.go:1113] Started kubelet
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.401041 4807 server.go:143] Starting to listen on 0.0.0.0:10250
  • Feb 09 15:23:44 minikube kubelet[4807]: E0209 15:23:44.401049 4807 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.401546 4807 server.go:354] Adding debug handlers to kubelet server.
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.401826 4807 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.403047 4807 volume_manager.go:265] Starting Kubelet Volume Manager
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.403180 4807 desired_state_of_world_populator.go:138] Desired state populator starts to run
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.410733 4807 status_manager.go:157] Starting to sync pod status with apiserver
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.410760 4807 kubelet.go:1820] Starting kubelet main sync loop.
  • Feb 09 15:23:44 minikube kubelet[4807]: E0209 15:23:44.410816 4807 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.477297 4807 cpu_manager.go:173] [cpumanager] starting with none policy
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.477320 4807 cpu_manager.go:174] [cpumanager] reconciling every 10s
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.477329 4807 policy_none.go:43] [cpumanager] none policy: Start
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.478181 4807 plugin_manager.go:114] Starting Kubelet Plugin Manager
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.503276 4807 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.527413 4807 kubelet_node_status.go:70] Attempting to register node minikube
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.704211 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.704446 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/aec7f878-f624-400e-b3ac-7879ab47ec0c-xtables-lock") pod "kube-proxy-tkmbk" (UID: "aec7f878-f624-400e-b3ac-7879ab47ec0c")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.704519 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-92jtn" (UniqueName: "kubernetes.io/secret/aec7f878-f624-400e-b3ac-7879ab47ec0c-kube-proxy-token-92jtn") pod "kube-proxy-tkmbk" (UID: "aec7f878-f624-400e-b3ac-7879ab47ec0c")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.704592 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c1b171d918aa89531bd5657acb065f84-ca-certs") pod "kube-apiserver-minikube" (UID: "c1b171d918aa89531bd5657acb065f84")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.704661 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c1b171d918aa89531bd5657acb065f84-k8s-certs") pod "kube-apiserver-minikube" (UID: "c1b171d918aa89531bd5657acb065f84")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.704803 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c1b171d918aa89531bd5657acb065f84-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "c1b171d918aa89531bd5657acb065f84")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705005 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-kubeconfig") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705132 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9c994ea62a2d8d6f1bb7498f10aa6fcf-kubeconfig") pod "kube-scheduler-minikube" (UID: "9c994ea62a2d8d6f1bb7498f10aa6fcf")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705208 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/aec7f878-f624-400e-b3ac-7879ab47ec0c-kube-proxy") pod "kube-proxy-tkmbk" (UID: "aec7f878-f624-400e-b3ac-7879ab47ec0c")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705283 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/aec7f878-f624-400e-b3ac-7879ab47ec0c-lib-modules") pod "kube-proxy-tkmbk" (UID: "aec7f878-f624-400e-b3ac-7879ab47ec0c")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705340 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/8df3b32a9a938b2ec4d0d2782b34f10e-etcd-certs") pod "etcd-minikube" (UID: "8df3b32a9a938b2ec4d0d2782b34f10e")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705395 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-ca-certs") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705451 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705665 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/8df3b32a9a938b2ec4d0d2782b34f10e-etcd-data") pod "etcd-minikube" (UID: "8df3b32a9a938b2ec4d0d2782b34f10e")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.705942 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/0ae6cf402f641e9b795a3aebca394220-k8s-certs") pod "kube-controller-manager-minikube" (UID: "0ae6cf402f641e9b795a3aebca394220")
  • Feb 09 15:23:44 minikube kubelet[4807]: I0209 15:23:44.706094 4807 reconciler.go:156] Reconciler: start to sync state
  • Feb 09 15:23:45 minikube kubelet[4807]: I0209 15:23:45.394000 4807 kubelet_node_status.go:112] Node minikube was previously registered
  • Feb 09 15:23:45 minikube kubelet[4807]: I0209 15:23:45.394079 4807 kubelet_node_status.go:73] Successfully registered node minikube
  • Feb 09 15:23:45 minikube kubelet[4807]: I0209 15:23:45.508553 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-dprnq" (UniqueName: "kubernetes.io/secret/20756ee1-7bca-4a20-b7bb-f46230f3c719-storage-provisioner-token-dprnq") pod "storage-provisioner" (UID: "20756ee1-7bca-4a20-b7bb-f46230f3c719")
  • Feb 09 15:23:45 minikube kubelet[4807]: I0209 15:23:45.508614 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/20756ee1-7bca-4a20-b7bb-f46230f3c719-tmp") pod "storage-provisioner" (UID: "20756ee1-7bca-4a20-b7bb-f46230f3c719")
  • Feb 09 15:23:46 minikube kubelet[4807]: E0209 15:23:46.008320 4807 kubelet.go:1662] Failed creating a mirror pod for "kube-scheduler-minikube_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf)": pods "kube-scheduler-minikube" already exists
  • Feb 09 15:23:47 minikube kubelet[4807]: I0209 15:23:47.214091 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-gc9r4" (UniqueName: "kubernetes.io/secret/61df07f7-738e-4543-9398-a144fac001b5-coredns-token-gc9r4") pod "coredns-6955765f44-wprg2" (UID: "61df07f7-738e-4543-9398-a144fac001b5")
  • Feb 09 15:23:47 minikube kubelet[4807]: I0209 15:23:47.214457 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/61df07f7-738e-4543-9398-a144fac001b5-config-volume") pod "coredns-6955765f44-wprg2" (UID: "61df07f7-738e-4543-9398-a144fac001b5")
  • Feb 09 15:23:47 minikube kubelet[4807]: I0209 15:23:47.214492 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-gc9r4" (UniqueName: "kubernetes.io/secret/c17abad8-a59d-46e3-9dd1-fbca191e2416-coredns-token-gc9r4") pod "coredns-6955765f44-2cfq5" (UID: "c17abad8-a59d-46e3-9dd1-fbca191e2416")
  • Feb 09 15:23:47 minikube kubelet[4807]: I0209 15:23:47.214509 4807 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c17abad8-a59d-46e3-9dd1-fbca191e2416-config-volume") pod "coredns-6955765f44-2cfq5" (UID: "c17abad8-a59d-46e3-9dd1-fbca191e2416")
  • Feb 09 15:23:48 minikube kubelet[4807]: W0209 15:23:48.189840 4807 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-2cfq5 through plugin: invalid network status for
  • Feb 09 15:23:48 minikube kubelet[4807]: W0209 15:23:48.195778 4807 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-wprg2 through plugin: invalid network status for
  • Feb 09 15:23:48 minikube kubelet[4807]: W0209 15:23:48.447004 4807 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-2cfq5 through plugin: invalid network status for
  • Feb 09 15:23:48 minikube kubelet[4807]: W0209 15:23:48.450245 4807 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-wprg2 through plugin: invalid network status for
  • ==> storage-provisioner ["57946b280cca"] <==

The operating system version:

Windows 10 Pro (1909) with Hyper-V

When using minikube version 1.5.2 or version 1.6.2, it is working fine: the file C:\Users\Sebastien\.minikube\files\etc\hosts is correctly copied into the VM as /etc/hosts after minikube has started. But it doesn't work anymore with minikube version 1.7.2.

Two related questions:

tstromberg commented 4 years ago

@sraillard - The output of minikube start --alsologtostderr -v=1 would be very helpful. Since this is a regression, I'd like to see this bug fixed ASAP.

I suspect I broke this feature, but I'm surprised that the integration tests did not find this.

sraillard commented 4 years ago

Here are the logs when starting minikube 1.7.2 and also when starting minikube 1.6.2.

The difference I can see is that these lines from the 1.6.2 logs are missing from the 1.7.2 logs:

I0211 22:54:59.378562   16560 ssh_runner.go:156] Checked if /etc/hosts exists, but got error: source file and destination file are different sizes
I0211 22:54:59.379603   16560 ssh_runner.go:175] Transferring 115 bytes to /etc/hosts
I0211 22:54:59.379603   16560 ssh_runner.go:194] hosts: copied 115 bytes

It looks like the file copy isn't being performed at all.

log-start-1.6.2.txt log-start-1.7.2.txt

tstromberg commented 4 years ago

Looks like a filepath confusion bug:

I0211 22:59:05.951469    5180 ssh_runner.go:155] Checked if /etc\hosts exists, but got error: Process exited with status 1
I0211 22:59:05.951809    5180 ssh_runner.go:174] Transferring 115 bytes to /etc\hosts
I0211 22:59:05.951809    5180 ssh_runner.go:193] etc\hosts: copied 115 bytes
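The `/etc\hosts` in the log above shows the bug: the destination path for the Linux guest was built with the host's path separator, so on Windows a backslash leaks into what should be a slash-only remote path. A minimal sketch of the defensive fix (the `remoteDest` helper is hypothetical, not minikube's actual ssh_runner code): normalize any backslashes, then join with `path.Join`, which always uses `/`, instead of `filepath.Join`, which follows the host OS separator.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// remoteDest builds a destination path for a file inside the Linux guest.
// Hypothetical helper for illustration: the guest always uses '/', so we
// replace Windows-style '\' separators coming from the host, then join
// with path.Join (slash-only) rather than filepath.Join (host-dependent,
// which is what produced "/etc\hosts" on Windows).
func remoteDest(parts ...string) string {
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, `\`, "/")
	}
	return path.Join(parts...)
}

func main() {
	// A relative path as produced on a Windows host.
	fmt.Println(remoteDest("/", `etc\hosts`)) // /etc/hosts
	fmt.Println(remoteDest("/etc", "hosts"))  // /etc/hosts
}
```

Note that `filepath.ToSlash` alone would not be enough here: it only rewrites the *current* OS separator, so backslashes embedded in a path string survive when the code happens to run on a Unix host.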
sraillard commented 4 years ago

Ah yes, I didn't see it! A mismatch between the Linux and Windows path separators...

tstromberg commented 4 years ago

@sraillard - Do you mind confirming that the PR I wrote fixes your issue? Here is an updated Windows executable containing the fix:

https://storage.googleapis.com/minikube-builds/6605/minikube-windows-amd64.exe

If not, please share the --alsologtostderr output it provides.

sraillard commented 4 years ago

I have tested and now the file copy is working as expected, thank you!

medyagh commented 4 years ago

done by PR