kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

VirtualBox macOS: minikube is unable to connect to the VM #4540

Closed kmayura1 closed 4 years ago

kmayura1 commented 5 years ago

I restarted my macOS Mojave machine and reinstalled Docker after a couple of minikube start hangs. I run these steps before every minikube start, and have tried other versions too, but the issue persists:

rm -rf ~/.minikube
rm -rf ~/.kube
brew cask reinstall virtualbox
brew reinstall kubernetes-cli
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.1/minikube-darwin-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
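For anyone repeating this reset cycle, the steps above can be sketched as a small script. This is only a convenience sketch: the commands are the ones from this report, while the DRY_RUN flag and run() wrapper are names of my own, added so the script prints what it would do instead of wiping state by default.

```shell
#!/bin/sh
# Dry-run sketch of the reset steps above. By default it only prints the
# commands; set DRY_RUN=0 to actually wipe minikube state and reinstall.
# Assumes Homebrew and curl are available, as in the original report.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run rm -rf "$HOME/.minikube" "$HOME/.kube"   # wipe minikube and kubectl state
run brew cask reinstall virtualbox           # reinstall the VM driver
run brew reinstall kubernetes-cli            # reinstall kubectl
run curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.1/minikube-darwin-amd64
run chmod +x minikube
run sudo cp minikube /usr/local/bin/
run rm minikube
```

Run it once in dry-run mode first to confirm the command list, then rerun with DRY_RUN=0.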

minikube start
šŸ˜„  minikube v1.1.1 on darwin (amd64)
šŸ”„  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
šŸ³  Configuring environment for Kubernetes v1.14.3 on Docker 18.09.6
šŸ’¾  Downloading kubeadm v1.14.3
šŸ’¾  Downloading kubelet v1.14.3
šŸšœ  Pulling images ...
šŸš€  Launching Kubernetes ...

Minikube logs:

I0620 12:52:39.138524 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0620 12:52:39.146724 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2019-06-20 12:50:53 UTC, end at Thu 2019-06-20 12:55:34 UTC. --
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.146561 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.247011 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.348321 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.448682 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.548808 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.649491 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.749727 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.850310 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.950992 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:33 minikube kubelet[3230]: I0620 12:52:33.996984 3230 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 20 12:52:33 minikube kubelet[3230]: I0620 12:52:33.997464 3230 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 20 12:52:33 minikube kubelet[3230]: I0620 12:52:33.997814 3230 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 20 12:52:34 minikube kubelet[3230]: I0620 12:52:34.004385 3230 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.051474 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.152419 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.253045 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.353602 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.456007 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.557146 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.657687 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.758119 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.858423 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:34 minikube kubelet[3230]: E0620 12:52:34.958695 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.059076 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.159999 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.261426 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.362034 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.463033 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.563441 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.663668 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.764994 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.866000 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:35 minikube kubelet[3230]: E0620 12:52:35.966534 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:36 minikube kubelet[3230]: E0620 12:52:36.067169 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:36 minikube kubelet[3230]: E0620 12:52:36.167937 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:36 minikube kubelet[3230]: E0620 12:52:36.264483 3230 controller.go:194] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Jun 20 12:52:36 minikube kubelet[3230]: I0620 12:52:36.266752 3230 reconciler.go:154] Reconciler: start to sync state
Jun 20 12:52:36 minikube kubelet[3230]: E0620 12:52:36.268165 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:36 minikube kubelet[3230]: I0620 12:52:36.287922 3230 kubelet_node_status.go:75] Successfully registered node minikube
Jun 20 12:52:36 minikube kubelet[3230]: E0620 12:52:36.345166 3230 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: namespaces "kube-node-lease" not found
Jun 20 12:52:47 minikube kubelet[3230]: I0620 12:52:47.112171 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/4d96ec6e-935a-11e9-b941-080027ad4350-lib-modules") pod "kube-proxy-j5gx8" (UID: "4d96ec6e-935a-11e9-b941-080027ad4350")
Jun 20 12:52:47 minikube kubelet[3230]: I0620 12:52:47.112583 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-4rtnv" (UniqueName: "kubernetes.io/secret/4d96ec6e-935a-11e9-b941-080027ad4350-kube-proxy-token-4rtnv") pod "kube-proxy-j5gx8" (UID: "4d96ec6e-935a-11e9-b941-080027ad4350")
Jun 20 12:52:47 minikube kubelet[3230]: I0620 12:52:47.112759 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4d96ec6e-935a-11e9-b941-080027ad4350-kube-proxy") pod "kube-proxy-j5gx8" (UID: "4d96ec6e-935a-11e9-b941-080027ad4350")
Jun 20 12:52:47 minikube kubelet[3230]: I0620 12:52:47.112811 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/4d96ec6e-935a-11e9-b941-080027ad4350-xtables-lock") pod "kube-proxy-j5gx8" (UID: "4d96ec6e-935a-11e9-b941-080027ad4350")
Jun 20 12:52:47 minikube kubelet[3230]: I0620 12:52:47.919513 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/4e1a7293-935a-11e9-b941-080027ad4350-tmp") pod "storage-provisioner" (UID: "4e1a7293-935a-11e9-b941-080027ad4350")
Jun 20 12:52:47 minikube kubelet[3230]: I0620 12:52:47.919638 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-zrl9x" (UniqueName: "kubernetes.io/secret/4e1a7293-935a-11e9-b941-080027ad4350-storage-provisioner-token-zrl9x") pod "storage-provisioner" (UID: "4e1a7293-935a-11e9-b941-080027ad4350")
Jun 20 12:52:48 minikube kubelet[3230]: I0620 12:52:48.020147 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4d8460c8-935a-11e9-b941-080027ad4350-config-volume") pod "coredns-fb8b8dccf-sgz9v" (UID: "4d8460c8-935a-11e9-b941-080027ad4350")
Jun 20 12:52:48 minikube kubelet[3230]: I0620 12:52:48.020191 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4d81a4f1-935a-11e9-b941-080027ad4350-config-volume") pod "coredns-fb8b8dccf-jf5wq" (UID: "4d81a4f1-935a-11e9-b941-080027ad4350")
Jun 20 12:52:48 minikube kubelet[3230]: I0620 12:52:48.020211 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-2ggqz" (UniqueName: "kubernetes.io/secret/4d81a4f1-935a-11e9-b941-080027ad4350-coredns-token-2ggqz") pod "coredns-fb8b8dccf-jf5wq" (UID: "4d81a4f1-935a-11e9-b941-080027ad4350")
Jun 20 12:52:48 minikube kubelet[3230]: I0620 12:52:48.020280 3230 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-2ggqz" (UniqueName: "kubernetes.io/secret/4d8460c8-935a-11e9-b941-080027ad4350-coredns-token-2ggqz") pod "coredns-fb8b8dccf-sgz9v" (UID: "4d8460c8-935a-11e9-b941-080027ad4350")
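The repeated node "minikube" not found lines are the kubelet retrying while the node registers, so they drown out the handful of genuinely informative entries. One way to skim a dump like this is to keep only error-level (E-prefixed klog) lines and drop the repeated noise. This is a sketch only: it uses a tiny inline sample file as a stand-in for the full log above, and the file path is my own choice.

```shell
# Sketch: filter a kubelet log dump down to error-level lines, minus the
# repeated registration noise. The heredoc is a small stand-in sample.
cat > /tmp/kubelet-sample.log <<'EOF'
Jun 20 12:52:33 minikube kubelet[3230]: E0620 12:52:33.146561 3230 kubelet.go:2244] node "minikube" not found
Jun 20 12:52:36 minikube kubelet[3230]: E0620 12:52:36.345166 3230 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: namespaces "kube-node-lease" not found
Jun 20 12:52:36 minikube kubelet[3230]: I0620 12:52:36.287922 3230 kubelet_node_status.go:75] Successfully registered node minikube
EOF

# Keep E-level kubelet lines, drop the repeated "node ... not found" retries.
# Prints only the kube-node-lease line from the sample.
grep 'kubelet\[3230\]: E' /tmp/kubelet-sample.log | grep -v 'node "minikube" not found'
```

On the full log the same two greps surface the controller.go and node-lease errors, which are more useful for diagnosis than the retry spam.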

sharifelgamal commented 5 years ago

Hi there, could you try the start command, but with minikube start -v 8 --alsologtostderr? Those logs will be a lot more granular and hopefully more helpful for figuring out what's happening.

kmayura1 commented 5 years ago

Hi, thank you for the reply. I got these logs from the above command:

I0621 06:29:49.893954    8194 utils.go:240] > Jun 21 05:23:29 minikube kubelet[3217]: I0621 05:23:29.601737    3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-wrx72" (UniqueName: "kubernetes.io/secret/b4135fb1-93e4-11e9-95b4-080027d4e105-storage-provisioner-token-wrx72") pod "storage-provisioner" (UID: "b4135fb1-93e4-11e9-95b4-080027d4e105")
I0621 06:29:49.893989    8194 utils.go:240] > Jun 21 05:23:29 minikube kubelet[3217]: I0621 05:23:29.602080    3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/b4135fb1-93e4-11e9-95b4-080027d4e105-tmp") pod "storage-provisioner" (UID: "b4135fb1-93e4-11e9-95b4-080027d4e105")
I0621 06:29:49.894019    8194 utils.go:240] > Jun 21 05:23:29 minikube kubelet[3217]: W0621 05:23:29.813314    3217 container.go:409] Failed to create summary reader for "/system.slice/run-r308ba0b90998406681f0af4efc27f199.scope": none of the resources are being tracked.
I0621 06:29:49.901227    8194 logs.go:76] Gathering logs for dmesg ...
I0621 06:29:49.901249    8194 ssh_runner.go:137] Run with output: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 200
I0621 06:29:49.908995    8194 utils.go:240] > [Jun21 05:21] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
I0621 06:29:49.909052    8194 utils.go:240] > [ +16.870238] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
I0621 06:29:49.909080    8194 utils.go:240] > [Jun21 05:22] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
I0621 06:29:49.909112    8194 utils.go:240] > [  +0.002403] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
I0621 06:29:49.909136    8194 utils.go:240] > [  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
I0621 06:29:49.909164    8194 utils.go:240] > [  +0.301947] vboxguest: loading out-of-tree module taints kernel.
I0621 06:29:49.909182    8194 utils.go:240] > [  +0.018997] vgdrvHeartbeatInit: Setting up heartbeat to trigger every 2000 milliseconds
I0621 06:29:49.909202    8194 utils.go:240] > [  +0.000181] vboxguest: misc device minor 57, IRQ 20, I/O port d020, MMIO at 00000000f0000000 (size 0x400000)
I0621 06:29:49.909220    8194 utils.go:240] > [  +0.000164] vboxvideo: Unknown symbol ttm_bo_mmap (err 0)
I0621 06:29:49.909231    8194 utils.go:240] > [  +0.000012] vboxvideo: Unknown symbol ttm_bo_global_release (err 0)
I0621 06:29:49.909240    8194 utils.go:240] > [  +0.000005] vboxvideo: Unknown symbol ttm_pool_unpopulate (err 0)
I0621 06:29:49.909250    8194 utils.go:240] > [  +0.000004] vboxvideo: Unknown symbol ttm_bo_manager_func (err 0)
I0621 06:29:49.909259    8194 utils.go:240] > [  +0.000003] vboxvideo: Unknown symbol ttm_bo_global_init (err 0)
I0621 06:29:49.909268    8194 utils.go:240] > [  +0.000001] vboxvideo: Unknown symbol ttm_bo_default_io_mem_pfn (err 0)
I0621 06:29:49.909278    8194 utils.go:240] > [  +0.000006] vboxvideo: Unknown symbol ttm_bo_device_release (err 0)
I0621 06:29:49.909287    8194 utils.go:240] > [  +0.000013] vboxvideo: Unknown symbol ttm_bo_kunmap (err 0)
I0621 06:29:49.909298    8194 utils.go:240] > [  +0.000003] vboxvideo: Unknown symbol ttm_bo_del_sub_from_lru (err 0)
I0621 06:29:49.909385    8194 utils.go:240] > [  +0.000007] vboxvideo: Unknown symbol ttm_bo_device_init (err 0)
I0621 06:29:49.909401    8194 utils.go:240] > [  +0.000001] vboxvideo: Unknown symbol ttm_bo_init_mm (err 0)
I0621 06:29:49.909416    8194 utils.go:240] > [  +0.000001] vboxvideo: Unknown symbol ttm_bo_dma_acc_size (err 0)
I0621 06:29:49.909426    8194 utils.go:240] > [  +0.000004] vboxvideo: Unknown symbol ttm_tt_init (err 0)
I0621 06:29:49.909441    8194 utils.go:240] > [  +0.000002] vboxvideo: Unknown symbol ttm_bo_kmap (err 0)
I0621 06:29:49.909452    8194 utils.go:240] > [  +0.000007] vboxvideo: Unknown symbol ttm_bo_add_to_lru (err 0)
I0621 06:29:49.909462    8194 utils.go:240] > [  +0.000002] vboxvideo: Unknown symbol ttm_bo_unref (err 0)
I0621 06:29:49.909474    8194 utils.go:240] > [  +0.000002] vboxvideo: Unknown symbol ttm_mem_global_release (err 0)
I0621 06:29:49.909494    8194 utils.go:240] > [  +0.000002] vboxvideo: Unknown symbol ttm_mem_global_init (err 0)
I0621 06:29:49.909504    8194 utils.go:240] > [  +0.000010] vboxvideo: Unknown symbol ttm_bo_init (err 0)
I0621 06:29:49.909513    8194 utils.go:240] > [  +0.000002] vboxvideo: Unknown symbol ttm_bo_validate (err 0)
I0621 06:29:49.909522    8194 utils.go:240] > [  +0.000008] vboxvideo: Unknown symbol ttm_tt_fini (err 0)
I0621 06:29:49.909533    8194 utils.go:240] > [  +0.000016] vboxvideo: Unknown symbol ttm_bo_eviction_valuable (err 0)
I0621 06:29:49.909543    8194 utils.go:240] > [  +0.000002] vboxvideo: Unknown symbol ttm_pool_populate (err 0)
I0621 06:29:49.909551    8194 utils.go:240] > [  +0.170016] hpet1: lost 629 rtc interrupts
I0621 06:29:49.909565    8194 utils.go:240] > [  +0.011634] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
I0621 06:29:49.909578    8194 utils.go:240] > [  +0.043257] VBoxService 5.1.38 r122592 (verbosity: 0) linux.amd64 (May  9 2018 12:22:30) release log
I0621 06:29:49.909589    8194 utils.go:240] >               00:00:00.008594 main     Log opened 2019-06-21T05:22:12.774537000Z
I0621 06:29:49.909599    8194 utils.go:240] > [  +0.001315] 00:00:00.009908 main     OS Product: Linux
I0621 06:29:49.909608    8194 utils.go:240] > [  +0.000155] 00:00:00.010100 main     OS Release: 4.15.0
I0621 06:29:49.909619    8194 utils.go:240] > [  +0.000075] 00:00:00.010179 main     OS Version: #1 SMP Thu Jun 6 15:07:18 PDT 2019
I0621 06:29:49.909630    8194 utils.go:240] > [  +0.000093] 00:00:00.010268 main     Executable: /usr/sbin/VBoxService
I0621 06:29:49.909638    8194 utils.go:240] >               00:00:00.010268 main     Process ID: 2057
I0621 06:29:49.909648    8194 utils.go:240] >               00:00:00.010269 main     Package type: LINUX_64BITS_GENERIC
I0621 06:29:49.909659    8194 utils.go:240] > [  +0.000078] 00:00:00.010357 main     5.1.38 r122592 started. Verbose level = 0
I0621 06:29:49.909671    8194 utils.go:240] > [  +5.014631] hpet1: lost 282 rtc interrupts
I0621 06:29:49.909680    8194 utils.go:240] > [  +5.003479] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909695    8194 utils.go:240] > [  +1.810172] systemd-fstab-generator[2284]: Ignoring "noauto" for root device
I0621 06:29:49.909705    8194 utils.go:240] > [  +3.230159] hpet_rtc_timer_reinit: 69 callbacks suppressed
I0621 06:29:49.909713    8194 utils.go:240] > [  +0.000001] hpet1: lost 320 rtc interrupts
I0621 06:29:49.909720    8194 utils.go:240] > [  +5.023070] hpet1: lost 320 rtc interrupts
I0621 06:29:49.909727    8194 utils.go:240] > [  +5.001568] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909735    8194 utils.go:240] > [  +5.001352] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909742    8194 utils.go:240] > [  +5.000639] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909750    8194 utils.go:240] > [  +5.000681] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909761    8194 utils.go:240] > [  +4.821036] systemd-fstab-generator[2970]: Ignoring "noauto" for root device
I0621 06:29:49.909768    8194 utils.go:240] > [  +0.180097] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909776    8194 utils.go:240] > [  +5.001034] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909786    8194 utils.go:240] > [Jun21 05:23] systemd-fstab-generator[3198]: Ignoring "noauto" for root device
I0621 06:29:49.909794    8194 utils.go:240] > [  +1.106947] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909805    8194 utils.go:240] > [  +5.002379] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909815    8194 utils.go:240] > [  +5.006720] hpet_rtc_timer_reinit: 21 callbacks suppressed
I0621 06:29:49.909823    8194 utils.go:240] > [  +0.000001] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909831    8194 utils.go:240] > [  +5.001219] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909838    8194 utils.go:240] > [  +5.001371] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909846    8194 utils.go:240] > [  +5.000895] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909855    8194 utils.go:240] > [ +10.001922] hpet_rtc_timer_reinit: 36 callbacks suppressed
I0621 06:29:49.909863    8194 utils.go:240] > [  +0.000010] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909870    8194 utils.go:240] > [  +5.001404] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909877    8194 utils.go:240] > [  +5.002824] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909886    8194 utils.go:240] > [  +5.000417] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909893    8194 utils.go:240] > [  +5.001004] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909900    8194 utils.go:240] > [Jun21 05:24] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909907    8194 utils.go:240] > [  +5.002121] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909920    8194 utils.go:240] > [  +3.052271] NFSD: Unable to end grace period: -110
I0621 06:29:49.909928    8194 utils.go:240] > [  +1.952793] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909935    8194 utils.go:240] > [  +5.011618] hpet1: lost 320 rtc interrupts
I0621 06:29:49.909943    8194 utils.go:240] > [  +5.004769] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909951    8194 utils.go:240] > [  +5.007508] hpet1: lost 320 rtc interrupts
I0621 06:29:49.909958    8194 utils.go:240] > [  +5.004808] hpet1: lost 318 rtc interrupts
I0621 06:29:49.909965    8194 utils.go:240] > [  +5.000251] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909973    8194 utils.go:240] > [  +5.001911] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909981    8194 utils.go:240] > [  +5.004280] hpet1: lost 319 rtc interrupts
I0621 06:29:49.909994    8194 utils.go:240] > [  +5.000114] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910002    8194 utils.go:240] > [  +5.002489] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910009    8194 utils.go:240] > [Jun21 05:25] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910016    8194 utils.go:240] > [  +5.002654] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910023    8194 utils.go:240] > [  +5.002400] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910030    8194 utils.go:240] > [  +5.002743] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910040    8194 utils.go:240] > [  +5.002692] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910047    8194 utils.go:240] > [  +5.002870] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910054    8194 utils.go:240] > [  +5.002656] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910062    8194 utils.go:240] > [  +5.006071] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910069    8194 utils.go:240] > [  +5.008863] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910423    8194 utils.go:240] > [  +5.004545] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910436    8194 utils.go:240] > [  +5.006925] hpet1: lost 320 rtc interrupts
I0621 06:29:49.910443    8194 utils.go:240] > [  +5.008160] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910451    8194 utils.go:240] > [Jun21 05:26] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910458    8194 utils.go:240] > [  +5.007660] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910466    8194 utils.go:240] > [  +5.006719] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910473    8194 utils.go:240] > [  +5.003364] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910479    8194 utils.go:240] > [  +5.006294] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910487    8194 utils.go:240] > [  +5.012045] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910493    8194 utils.go:240] > [  +5.006124] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910502    8194 utils.go:240] > [  +5.001443] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910509    8194 utils.go:240] > [  +5.000975] hpet1: lost 320 rtc interrupts
I0621 06:29:49.910516    8194 utils.go:240] > [  +5.005289] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910524    8194 utils.go:240] > [  +5.002124] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910530    8194 utils.go:240] > [  +5.003222] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910546    8194 utils.go:240] > [Jun21 05:27] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910553    8194 utils.go:240] > [  +5.004351] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910562    8194 utils.go:240] > [  +5.002108] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910579    8194 utils.go:240] > [  +5.002960] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910587    8194 utils.go:240] > [  +5.000213] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910620    8194 utils.go:240] > [  +5.001675] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910630    8194 utils.go:240] > [  +5.003600] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910647    8194 utils.go:240] > [  +5.004167] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910654    8194 utils.go:240] > [  +5.002763] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910661    8194 utils.go:240] > [  +5.002282] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910674    8194 utils.go:240] > [  +5.001907] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910687    8194 utils.go:240] > [  +5.004519] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910720    8194 utils.go:240] > [Jun21 05:28] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910733    8194 utils.go:240] > [  +5.002927] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910742    8194 utils.go:240] > [  +5.001698] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910749    8194 utils.go:240] > [  +5.001134] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910756    8194 utils.go:240] > [  +5.005464] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910764    8194 utils.go:240] > [  +4.999329] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910772    8194 utils.go:240] > [  +5.000287] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910778    8194 utils.go:240] > [  +5.000896] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910787    8194 utils.go:240] > [  +5.002106] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910794    8194 utils.go:240] > [  +5.000225] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910804    8194 utils.go:240] > [  +5.000961] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910811    8194 utils.go:240] > [  +5.000395] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910819    8194 utils.go:240] > [Jun21 05:29] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910831    8194 utils.go:240] > [  +5.001243] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910838    8194 utils.go:240] > [  +5.002881] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910846    8194 utils.go:240] > [  +5.004316] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910854    8194 utils.go:240] > [  +5.001537] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910861    8194 utils.go:240] > [  +5.003232] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910869    8194 utils.go:240] > [  +5.003119] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910876    8194 utils.go:240] > [  +5.001300] hpet1: lost 319 rtc interrupts
I0621 06:29:49.910883    8194 utils.go:240] > [  +5.002793] hpet1: lost 318 rtc interrupts
I0621 06:29:49.910890    8194 utils.go:240] > [  +5.003060] hpet1: lost 318 rtc interrupts
I0621 06:29:49.911859    8194 logs.go:76] Gathering logs for kube-apiserver ...
I0621 06:29:49.911875    8194 ssh_runner.go:137] Run with output: docker logs --tail 200 28ef728d43e4
I0621 06:29:49.945667    8194 utils.go:240] ! I0621 05:23:15.215749       1 client.go:352] parsed scheme: ""
I0621 06:29:49.945760    8194 utils.go:240] ! I0621 05:23:15.215774       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.945797    8194 utils.go:240] ! I0621 05:23:15.215803       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.945852    8194 utils.go:240] ! I0621 05:23:15.215840       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.945973    8194 utils.go:240] ! I0621 05:23:15.216217       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.946231    8194 utils.go:240] ! I0621 05:23:15.225586       1 client.go:352] parsed scheme: ""
I0621 06:29:49.946353    8194 utils.go:240] ! I0621 05:23:15.225647       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.946418    8194 utils.go:240] ! I0621 05:23:15.225682       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.946490    8194 utils.go:240] ! I0621 05:23:15.225716       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.946696    8194 utils.go:240] ! I0621 05:23:15.226124       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.946833    8194 utils.go:240] ! I0621 05:23:15.235464       1 client.go:352] parsed scheme: ""
I0621 06:29:49.947037    8194 utils.go:240] ! I0621 05:23:15.235491       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.947312    8194 utils.go:240] ! I0621 05:23:15.235545       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.947332    8194 utils.go:240] ! I0621 05:23:15.235583       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947346    8194 utils.go:240] ! I0621 05:23:15.235760       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947355    8194 utils.go:240] ! I0621 05:23:15.247610       1 client.go:352] parsed scheme: ""
I0621 06:29:49.947376    8194 utils.go:240] ! I0621 05:23:15.247640       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.947391    8194 utils.go:240] ! I0621 05:23:15.247706       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.947405    8194 utils.go:240] ! I0621 05:23:15.247973       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947419    8194 utils.go:240] ! I0621 05:23:15.248313       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947437    8194 utils.go:240] ! I0621 05:23:15.256853       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947458    8194 utils.go:240] ! I0621 05:23:15.257440       1 client.go:352] parsed scheme: ""
I0621 06:29:49.947489    8194 utils.go:240] ! I0621 05:23:15.257462       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.947512    8194 utils.go:240] ! I0621 05:23:15.257489       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.947538    8194 utils.go:240] ! I0621 05:23:15.257741       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947555    8194 utils.go:240] ! I0621 05:23:15.265690       1 client.go:352] parsed scheme: ""
I0621 06:29:49.947578    8194 utils.go:240] ! I0621 05:23:15.265746       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.947602    8194 utils.go:240] ! I0621 05:23:15.265770       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.947625    8194 utils.go:240] ! I0621 05:23:15.265862       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947648    8194 utils.go:240] ! I0621 05:23:15.266137       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947662    8194 utils.go:240] ! I0621 05:23:15.274281       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947672    8194 utils.go:240] ! I0621 05:23:15.274854       1 client.go:352] parsed scheme: ""
I0621 06:29:49.947684    8194 utils.go:240] ! I0621 05:23:15.274988       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.947699    8194 utils.go:240] ! I0621 05:23:15.275055       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.947713    8194 utils.go:240] ! I0621 05:23:15.275181       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947727    8194 utils.go:240] ! I0621 05:23:15.294333       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.947758    8194 utils.go:240] ! W0621 05:23:15.390514       1 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
I0621 06:29:49.947778    8194 utils.go:240] ! W0621 05:23:15.395548       1 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0621 06:29:49.947815    8194 utils.go:240] ! W0621 05:23:15.398682       1 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0621 06:29:49.947841    8194 utils.go:240] ! W0621 05:23:15.399290       1 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0621 06:29:49.947860    8194 utils.go:240] ! W0621 05:23:15.401091       1 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0621 06:29:49.947878    8194 utils.go:240] ! E0621 05:23:16.035455       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
I0621 06:29:49.947920    8194 utils.go:240] ! E0621 05:23:16.035632       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
I0621 06:29:49.947943    8194 utils.go:240] ! E0621 05:23:16.035898       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
I0621 06:29:49.947961    8194 utils.go:240] ! E0621 05:23:16.036117       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
I0621 06:29:49.947980    8194 utils.go:240] ! E0621 05:23:16.036154       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0621 06:29:49.948004    8194 utils.go:240] ! E0621 05:23:16.036167       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0621 06:29:49.948034    8194 utils.go:240] ! I0621 05:23:16.036184       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0621 06:29:49.948060    8194 utils.go:240] ! I0621 05:23:16.036332       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0621 06:29:49.948070    8194 utils.go:240] ! I0621 05:23:16.037773       1 client.go:352] parsed scheme: ""
I0621 06:29:49.948083    8194 utils.go:240] ! I0621 05:23:16.037818       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.948097    8194 utils.go:240] ! I0621 05:23:16.037959       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.948110    8194 utils.go:240] ! I0621 05:23:16.037988       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.948119    8194 utils.go:240] ! I0621 05:23:16.046053       1 client.go:352] parsed scheme: ""
I0621 06:29:49.948131    8194 utils.go:240] ! I0621 05:23:16.046077       1 client.go:352] scheme "" not registered, fallback to default scheme
I0621 06:29:49.948145    8194 utils.go:240] ! I0621 05:23:16.046099       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0621 06:29:49.948159    8194 utils.go:240] ! I0621 05:23:16.046202       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.948173    8194 utils.go:240] ! I0621 05:23:16.046491       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.948187    8194 utils.go:240] ! I0621 05:23:16.059879       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0621 06:29:49.948204    8194 utils.go:240] ! I0621 05:23:17.246614       1 secure_serving.go:116] Serving securely on [::]:8443
I0621 06:29:49.948214    8194 utils.go:240] ! I0621 05:23:17.246666       1 crd_finalizer.go:242] Starting CRDFinalizer
I0621 06:29:49.948226    8194 utils.go:240] ! I0621 05:23:17.247086       1 controller.go:81] Starting OpenAPI AggregationController
I0621 06:29:49.948238    8194 utils.go:240] ! I0621 05:23:17.247599       1 available_controller.go:320] Starting AvailableConditionController
I0621 06:29:49.948251    8194 utils.go:240] ! I0621 05:23:17.247622       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0621 06:29:49.948264    8194 utils.go:240] ! I0621 05:23:17.247715       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0621 06:29:49.948278    8194 utils.go:240] ! I0621 05:23:17.247911       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0621 06:29:49.948291    8194 utils.go:240] ! I0621 05:23:17.248374       1 autoregister_controller.go:139] Starting autoregister controller
I0621 06:29:49.948303    8194 utils.go:240] ! I0621 05:23:17.248493       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0621 06:29:49.948317    8194 utils.go:240] ! I0621 05:23:17.248599       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0621 06:29:49.948328    8194 utils.go:240] ! I0621 05:23:17.248757       1 naming_controller.go:284] Starting NamingConditionController
I0621 06:29:49.948343    8194 utils.go:240] ! I0621 05:23:17.248905       1 establishing_controller.go:73] Starting EstablishingController
I0621 06:29:49.948375    8194 utils.go:240] ! I0621 05:23:17.302795       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0621 06:29:49.948418    8194 utils.go:240] ! I0621 05:23:17.302816       1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
I0621 06:29:49.948453    8194 utils.go:240] ! E0621 05:23:17.303688       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.164, ResourceVersion: 0, AdditionalErrorMsg:
I0621 06:29:49.948479    8194 utils.go:240] ! I0621 05:23:17.355534       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0621 06:29:49.948493    8194 utils.go:240] ! I0621 05:23:17.355561       1 cache.go:39] Caches are synced for autoregister controller
I0621 06:29:49.948506    8194 utils.go:240] ! I0621 05:23:17.361534       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0621 06:29:49.948518    8194 utils.go:240] ! I0621 05:23:17.403813       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
I0621 06:29:49.948531    8194 utils.go:240] ! I0621 05:23:18.245424       1 controller.go:107] OpenAPI AggregationController: Processing item
I0621 06:29:49.948547    8194 utils.go:240] ! I0621 05:23:18.245619       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0621 06:29:49.948578    8194 utils.go:240] ! I0621 05:23:18.245852       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0621 06:29:49.948596    8194 utils.go:240] ! I0621 05:23:18.257623       1 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
I0621 06:29:49.948611    8194 utils.go:240] ! I0621 05:23:18.269547       1 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
I0621 06:29:49.948625    8194 utils.go:240] ! I0621 05:23:18.269759       1 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
I0621 06:29:49.948638    8194 utils.go:240] ! I0621 05:23:18.276561       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0621 06:29:49.948652    8194 utils.go:240] ! I0621 05:23:18.281155       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0621 06:29:49.948666    8194 utils.go:240] ! I0621 05:23:18.287096       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0621 06:29:49.948680    8194 utils.go:240] ! I0621 05:23:18.291141       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0621 06:29:49.948693    8194 utils.go:240] ! I0621 05:23:18.294268       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0621 06:29:49.948705    8194 utils.go:240] ! I0621 05:23:18.297598       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0621 06:29:49.948718    8194 utils.go:240] ! I0621 05:23:18.302204       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0621 06:29:49.948732    8194 utils.go:240] ! I0621 05:23:18.305772       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0621 06:29:49.948746    8194 utils.go:240] ! I0621 05:23:18.310254       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0621 06:29:49.948957    8194 utils.go:240] ! I0621 05:23:18.314488       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0621 06:29:49.948979    8194 utils.go:240] ! I0621 05:23:18.318435       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0621 06:29:49.948997    8194 utils.go:240] ! I0621 05:23:18.321773       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0621 06:29:49.949017    8194 utils.go:240] ! I0621 05:23:18.325479       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0621 06:29:49.949031    8194 utils.go:240] ! I0621 05:23:18.328510       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0621 06:29:49.949054    8194 utils.go:240] ! I0621 05:23:18.333980       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0621 06:29:49.949077    8194 utils.go:240] ! I0621 05:23:18.339230       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0621 06:29:49.949102    8194 utils.go:240] ! I0621 05:23:18.343816       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0621 06:29:49.949122    8194 utils.go:240] ! I0621 05:23:18.348081       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0621 06:29:49.949141    8194 utils.go:240] ! I0621 05:23:18.353979       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0621 06:29:49.949157    8194 utils.go:240] ! I0621 05:23:18.361733       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0621 06:29:49.949174    8194 utils.go:240] ! I0621 05:23:18.366880       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0621 06:29:49.949199    8194 utils.go:240] ! I0621 05:23:18.372224       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0621 06:29:49.949218    8194 utils.go:240] ! I0621 05:23:18.375518       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0621 06:29:49.949244    8194 utils.go:240] ! I0621 05:23:18.379884       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0621 06:29:49.949269    8194 utils.go:240] ! I0621 05:23:18.384125       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0621 06:29:49.949294    8194 utils.go:240] ! I0621 05:23:18.387508       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0621 06:29:49.949314    8194 utils.go:240] ! I0621 05:23:18.391479       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0621 06:29:49.949329    8194 utils.go:240] ! I0621 05:23:18.395215       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0621 06:29:49.949345    8194 utils.go:240] ! I0621 05:23:18.399124       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0621 06:29:49.949363    8194 utils.go:240] ! I0621 05:23:18.403905       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0621 06:29:49.949378    8194 utils.go:240] ! I0621 05:23:18.408209       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0621 06:29:49.949394    8194 utils.go:240] ! I0621 05:23:18.411916       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0621 06:29:49.949409    8194 utils.go:240] ! I0621 05:23:18.415568       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0621 06:29:49.949424    8194 utils.go:240] ! I0621 05:23:18.418989       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0621 06:29:49.949443    8194 utils.go:240] ! I0621 05:23:18.422968       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0621 06:29:49.949468    8194 utils.go:240] ! I0621 05:23:18.427094       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0621 06:29:49.949485    8194 utils.go:240] ! I0621 05:23:18.432513       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0621 06:29:49.949500    8194 utils.go:240] ! I0621 05:23:18.438423       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0621 06:29:49.949515    8194 utils.go:240] ! I0621 05:23:18.442798       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0621 06:29:49.949530    8194 utils.go:240] ! I0621 05:23:18.447155       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0621 06:29:49.949569    8194 utils.go:240] ! I0621 05:23:18.450715       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0621 06:29:49.949589    8194 utils.go:240] ! I0621 05:23:18.455940       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0621 06:29:49.949606    8194 utils.go:240] ! I0621 05:23:18.460608       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0621 06:29:49.949621    8194 utils.go:240] ! I0621 05:23:18.464841       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0621 06:29:49.949638    8194 utils.go:240] ! I0621 05:23:18.468798       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0621 06:29:49.949653    8194 utils.go:240] ! I0621 05:23:18.472371       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0621 06:29:49.949668    8194 utils.go:240] ! I0621 05:23:18.477090       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0621 06:29:49.949684    8194 utils.go:240] ! I0621 05:23:18.480719       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0621 06:29:49.949700    8194 utils.go:240] ! I0621 05:23:18.486091       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0621 06:29:49.949715    8194 utils.go:240] ! I0621 05:23:18.490488       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0621 06:29:49.949730    8194 utils.go:240] ! I0621 05:23:18.497275       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0621 06:29:49.949745    8194 utils.go:240] ! I0621 05:23:18.510370       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0621 06:29:49.949761    8194 utils.go:240] ! I0621 05:23:18.551213       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0621 06:29:49.949782    8194 utils.go:240] ! I0621 05:23:18.592272       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0621 06:29:49.949796    8194 utils.go:240] ! I0621 05:23:18.632447       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0621 06:29:49.949811    8194 utils.go:240] ! I0621 05:23:18.670649       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0621 06:29:49.949825    8194 utils.go:240] ! I0621 05:23:18.710001       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0621 06:29:49.949843    8194 utils.go:240] ! I0621 05:23:18.750929       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0621 06:29:49.949857    8194 utils.go:240] ! I0621 05:23:18.790446       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0621 06:29:49.949873    8194 utils.go:240] ! I0621 05:23:18.831305       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0621 06:29:49.949887    8194 utils.go:240] ! I0621 05:23:18.871602       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0621 06:29:49.949901    8194 utils.go:240] ! I0621 05:23:18.910479       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0621 06:29:49.949917    8194 utils.go:240] ! I0621 05:23:18.950492       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0621 06:29:49.949931    8194 utils.go:240] ! I0621 05:23:18.991081       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0621 06:29:49.949944    8194 utils.go:240] ! I0621 05:23:19.031139       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0621 06:29:49.949960    8194 utils.go:240] ! I0621 05:23:19.070477       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0621 06:29:49.949983    8194 utils.go:240] ! I0621 05:23:19.110979       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0621 06:29:49.950000    8194 utils.go:240] ! I0621 05:23:19.115250       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0621 06:29:49.950017    8194 utils.go:240] ! I0621 05:23:19.150672       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0621 06:29:49.950039    8194 utils.go:240] ! I0621 05:23:19.192619       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0621 06:29:49.950069    8194 utils.go:240] ! I0621 05:23:19.232085       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0621 06:29:49.950109    8194 utils.go:240] ! I0621 05:23:19.270491       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0621 06:29:49.950134    8194 utils.go:240] ! I0621 05:23:19.310368       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0621 06:29:49.950156    8194 utils.go:240] ! I0621 05:23:19.351540       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0621 06:29:49.950174    8194 utils.go:240] ! I0621 05:23:19.390969       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0621 06:29:49.950198    8194 utils.go:240] ! I0621 05:23:19.435440       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0621 06:29:49.950219    8194 utils.go:240] ! I0621 05:23:19.471781       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0621 06:29:49.950237    8194 utils.go:240] ! I0621 05:23:19.510026       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0621 06:29:49.950253    8194 utils.go:240] ! I0621 05:23:19.550542       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0621 06:29:49.950269    8194 utils.go:240] ! I0621 05:23:19.590653       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0621 06:29:49.950289    8194 utils.go:240] ! I0621 05:23:19.631419       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0621 06:29:49.950317    8194 utils.go:240] ! I0621 05:23:19.672243       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0621 06:29:49.950337    8194 utils.go:240] ! I0621 05:23:19.710441       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0621 06:29:49.950354    8194 utils.go:240] ! I0621 05:23:19.761456       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0621 06:29:49.950370    8194 utils.go:240] ! I0621 05:23:19.792184       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0621 06:29:49.950387    8194 utils.go:240] ! I0621 05:23:19.833175       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0621 06:29:49.950402    8194 utils.go:240] ! I0621 05:23:19.870573       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0621 06:29:49.950419    8194 utils.go:240] ! I0621 05:23:19.910271       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0621 06:29:49.950434    8194 utils.go:240] ! I0621 05:23:19.951626       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0621 06:29:49.950450    8194 utils.go:240] ! I0621 05:23:19.991151       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0621 06:29:49.950466    8194 utils.go:240] ! I0621 05:23:20.032423       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0621 06:29:49.950482    8194 utils.go:240] ! I0621 05:23:20.070533       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0621 06:29:49.950497    8194 utils.go:240] ! I0621 05:23:20.108293       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0621 06:29:49.950513    8194 utils.go:240] ! I0621 05:23:20.110894       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0621 06:29:49.950529    8194 utils.go:240] ! I0621 05:23:20.150690       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0621 06:29:49.950544    8194 utils.go:240] ! I0621 05:23:20.191366       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0621 06:29:49.950559    8194 utils.go:240] ! I0621 05:23:20.230895       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0621 06:29:49.950576    8194 utils.go:240] ! I0621 05:23:20.270767       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0621 06:29:49.950592    8194 utils.go:240] ! I0621 05:23:20.311905       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0621 06:29:49.950605    8194 utils.go:240] ! I0621 05:23:20.328788       1 controller.go:606] quota admission added evaluator for: endpoints
I0621 06:29:49.950632    8194 utils.go:240] ! I0621 05:23:20.352629       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0621 06:29:49.950647    8194 utils.go:240] ! I0621 05:23:20.388926       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0621 06:29:49.950667    8194 utils.go:240] ! I0621 05:23:20.391752       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0621 06:29:49.950685    8194 utils.go:240] ! I0621 05:23:20.430280       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0621 06:29:49.950702    8194 utils.go:240] ! I0621 05:23:20.471146       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0621 06:29:49.950718    8194 utils.go:240] ! I0621 05:23:20.510337       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0621 06:29:49.950737    8194 utils.go:240] ! I0621 05:23:20.551471       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0621 06:29:49.950753    8194 utils.go:240] ! I0621 05:23:20.590771       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0621 06:29:49.950769    8194 utils.go:240] ! I0621 05:23:20.633037       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0621 06:29:49.950786    8194 utils.go:240] ! W0621 05:23:20.814341       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.164]
I0621 06:29:49.950799    8194 utils.go:240] ! I0621 05:23:21.199652       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0621 06:29:49.950812    8194 utils.go:240] ! I0621 05:23:21.906547       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0621 06:29:49.950824    8194 utils.go:240] ! I0621 05:23:22.186836       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0621 06:29:49.950836    8194 utils.go:240] ! I0621 05:23:27.725890       1 controller.go:606] quota admission added evaluator for: namespaces
I0621 06:29:49.950848    8194 utils.go:240] ! I0621 05:23:28.329173       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0621 06:29:49.950861    8194 utils.go:240] ! I0621 05:23:28.356565       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0621 06:29:49.953501    8194 logs.go:76] Gathering logs for coredns ...
I0621 06:29:49.953510    8194 ssh_runner.go:137] Run with output: docker logs --tail 200 4bc600330725
I0621 06:29:49.994561    8194 utils.go:240] > .:53
I0621 06:29:49.994586    8194 utils.go:240] > 2019-06-21T05:23:50.493Z [INFO] CoreDNS-1.3.1
I0621 06:29:49.994597    8194 utils.go:240] > 2019-06-21T05:23:50.494Z [INFO] linux/amd64, go1.11.4, 6b56a9c
I0621 06:29:49.994603    8194 utils.go:240] > CoreDNS-1.3.1
I0621 06:29:49.994610    8194 utils.go:240] > linux/amd64, go1.11.4, 6b56a9c
I0621 06:29:49.994624    8194 utils.go:240] > 2019-06-21T05:23:50.494Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
W0621 06:29:49.996499    8194 exit.go:100] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: connect: network is unreachable

šŸ’£  Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: i/o timeout
creating clusterrolebinding: Post https://192.168.99.164:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.164:8443: connect: network is unreachable

šŸ˜æ  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
šŸ‘‰  https://github.com/kubernetes/minikube/issues/new
kmayura1 commented 5 years ago

Hi, any update please?

tigerpeng2001 commented 5 years ago

I ran into a similar issue today. The logs pointed to a problem with VirtualBox, so I uninstalled and reinstalled it, and now minikube starts. My VirtualBox was installed from the dmg file.

kmayura1 commented 5 years ago

Sadly, I see the same logs even after reinstalling VirtualBox from scratch:

šŸš€ Launching Kubernetes ...

šŸ’£ Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
Temporary Error: creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: i/o timeout
creating clusterrolebinding: Post https://192.168.99.165:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.165:8443: connect: network is unreachable

šŸ˜æ Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
šŸ‘‰ https://github.com/kubernetes/minikube/issues/new
āŒ Problems detected in "kube-addon-manager":
error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
error: no objects passed to apply

tstromberg commented 5 years ago

I'm not sure what's going on with the apiserver here, but it's mighty suspicious.

If you don't mind, I'm curious whether minikube v1.2.0 / Kubernetes 1.15 suffers from this issue. Please run minikube delete first to clear up the old state.

kmayura1 commented 5 years ago

logs.txt

Unfortunately that didn't help; I removed minikube entirely and reinstalled, still the same issue.

šŸ˜„ minikube v1.2.0 on darwin (amd64)
šŸ”„ Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
šŸ³ Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
šŸšœ Pulling images ...
šŸš€ Launching Kubernetes ...

attached minikube logs

bzied321 commented 5 years ago

Hi, I experienced the same problem on macOS Mojave 10.14.5 and fixed it using suggestions from other issue threads. Reinstalling both VirtualBox and minikube did not fix anything - the source of my problem was being connected to Cisco AnyConnect VPN. Here are the steps I used:

$ minikube stop
$ minikube delete
$ brew cask uninstall minikube
$ rm -rf ~/.minikube ~/.kube
Go to https://www.virtualbox.org/wiki/Downloads and run the VirtualBox_Uninstall.tool script provided in the OS X host .dmg file
Disconnect from VPN
Restart laptop, making sure you are not reconnected to VPN
Install VirtualBox using VirtualBox.pkg from the same .dmg file as the previous step
$ brew cask install minikube
$ minikube start --alsologtostderr -v=9
Connect to VPN (if you wish)

Hope this helps. If anyone has suggestions for starting minikube while connected to a VPN like Cisco AnyConnect, please let me know! Thanks
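
The host-side part of those steps can be sketched as a small dry-run script. This is only an illustration of the commands above, not a minikube feature: the `reset_minikube` function name and the `CONFIRM` guard are invented for this sketch, the VirtualBox uninstall/reinstall and VPN steps are manual and deliberately left as comments, and by default the script only prints what it would run.

```shell
#!/usr/bin/env bash
# Sketch of the host-side cleanup steps from the comment above.
# Dry run by default: each command is echoed. Set CONFIRM=1 to
# actually execute them. (CONFIRM is an ad-hoc guard for this sketch.)
set -u

reset_minikube() {
  local run="echo"
  if [ "${CONFIRM:-0}" = "1" ]; then
    run=""
  fi
  $run minikube stop
  $run minikube delete
  $run brew cask uninstall minikube
  $run rm -rf "$HOME/.minikube" "$HOME/.kube"
  # Manual steps (not scripted): run VirtualBox_Uninstall.tool from the
  # .dmg, disconnect from the VPN, reboot, reinstall VirtualBox.pkg.
  $run brew cask install minikube
  $run minikube start --alsologtostderr -v=9
}

reset_minikube
```

Running it without `CONFIRM=1` just prints the command list, which is a safe way to review what a full reset would do.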

kmayura1 commented 5 years ago

(quoting @bzied321's reset steps above)

Thanks very much, you really solved my struggle working with a VM inside the Mac; this fixed my issue. By the way, what is --alsologtostderr -v=9?

bzied321 commented 5 years ago

@kmayura1 those flags do not fundamentally change the minikube start command.

--alsologtostderr : logs to standard error as well as to files
-v=9 : raises the log verbosity level (higher values produce more detailed output)

den-is commented 5 years ago

Any updates? Still getting the same with minikube 1.3.0 and now 1.3.1; macOS 10.14.6 (18G87), VirtualBox 6.0.10 r132072.

I0817 20:52:38.462621   58542 kubeadm.go:241] Configuring cluster permissions ...
I0817 20:53:38.498071   58542 utils.go:130] error: Temporary Error: creating clusterrolebinding: Post https://192.168.99.119:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings?timeout=1m0s: dial tcp 192.168.99.119:8443: i/o timeout - sleeping 500ms
I0817 20:53:38.998176   58542 utils.go:119] retry loop 1

and the loop never exits. Same result even after the full minikube reset on the host suggested above, and when using a manually downloaded minikube binary release.

philwhitwell commented 5 years ago

(quoting @bzied321's reset steps above)

This was helpful, but just to save others the pain: I did not need to uninstall VirtualBox or restart.

tstromberg commented 5 years ago

Heads up: after refactoring the code related to RBAC privilege elevation, I suspect that minikube v1.4.0-beta.2 addresses this bug.

g-boros commented 4 years ago

@tstromberg it looks like it's still having issues with v1.4.0-beta.2. I will check whether it's related to the Cisco VPN.

Ā± % minikube version
minikube version: v1.4.0-beta.2
commit: d1e468085d9af12e5d130a6cb3b2186a5db87a0e

Ā± % minikube start -v 8 --alsologtostderr
I0919 16:18:40.381055   47875 notify.go:125] Checking for updates...
I0919 16:18:41.350194   47875 start.go:235] hostinfo: {"hostname":"HW15399.local","uptime":1990944,"bootTime":1566911777,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"","platformVersion":"10.14","kernelVersion":"18.0.0","virtualizationSystem":"","virtualizationRole":"","hostid":"df83ad37-501e-3b4f-b1f0-04f3ac90fe35"}
W0919 16:18:41.350329   47875 start.go:243] gopshost.Virtualization returned error: not implemented yet
šŸ˜„  minikube v1.4.0-beta.2 on Darwin 10.14
...
...
I0919 16:23:05.570938   47875 utils.go:167] > Sep 19 14:21:12 minikube dockerd[2390]: time="2019-09-19T14:21:12.058423937Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/688886c21d14f92cef52f6f1e181969a12fbc5c947794f73df895938dabd3073/shim.sock" debug=false pid=4758
I0919 16:23:05.570965   47875 utils.go:167] > Sep 19 14:21:12 minikube dockerd[2390]: time="2019-09-19T14:21:12.211119163Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b0e7b774a511fc79518518d3593865dd9c337e70af97c98240cddc86324c72e3/shim.sock" debug=false pid=4823
I0919 16:23:05.574212   47875 logs.go:78] Gathering logs for container status ...
I0919 16:23:05.574223   47875 ssh_runner.go:138] Run with output: sudo crictl ps -a || sudo docker ps -a
I0919 16:23:05.590164   47875 utils.go:167] > CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
I0919 16:23:05.593224   47875 utils.go:167] > b0e7b774a511f       4689081edb103       About a minute ago   Running             storage-provisioner       0                   688886c21d14f
I0919 16:23:05.593294   47875 utils.go:167] > a2c19233bb576       bf261d1579144       About a minute ago   Running             coredns                   0                   05761bd696d3a
I0919 16:23:05.593327   47875 utils.go:167] > 2fe41b6e549c9       bf261d1579144       About a minute ago   Running             coredns                   0                   8abe6a4444dce
I0919 16:23:05.593354   47875 utils.go:167] > d0a44b700f493       63b54e18db505       About a minute ago   Running             kube-proxy                0                   5184a3279343f
I0919 16:23:05.593380   47875 utils.go:167] > 45de98519df8c       696ee147a625a       2 minutes ago        Running             kube-apiserver            0                   a989c8c3776b4
I0919 16:23:05.593433   47875 utils.go:167] > 8978e237f08e8       1020041ba01cb       2 minutes ago        Running             kube-scheduler            0                   13223da85098b
I0919 16:23:05.593710   47875 utils.go:167] > 4fe6e5ec1ccd3       b2756210eeabf       2 minutes ago        Running             etcd                      0                   b71eef9515abf
I0919 16:23:05.593761   47875 utils.go:167] > 74d24c3d2fc5c       163c0e28a1669       2 minutes ago        Running             kube-controller-manager   0                   d447eb648d91f
I0919 16:23:05.593795   47875 utils.go:167] > 12bd7de77e461       119701e77cbc4       2 minutes ago        Running             kube-addon-manager        0                   949a80c2bea70
W0919 16:23:05.595332   47875 exit.go:99] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.103:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.103:8443: i/o timeout

šŸ’£  Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.103:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.103:8443: i/o timeout

šŸ˜æ  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
šŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
tstromberg commented 4 years ago

@kmayura1 I believe this issue is now addressed by minikube v1.4, as it reliably passes the correct context to the elevate function. If you still see this issue with minikube v1.4 or higher, please reopen this issue by commenting with /reopen

Thank you for reporting this issue!

holleyism commented 4 years ago

Just downloaded 1.4 and ran into the same error.

I0920 09:29:07.234201   57754 utils.go:167] ! I0920 14:26:58.213281       1 serving.go:319] Generated self-signed cert in-memory
I0920 09:29:07.234254   57754 utils.go:167] ! W0920 14:27:01.065843       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I0920 09:29:07.234283   57754 utils.go:167] ! W0920 14:27:01.066159       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
I0920 09:29:07.234300   57754 utils.go:167] ! W0920 14:27:01.066312       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
I0920 09:29:07.234317   57754 utils.go:167] ! W0920 14:27:01.066360       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0920 09:29:07.234325   57754 utils.go:167] ! I0920 14:27:01.068591       1 server.go:143] Version: v1.16.0
I0920 09:29:07.234340   57754 utils.go:167] ! I0920 14:27:01.071885       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
I0920 09:29:07.234349   57754 utils.go:167] ! W0920 14:27:01.076001       1 authorization.go:47] Authorization is disabled
I0920 09:29:07.234359   57754 utils.go:167] ! W0920 14:27:01.076070       1 authentication.go:79] Authentication is disabled
I0920 09:29:07.234371   57754 utils.go:167] ! I0920 14:27:01.076120       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0920 09:29:07.234386   57754 utils.go:167] ! I0920 14:27:01.085625       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
I0920 09:29:07.234409   57754 utils.go:167] ! E0920 14:27:01.155629       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0920 09:29:07.234438   57754 utils.go:167] ! E0920 14:27:01.167245       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0920 09:29:07.234464   57754 utils.go:167] ! E0920 14:27:01.167604       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0920 09:29:07.234491   57754 utils.go:167] ! E0920 14:27:01.168007       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0920 09:29:07.234535   57754 utils.go:167] ! E0920 14:27:01.168116       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0920 09:29:07.234577   57754 utils.go:167] ! E0920 14:27:01.169817       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0920 09:29:07.234715   57754 utils.go:167] ! E0920 14:27:01.169886       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0920 09:29:07.235219   57754 utils.go:167] ! E0920 14:27:01.169915       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0920 09:29:07.235268   57754 utils.go:167] ! E0920 14:27:01.169958       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0920 09:29:07.235303   57754 utils.go:167] ! E0920 14:27:01.175752       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0920 09:29:07.235863   57754 utils.go:167] ! E0920 14:27:01.176775       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0920 09:29:07.235896   57754 utils.go:167] ! E0920 14:27:02.157606       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0920 09:29:07.235924   57754 utils.go:167] ! E0920 14:27:02.169052       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0920 09:29:07.235958   57754 utils.go:167] ! E0920 14:27:02.170430       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0920 09:29:07.235986   57754 utils.go:167] ! E0920 14:27:02.173179       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0920 09:29:07.236015   57754 utils.go:167] ! E0920 14:27:02.173245       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0920 09:29:07.236041   57754 utils.go:167] ! E0920 14:27:02.174140       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0920 09:29:07.236063   57754 utils.go:167] ! E0920 14:27:02.177499       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0920 09:29:07.236789   57754 utils.go:167] ! E0920 14:27:02.180380       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0920 09:29:07.236824   57754 utils.go:167] ! E0920 14:27:02.182731       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0920 09:29:07.236869   57754 utils.go:167] ! E0920 14:27:02.186453       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0920 09:29:07.236910   57754 utils.go:167] ! E0920 14:27:02.188026       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0920 09:29:07.236926   57754 utils.go:167] ! I0920 14:27:04.104163       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-scheduler...
I0920 09:29:07.236938   57754 utils.go:167] ! I0920 14:27:04.133869       1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
I0920 09:29:07.238590   57754 logs.go:78] Gathering logs for storage-provisioner [b53dfd77ed27] ...
I0920 09:29:07.238604   57754 ssh_runner.go:138] Run with output: docker logs --tail 200 b53dfd77ed27
W0920 09:29:07.287585   57754 exit.go:101] Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.111:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.111:8443: i/o timeout
*
X Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.111:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.111:8443: i/o timeout
*
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new/choose
* Problems detected in kube-addon-manager [a7de3d41615c]:
  - error: no objects passed to apply
  - serviceaccount/storage-provisioner unchanged

Edited: I went back and uninstalled/reinstalled VirtualBox and was able to start. I then ran minikube delete, connected my VPN, and minikube start worked as well.

k8s-ci-robot commented 4 years ago

@holleyism: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes/minikube/issues/4540#issuecomment-533579624):

> /reopen
> Just downloaded 1.4 and ran into the same error.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
den-is commented 4 years ago

Same here, damn it. I had no time to restart, reinstall, and do the rain dance, but for now simply disabling AnyConnect didn't help.

macos 10.14.6; minikube 1.4; virtualbox 6.0.12 r133076

sharifelgamal commented 4 years ago

/reopen

k8s-ci-robot commented 4 years ago

@sharifelgamal: Reopened this issue.

In response to [this](https://github.com/kubernetes/minikube/issues/4540#issuecomment-533630178):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
tstromberg commented 4 years ago

If you see this message with minikube v1.4.0 (or newer), please attach the output of:

den-is commented 4 years ago

UPD: the initial minikube setup worked after a computer restart, without reinstalling any components.

minikube on Mac installed via brew.

The stop/start cycle then worked fine with AnyConnect enabled.

tstromberg commented 4 years ago

Based on other issue reports, this issue can still happen in minikube v1.4, and appears to be mostly constrained to VirtualBox users. The gist of the issue seems to be a connectivity problem between the host and the VirtualBox VM (ip:8443), probably due to firewall or VPN interference.

If you are experiencing this, I would love to see the output of:

minikube ssh sudo pgrep apiserver
minikube status

As a workaround, I highly recommend using the minikube hyperkit driver:

minikube start --vm-driver=hyperkit

For more information, see:

https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/
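
To narrow down the host-to-VM connectivity problem described above, a quick probe of the apiserver port from the macOS host can help. This is a rough sketch using bash's `/dev/tcp` pseudo-device (bash-only, no coreutils needed); `check_api` is a helper name invented here, and the VM address is whatever `minikube ip` reports on your machine, not a fixed value.

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's built-in /dev/tcp redirection.
# Returns 0 if the connection succeeds, non-zero otherwise.
# Note: there is no timeout here; a silently filtered host will hang,
# which is itself a hint that a firewall/VPN is eating the traffic
# (a refused or reachable port answers immediately).
check_api() {
  local host=$1 port=$2
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

# Example usage (IP is whatever `minikube ip` prints on your machine):
#   check_api "$(minikube ip)" 8443 && echo reachable || echo unreachable
```

If the port is unreachable while `minikube ssh sudo pgrep apiserver` shows the apiserver running inside the VM, the problem is on the host side of the host-only network, which matches the VPN/firewall theory.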

SolrSeeker commented 4 years ago

minikube 1.4.0, kubectl 1.16.0, VirtualBox 6.0.12 r133076

minikube start --alsologtostderr -v=3

I0930 13:40:29.255898 49013 notify.go:125] Checking for updates... I0930 13:40:29.493139 49013 start.go:236] hostinfo: {"hostname":"6c4008accb14.ant.amazon.com","uptime":332360,"bootTime":1569543669,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"","platformVersion":"10.14.6","kernelVersion":"18.7.0","virtualizationSystem":"","virtualizationRole":"","hostid":"c41337a1-0ec3-3896-a954-a1f85e849d53"} W0930 13:40:29.493259 49013 start.go:244] gopshost.Virtualization returned error: not implemented yet * minikube v1.4.0 on Darwin 10.14.6 I0930 13:40:29.496354 49013 downloader.go:59] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.4.0.iso I0930 13:40:29.496482 49013 profile.go:66] Saving config to /Users/limartin/.minikube/profiles/minikube/config.json ... I0930 13:40:29.496634 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/profiles/minikube/config.json.tmp199841076" with filemode -rw------- I0930 13:40:29.497026 49013 cache_images.go:295] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 I0930 13:40:29.497034 49013 cache_images.go:295] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /Users/limartin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 I0930 13:40:29.497151 49013 cache_images.go:295] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 I0930 13:40:29.497175 49013 cache_images.go:295] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 I0930 13:40:29.497306 49013 cache_images.go:295] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /Users/limartin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 I0930 13:40:29.497364 49013 cache_images.go:295] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> 
/Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 I0930 13:40:29.497085 49013 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 I0930 13:40:29.497207 49013 cache_images.go:295] CacheImage: k8s.gcr.io/pause:3.1 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/pause_3.1 I0930 13:40:29.497025 49013 cache_images.go:295] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 I0930 13:40:29.497111 49013 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 I0930 13:40:29.497126 49013 cache_images.go:295] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 I0930 13:40:29.497712 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists I0930 13:40:29.497724 49013 cache_images.go:297] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 completed in 744.131Āµs I0930 13:40:29.497777 49013 cache_images.go:82] CacheImage k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded I0930 13:40:29.497805 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 exists I0930 13:40:29.497875 49013 cache_images.go:297] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /Users/limartin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 completed in 573.365Āµs I0930 13:40:29.497758 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists I0930 13:40:29.497909 49013 cache_images.go:297] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> 
/Users/limartin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 881.454Āµs I0930 13:40:29.497894 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists I0930 13:40:29.497923 49013 cache_images.go:82] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /Users/limartin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded I0930 13:40:29.497931 49013 cache_images.go:297] CacheImage: k8s.gcr.io/pause:3.1 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 743.607Āµs I0930 13:40:29.497958 49013 cache_images.go:82] CacheImage k8s.gcr.io/pause:3.1 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded I0930 13:40:29.497231 49013 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 I0930 13:40:29.498085 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists I0930 13:40:29.498175 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists I0930 13:40:29.498211 49013 cache_images.go:297] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 completed in 1.215776ms I0930 13:40:29.498155 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists I0930 13:40:29.498232 49013 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 1.157269ms I0930 13:40:29.498277 49013 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded I0930 13:40:29.497771 49013 
cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 exists I0930 13:40:29.498317 49013 cache_images.go:297] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 completed in 972.255Āµs I0930 13:40:29.497128 49013 cache_images.go:295] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 I0930 13:40:29.498352 49013 cache_images.go:82] CacheImage k8s.gcr.io/kube-addon-manager:v9.0.2 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 succeeded I0930 13:40:29.497901 49013 cache_images.go:82] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /Users/limartin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 succeeded I0930 13:40:29.498163 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists I0930 13:40:29.498368 49013 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 1.153338ms I0930 13:40:29.498390 49013 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded I0930 13:40:29.497765 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists I0930 13:40:29.498440 49013 cache_images.go:297] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 1.281659ms I0930 13:40:29.498472 49013 cache_images.go:82] CacheImage k8s.gcr.io/coredns:1.6.2 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded I0930 13:40:29.498436 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 
exists I0930 13:40:29.498499 49013 cache_images.go:297] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 completed in 1.400318ms I0930 13:40:29.498514 49013 cache_images.go:82] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded I0930 13:40:29.498173 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists I0930 13:40:29.498525 49013 cache_images.go:297] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 completed in 1.37741ms I0930 13:40:29.498540 49013 cache_images.go:82] CacheImage k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded I0930 13:40:29.498186 49013 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 1.081386ms I0930 13:40:29.498555 49013 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded I0930 13:40:29.498226 49013 cache_images.go:82] CacheImage k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded I0930 13:40:29.498169 49013 cache_images.go:301] /Users/limartin/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists I0930 13:40:29.498643 49013 cache_images.go:297] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 1.519789ms I0930 13:40:29.498652 49013 cache_images.go:82] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /Users/limartin/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded I0930 13:40:29.498672 49013 
cache_images.go:89] Successfully cached all images.
I0930 13:40:29.498778 49013 cluster.go:98] Skipping create...Using existing machine configuration
* Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
I0930 13:40:29.605784 49013 cluster.go:110] Machine state: Stopped
* Starting existing virtualbox VM for "minikube" ...
I0930 13:40:29.710698 49013 main.go:104] libmachine: Check network to re-create if needed...
I0930 13:40:30.563115 49013 main.go:104] libmachine: Waiting for an IP...
I0930 13:41:14.418513 49013 cluster.go:128] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
* Waiting for the host to be provisioned ...
I0930 13:41:14.418640 49013 cluster.go:149] configureHost: &{BaseDriver:0xc000176200 VBoxManager:0xc0000be188 HostInterfaces:0x3a39a60 b2dUpdater:0x3a39a60 sshKeyGenerator:0x3a39a60 diskCreator:0x3a39a60 logsReader:0x3a39a60 ipWaiter:0x3a39a60 randomInter:0xc0000be1a8 sleeper:0x3a39a60 CPU:2 Memory:2000 DiskSize:20000 NatNicType:virtio Boot2DockerURL:file:///Users/limartin/.minikube/cache/iso/minikube-v1.4.0.iso Boot2DockerImportVM: HostDNSResolver:true HostOnlyCIDR:192.168.99.1/24 HostOnlyNicType:virtio HostOnlyPromiscMode:deny UIType:headless HostOnlyNoDHCP:false NoShare:false DNSProxy:false NoVTXCheck:false ShareFolder:}
I0930 13:41:14.418731 49013 cluster.go:171] Configuring auth for driver virtualbox ...
I0930 13:41:14.418757 49013 main.go:104] libmachine: Waiting for SSH to be available...
I0930 13:41:14.500781 49013 main.go:104] libmachine: Detecting the provisioner...
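For readers skimming the log: the CacheImage entries above pair each image reference with a file under ~/.minikube/cache/images, with the tag separator ':' becoming '_' in the filename. A minimal sketch of that mapping, inferred from the paths in the log rather than taken from minikube's source:

```python
import os.path

def cache_path(image_ref: str, cache_root: str = "~/.minikube/cache/images") -> str:
    """Mimic the repo:tag -> cache-file layout seen in the log above,
    e.g. k8s.gcr.io/pause:3.1 -> .../k8s.gcr.io/pause_3.1."""
    repo, _, tag = image_ref.rpartition(":")
    return os.path.join(cache_root, f"{repo}_{tag}")

print(cache_path("k8s.gcr.io/pause:3.1"))
# prints ~/.minikube/cache/images/k8s.gcr.io/pause_3.1
```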
I0930 13:41:15.340007 49013 ssh_runner.go:170] Transferring 1042 bytes to /etc/docker/ca.pem I0930 13:41:15.340815 49013 ssh_runner.go:189] ca.pem: copied 1042 bytes I0930 13:41:15.350163 49013 ssh_runner.go:170] Transferring 1115 bytes to /etc/docker/server.pem I0930 13:41:15.351023 49013 ssh_runner.go:189] server.pem: copied 1115 bytes I0930 13:41:15.360049 49013 ssh_runner.go:170] Transferring 1675 bytes to /etc/docker/server-key.pem I0930 13:41:15.360874 49013 ssh_runner.go:189] server-key.pem: copied 1675 bytes I0930 13:41:15.463983 49013 main.go:104] libmachine: Setting Docker configuration on the remote daemon... I0930 13:41:16.321619 49013 cluster.go:202] guest clock: 1569876076.319658457 I0930 13:41:16.321646 49013 cluster.go:215] Guest: 2019-09-30 13:41:16.319658457 -0700 PDT Remote: 2019-09-30 13:41:16.246834 -0700 PDT m=+47.025349316 (delta=72.824457ms) I0930 13:41:16.321702 49013 cluster.go:186] guest clock delta is within tolerance: 72.824457ms I0930 13:41:16.321710 49013 cluster.go:151] configureHost completed within 1.903020806s I0930 13:41:16.683104 49013 profile.go:66] Saving config to /Users/limartin/.minikube/profiles/minikube/config.json ... I0930 13:41:16.683412 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/profiles/minikube/config.json.tmp118118534" with filemode -rw------- I0930 13:41:16.720683 49013 ssh_runner.go:102] SSH: systemctl is-active --quiet service containerd I0930 13:41:16.731931 49013 ssh_runner.go:102] SSH: systemctl is-active --quiet service crio I0930 13:41:16.742471 49013 ssh_runner.go:102] SSH: sudo systemctl stop crio I0930 13:41:16.794121 49013 ssh_runner.go:102] SSH: systemctl is-active --quiet service crio I0930 13:41:16.803895 49013 ssh_runner.go:102] SSH: sudo systemctl start docker I0930 13:41:17.620030 49013 ssh_runner.go:138] Run with output: docker version --format '{{.Server.Version}}' * Preparing Kubernetes v1.16.0 on Docker 18.09.9 ... 
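The guest-clock entries above compare guest and host timestamps and accept the ~72.8 ms delta as "within tolerance". A toy version of such a skew check (the 1-second tolerance is an assumed illustration value, not minikube's actual threshold):

```python
def clock_delta_ok(guest_ts: float, host_ts: float, tolerance_s: float = 1.0) -> bool:
    """Return True when the guest/host clock skew is within tolerance.
    tolerance_s is a made-up default for illustration only."""
    return abs(guest_ts - host_ts) <= tolerance_s

# The 72.8 ms delta reported in the log passes a 1 s tolerance:
print(clock_delta_ok(1569876076.3196, 1569876076.2468))
```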
I0930 13:41:17.980418 49013 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:} I0930 13:41:17.980676 49013 settings.go:132] Updating kubeconfig: /Users/limartin/.kube/config I0930 13:41:17.986648 49013 lock.go:41] attempting to write to file "/Users/limartin/.kube/config" with filemode -rw------- I0930 13:41:18.020425 49013 cache_images.go:95] LoadImages start: [k8s.gcr.io/kube-proxy:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 kubernetesui/dashboard:v2.0.0-beta4 k8s.gcr.io/kube-addon-manager:v9.0.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1] I0930 13:41:18.020816 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 I0930 13:41:18.020849 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/pause_3.1 I0930 13:41:18.020889 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 I0930 13:41:18.020841 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 I0930 13:41:18.020820 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 I0930 13:41:18.020867 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 I0930 13:41:18.020822 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 I0930 13:41:18.020880 49013 cache_images.go:210] Loading image from cache: 
/Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 I0930 13:41:18.020879 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 I0930 13:41:18.021025 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 I0930 13:41:18.021057 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 I0930 13:41:18.021067 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 I0930 13:41:18.021077 49013 cache_images.go:210] Loading image from cache: /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 I0930 13:41:18.024593 49013 ssh_runner.go:170] Transferring 30888448 bytes to /var/lib/minikube/images/kube-proxy_v1.16.0 I0930 13:41:18.024651 49013 ssh_runner.go:170] Transferring 11769344 bytes to /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13 I0930 13:41:18.024665 49013 ssh_runner.go:170] Transferring 50498560 bytes to /var/lib/minikube/images/kube-apiserver_v1.16.0 I0930 13:41:18.024593 49013 ssh_runner.go:170] Transferring 20683776 bytes to /var/lib/minikube/images/storage-provisioner_v1.8.1 I0930 13:41:18.024608 49013 ssh_runner.go:170] Transferring 14125568 bytes to /var/lib/minikube/images/coredns_1.6.2 I0930 13:41:18.024627 49013 ssh_runner.go:170] Transferring 318976 bytes to /var/lib/minikube/images/pause_3.1 I0930 13:41:18.024632 49013 ssh_runner.go:170] Transferring 85501440 bytes to /var/lib/minikube/images/etcd_3.3.15-0 I0930 13:41:18.024638 49013 ssh_runner.go:170] Transferring 30519808 bytes to /var/lib/minikube/images/kube-addon-manager_v9.0.2 I0930 13:41:18.024642 49013 ssh_runner.go:170] Transferring 14267904 bytes to /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13 I0930 13:41:18.024646 49013 ssh_runner.go:170] 
Transferring 31408640 bytes to /var/lib/minikube/images/kube-scheduler_v1.16.0 I0930 13:41:18.024647 49013 ssh_runner.go:170] Transferring 35855360 bytes to /var/lib/minikube/images/dashboard_v2.0.0-beta4 I0930 13:41:18.024652 49013 ssh_runner.go:170] Transferring 48862720 bytes to /var/lib/minikube/images/kube-controller-manager_v1.16.0 I0930 13:41:18.024660 49013 ssh_runner.go:170] Transferring 12207616 bytes to /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13 I0930 13:41:18.206825 49013 ssh_runner.go:189] pause_3.1: copied 318976 bytes I0930 13:41:18.377171 49013 docker.go:97] Loading image: /var/lib/minikube/images/pause_3.1 I0930 13:41:18.377195 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/pause_3.1 I0930 13:41:19.523000 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache I0930 13:41:25.553339 49013 ssh_runner.go:189] k8s-dns-dnsmasq-nanny-amd64_1.14.13: copied 11769344 bytes I0930 13:41:25.647759 49013 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13 I0930 13:41:25.647794 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13 I0930 13:41:25.710097 49013 ssh_runner.go:189] k8s-dns-sidecar-amd64_1.14.13: copied 12207616 bytes I0930 13:41:27.607218 49013 ssh_runner.go:189] coredns_1.6.2: copied 14125568 bytes I0930 13:41:27.688632 49013 ssh_runner.go:189] k8s-dns-kube-dns-amd64_1.14.13: copied 14267904 bytes I0930 13:41:27.793006 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache I0930 13:41:27.793064 49013 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13 I0930 13:41:27.793081 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13 I0930 13:41:29.897470 49013 cache_images.go:236] Successfully 
loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 from cache I0930 13:41:29.897517 49013 docker.go:97] Loading image: /var/lib/minikube/images/coredns_1.6.2 I0930 13:41:29.897528 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/coredns_1.6.2 I0930 13:41:30.781909 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 from cache I0930 13:41:30.781936 49013 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13 I0930 13:41:30.781945 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13 I0930 13:41:31.780436 49013 ssh_runner.go:189] storage-provisioner_v1.8.1: copied 20683776 bytes I0930 13:41:32.655816 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 from cache I0930 13:41:32.655892 49013 docker.go:97] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1 I0930 13:41:32.655921 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/storage-provisioner_v1.8.1 I0930 13:41:34.642251 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache I0930 13:41:36.628416 49013 ssh_runner.go:189] kube-addon-manager_v9.0.2: copied 30519808 bytes I0930 13:41:36.694062 49013 docker.go:97] Loading image: /var/lib/minikube/images/kube-addon-manager_v9.0.2 I0930 13:41:36.694091 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/kube-addon-manager_v9.0.2 I0930 13:41:36.709836 49013 ssh_runner.go:189] kube-proxy_v1.16.0: copied 30888448 bytes I0930 13:41:36.781129 49013 ssh_runner.go:189] kube-scheduler_v1.16.0: copied 31408640 bytes I0930 13:41:38.330288 49013 ssh_runner.go:189] dashboard_v2.0.0-beta4: copied 35855360 bytes I0930 13:41:39.331938 49013 
cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 from cache I0930 13:41:39.331980 49013 docker.go:97] Loading image: /var/lib/minikube/images/kube-proxy_v1.16.0 I0930 13:41:39.331998 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/kube-proxy_v1.16.0 I0930 13:41:41.451928 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 from cache I0930 13:41:41.451957 49013 docker.go:97] Loading image: /var/lib/minikube/images/kube-scheduler_v1.16.0 I0930 13:41:41.451968 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/kube-scheduler_v1.16.0 I0930 13:41:41.634540 49013 ssh_runner.go:189] kube-controller-manager_v1.16.0: copied 48862720 bytes I0930 13:41:41.775619 49013 ssh_runner.go:189] kube-apiserver_v1.16.0: copied 50498560 bytes I0930 13:41:44.217970 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 from cache I0930 13:41:44.218017 49013 docker.go:97] Loading image: /var/lib/minikube/images/dashboard_v2.0.0-beta4 I0930 13:41:44.218030 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/dashboard_v2.0.0-beta4 I0930 13:41:44.903861 49013 ssh_runner.go:189] etcd_3.3.15-0: copied 85501440 bytes I0930 13:41:45.045640 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 from cache I0930 13:41:45.045671 49013 docker.go:97] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.16.0 I0930 13:41:45.045689 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.16.0 I0930 13:41:45.401145 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 from cache I0930 13:41:45.401191 49013 docker.go:97] Loading 
image: /var/lib/minikube/images/kube-apiserver_v1.16.0
I0930 13:41:45.401205 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/kube-apiserver_v1.16.0
I0930 13:41:45.679965 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 from cache
I0930 13:41:45.679995 49013 docker.go:97] Loading image: /var/lib/minikube/images/etcd_3.3.15-0
I0930 13:41:45.680004 49013 ssh_runner.go:102] SSH: docker load -i /var/lib/minikube/images/etcd_3.3.15-0
I0930 13:41:45.993349 49013 cache_images.go:236] Successfully loaded image /Users/limartin/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 from cache
I0930 13:41:45.993405 49013 cache_images.go:119] Successfully loaded all cached images.
I0930 13:41:45.993425 49013 cache_images.go:120] LoadImages end
I0930 13:41:45.993678 49013 kubeadm.go:610] kubelet v1.16.0 config:
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests
[Install]
I0930 13:41:45.993692 49013 ssh_runner.go:102] SSH: pgrep kubelet && sudo systemctl stop kubelet
W0930 13:41:46.001343 49013 kubeadm.go:615] unable to stop kubelet: command failed: pgrep kubelet && sudo systemctl stop kubelet stdout: stderr: : Process exited with status 1
I0930 13:41:46.002456 49013 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet
I0930 13:41:46.002683 49013 cache_binaries.go:63] Not caching binary, using
https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm I0930 13:41:46.003379 49013 ssh_runner.go:170] Transferring 44244800 bytes to /var/lib/minikube/binaries/v1.16.0/kubeadm I0930 13:41:46.003385 49013 ssh_runner.go:170] Transferring 123120976 bytes to /var/lib/minikube/binaries/v1.16.0/kubelet I0930 13:41:50.814940 49013 ssh_runner.go:189] kubeadm: copied 44244800 bytes I0930 13:41:54.837243 49013 ssh_runner.go:189] kubelet: copied 123120976 bytes I0930 13:41:54.886539 49013 ssh_runner.go:170] Transferring 1146 bytes to /var/tmp/minikube/kubeadm.yaml I0930 13:41:54.887791 49013 ssh_runner.go:189] kubeadm.yaml: copied 1146 bytes I0930 13:41:54.907990 49013 ssh_runner.go:170] Transferring 498 bytes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf I0930 13:41:54.910392 49013 ssh_runner.go:189] 10-kubeadm.conf: copied 498 bytes I0930 13:41:54.929572 49013 ssh_runner.go:170] Transferring 349 bytes to /lib/systemd/system/kubelet.service I0930 13:41:54.930792 49013 ssh_runner.go:189] kubelet.service: copied 349 bytes I0930 13:41:54.944348 49013 ssh_runner.go:170] Transferring 1709 bytes to /etc/kubernetes/addons/storage-provisioner.yaml I0930 13:41:54.945477 49013 ssh_runner.go:189] storage-provisioner.yaml: copied 1709 bytes I0930 13:41:54.967033 49013 ssh_runner.go:170] Transferring 271 bytes to /etc/kubernetes/addons/storageclass.yaml I0930 13:41:54.968375 49013 ssh_runner.go:189] storageclass.yaml: copied 271 bytes I0930 13:41:54.980401 49013 ssh_runner.go:170] Transferring 1532 bytes to /etc/kubernetes/manifests/addon-manager.yaml.tmpl I0930 13:41:54.981361 49013 ssh_runner.go:189] addon-manager.yaml.tmpl: copied 1532 bytes I0930 13:41:54.993471 49013 ssh_runner.go:102] SSH: sudo systemctl daemon-reload && sudo systemctl start kubelet I0930 13:41:55.096036 49013 certs.go:71] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:} I0930 13:41:55.096918 49013 certs.go:79] Setting up /Users/limartin/.minikube 
for IP: 192.168.99.105 I0930 13:41:55.097570 49013 crypto.go:69] Generating cert /Users/limartin/.minikube/client.crt with IP's: [] I0930 13:41:55.102687 49013 crypto.go:157] Writing cert to /Users/limartin/.minikube/client.crt ... I0930 13:41:55.102716 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/client.crt" with filemode -rw-r--r-- I0930 13:41:55.103364 49013 crypto.go:165] Writing key to /Users/limartin/.minikube/client.key ... I0930 13:41:55.103393 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/client.key" with filemode -rw------- I0930 13:41:55.103938 49013 crypto.go:69] Generating cert /Users/limartin/.minikube/apiserver.crt with IP's: [192.168.99.105 10.96.0.1 10.0.0.1] I0930 13:41:55.110859 49013 crypto.go:157] Writing cert to /Users/limartin/.minikube/apiserver.crt ... I0930 13:41:55.110892 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/apiserver.crt" with filemode -rw-r--r-- I0930 13:41:55.111616 49013 crypto.go:165] Writing key to /Users/limartin/.minikube/apiserver.key ... I0930 13:41:55.111679 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/apiserver.key" with filemode -rw------- I0930 13:41:55.112164 49013 crypto.go:69] Generating cert /Users/limartin/.minikube/proxy-client.crt with IP's: [] I0930 13:41:55.117990 49013 crypto.go:157] Writing cert to /Users/limartin/.minikube/proxy-client.crt ... I0930 13:41:55.118018 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/proxy-client.crt" with filemode -rw-r--r-- I0930 13:41:55.118637 49013 crypto.go:165] Writing key to /Users/limartin/.minikube/proxy-client.key ... 
I0930 13:41:55.118655 49013 lock.go:41] attempting to write to file "/Users/limartin/.minikube/proxy-client.key" with filemode -rw------- I0930 13:41:55.121442 49013 ssh_runner.go:170] Transferring 1066 bytes to /var/lib/minikube/certs/ca.crt I0930 13:41:55.122514 49013 ssh_runner.go:189] ca.crt: copied 1066 bytes I0930 13:41:55.181715 49013 ssh_runner.go:170] Transferring 1679 bytes to /var/lib/minikube/certs/ca.key I0930 13:41:55.182872 49013 ssh_runner.go:189] ca.key: copied 1679 bytes I0930 13:41:55.200416 49013 ssh_runner.go:170] Transferring 1298 bytes to /var/lib/minikube/certs/apiserver.crt I0930 13:41:55.201267 49013 ssh_runner.go:189] apiserver.crt: copied 1298 bytes I0930 13:41:55.221547 49013 ssh_runner.go:170] Transferring 1675 bytes to /var/lib/minikube/certs/apiserver.key I0930 13:41:55.222600 49013 ssh_runner.go:189] apiserver.key: copied 1675 bytes I0930 13:41:55.257942 49013 ssh_runner.go:170] Transferring 1074 bytes to /var/lib/minikube/certs/proxy-client-ca.crt I0930 13:41:55.260680 49013 ssh_runner.go:189] proxy-client-ca.crt: copied 1074 bytes I0930 13:41:55.288665 49013 ssh_runner.go:170] Transferring 1679 bytes to /var/lib/minikube/certs/proxy-client-ca.key I0930 13:41:55.289915 49013 ssh_runner.go:189] proxy-client-ca.key: copied 1679 bytes I0930 13:41:55.316810 49013 ssh_runner.go:170] Transferring 1103 bytes to /var/lib/minikube/certs/proxy-client.crt I0930 13:41:55.317791 49013 ssh_runner.go:189] proxy-client.crt: copied 1103 bytes I0930 13:41:55.336269 49013 ssh_runner.go:170] Transferring 1679 bytes to /var/lib/minikube/certs/proxy-client.key I0930 13:41:55.337857 49013 ssh_runner.go:189] proxy-client.key: copied 1679 bytes I0930 13:41:55.361844 49013 ssh_runner.go:170] Transferring 1066 bytes to /usr/share/ca-certificates/minikubeCA.pem I0930 13:41:55.362636 49013 ssh_runner.go:189] minikubeCA.pem: copied 1066 bytes I0930 13:41:55.374732 49013 ssh_runner.go:170] Transferring 428 bytes to /var/lib/minikube/kubeconfig I0930 
13:41:55.375492 49013 ssh_runner.go:189] kubeconfig: copied 428 bytes
I0930 13:41:55.395997 49013 ssh_runner.go:102] SSH: which openssl
I0930 13:41:55.404809 49013 ssh_runner.go:102] SSH: sudo test -f '/etc/ssl/certs/minikubeCA.pem'
I0930 13:41:55.413445 49013 ssh_runner.go:102] SSH: sudo ln -s '/usr/share/ca-certificates/minikubeCA.pem' '/etc/ssl/certs/minikubeCA.pem'
I0930 13:41:55.424683 49013 ssh_runner.go:138] Run with output: openssl x509 -hash -noout -in '/usr/share/ca-certificates/minikubeCA.pem'
I0930 13:41:55.449415 49013 ssh_runner.go:102] SSH: sudo test -f '/etc/ssl/certs/b5213941.0'
I0930 13:41:55.456527 49013 ssh_runner.go:102] SSH: sudo ln -s '/etc/ssl/certs/minikubeCA.pem' '/etc/ssl/certs/b5213941.0'
* Relaunching Kubernetes using kubeadm ...
I0930 13:41:55.463984 49013 kubeadm.go:396] RestartCluster start
I0930 13:41:55.464006 49013 ssh_runner.go:102] SSH: sudo test -d /data/minikube
I0930 13:41:55.470987 49013 kubeadm.go:216] /data/minikube check failed, skipping compat symlinks: command failed: sudo test -d /data/minikube stdout: stderr: : Process exited with status 1
I0930 13:41:55.471037 49013 ssh_runner.go:102] SSH: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
I0930 13:41:55.559771 49013 ssh_runner.go:102] SSH: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
I0930 13:41:57.005078 49013 ssh_runner.go:102] SSH: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
I0930 13:41:57.113028 49013 ssh_runner.go:102] SSH: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
I0930 13:41:57.210705 49013 kubeadm.go:454] Waiting for apiserver process ...
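The block of pgrep probes that follows is a poll-until-ready loop, retrying roughly every 300 ms until the apiserver process appears. A generic sketch of the pattern (function name, timeout, and interval are illustrative, not minikube's):

```python
import time

def wait_for(check, timeout_s: float = 10.0, interval_s: float = 0.3) -> bool:
    """Poll check() until it returns True or timeout_s elapses,
    sleeping interval_s between attempts (about the spacing visible
    between the pgrep entries in the log)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```

Usage: `wait_for(lambda: api_server_running(), timeout_s=60)` would mirror the log's "Waiting for apiserver process" phase, where `api_server_running` stands in for the `sudo pgrep kube-apiserver` probe.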
I0930 13:41:57.210758 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:57.220061 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:57.522938 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:57.534560 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:57.824950 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:57.833589 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:58.120899 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:58.130429 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:58.424849 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:58.433516 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:58.725394 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:58.733835 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:59.020499 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:59.031596 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:59.321867 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:59.331266 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:59.623225 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:59.637068 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep 
kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:41:59.921987 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:41:59.932563 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:00.223647 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:00.235485 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:00.520327 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:00.533520 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:00.822367 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:00.833106 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:01.124546 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:01.138933 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:01.422024 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:01.432461 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:01.723154 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:01.739938 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:02.020359 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:02.031544 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:02.320996 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:02.332515 49013 
kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:02.624084 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:02.635977 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:02.922406 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:02.951704 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:03.223249 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:03.274255 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:03.523661 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:03.552727 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:03.824264 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:03.841880 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:04.123798 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver W0930 13:42:04.187035 49013 kubeadm.go:460] pgrep apiserver: command failed: sudo pgrep kube-apiserver stdout: stderr: : Process exited with status 1 I0930 13:42:04.425526 49013 ssh_runner.go:102] SSH: sudo pgrep kube-apiserver I0930 13:42:04.456767 49013 kubeadm.go:469] Waiting for apiserver to port healthy status ... 
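The healthz entries that follow show the apiserver being classified as Stopped whenever the /healthz endpoint cannot be dialed. A simplified sketch of that decision (the status strings mirror the log; the probe callable is a hypothetical stand-in for the HTTP GET):

```python
def apiserver_status(probe_healthz) -> str:
    """Return 'Running' only when /healthz answers 'ok'; treat any
    connection failure (like the 'operation timed out' dials below)
    as Stopped. probe_healthz is any callable returning the body."""
    try:
        body = probe_healthz()
    except OSError:
        return "Stopped"
    return "Running" if body == "ok" else "Error"

def unreachable():
    # Simulates the dial failure seen in the log.
    raise OSError("dial tcp 192.168.99.105:8443: connect: operation timed out")

print(apiserver_status(unreachable))  # prints Stopped
```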
I0930 13:43:20.251202 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:43:20.251253 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:44:36.133861 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:44:36.133905 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:45:51.668817 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:45:51.668864 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:47:07.367041 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:47:07.367080 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:48:23.242904 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:48:23.242943 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:49:38.685779 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:49:38.685809 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:50:54.420178 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out
I0930 13:50:54.420220 49013 kubeadm.go:472] apiserver status: Stopped, err:
I0930 13:52:10.249722 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial
tcp 192.168.99.105:8443: connect: operation timed out I0930 13:52:10.249767 49013 kubeadm.go:472] apiserver status: Stopped, err: I0930 13:53:25.817622 49013 kubeadm.go:156] https://192.168.99.105:8443/healthz response: Get https://192.168.99.105:8443/healthz: dial tcp 192.168.99.105:8443: connect: operation timed out I0930 13:53:25.817655 49013 kubeadm.go:472] apiserver status: Stopped, err: I0930 13:53:25.817667 49013 kubeadm.go:451] duration metric: took 11m28.586318512s to wait for apiserver status ... I0930 13:53:25.817685 49013 kubeadm.go:399] RestartCluster took 11m30.332981521s I0930 13:53:25.817933 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_kube-apiserver" --format="{{.ID}}" I0930 13:53:25.853421 49013 logs.go:160] 2 containers: [e7d00f428177 52fa0bbf5a56] I0930 13:53:25.853463 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_coredns" --format="{{.ID}}" I0930 13:53:25.891895 49013 logs.go:160] 4 containers: [e1471674fb70 f1af9d83870f aa81ae3e0852 ce4876af51d3] I0930 13:53:25.891938 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_kube-scheduler" --format="{{.ID}}" I0930 13:53:25.934012 49013 logs.go:160] 2 containers: [505490bf5336 23db14f6e60f] I0930 13:53:25.934050 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_kube-proxy" --format="{{.ID}}" I0930 13:53:26.010337 49013 logs.go:160] 2 containers: [8da07a52955c b230d84a1477] I0930 13:53:26.010398 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_kube-addon-manager" --format="{{.ID}}" I0930 13:53:26.059066 49013 logs.go:160] 2 containers: [9f6e60fde582 8550cdbffaf0] I0930 13:53:26.059120 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_kubernetes-dashboard" --format="{{.ID}}" I0930 13:53:26.105286 49013 logs.go:160] 0 containers: [] W0930 13:53:26.105315 49013 logs.go:162] No container was found matching "kubernetes-dashboard" I0930 13:53:26.105329 
49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_storage-provisioner" --format="{{.ID}}" I0930 13:53:26.158251 49013 logs.go:160] 2 containers: [e9b0a3d4a08b b5a0a32a549f] I0930 13:53:26.158312 49013 ssh_runner.go:138] Run with output: docker ps -a --filter="name=k8s_kube-controller-manager" --format="{{.ID}}" I0930 13:53:26.243295 49013 logs.go:160] 1 containers: [19b0fda75153] I0930 13:53:26.243330 49013 logs.go:78] Gathering logs for kube-scheduler [23db14f6e60f] ... I0930 13:53:26.243339 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 23db14f6e60f I0930 13:53:26.304530 49013 logs.go:78] Gathering logs for kube-proxy [b230d84a1477] ... I0930 13:53:26.304560 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 b230d84a1477 I0930 13:53:26.357585 49013 logs.go:78] Gathering logs for container status ... I0930 13:53:26.357611 49013 ssh_runner.go:138] Run with output: sudo crictl ps -a || sudo docker ps -a I0930 13:53:26.381771 49013 logs.go:78] Gathering logs for coredns [e1471674fb70] ... I0930 13:53:26.381798 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 e1471674fb70 I0930 13:53:26.451038 49013 logs.go:78] Gathering logs for coredns [aa81ae3e0852] ... I0930 13:53:26.451060 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 aa81ae3e0852 I0930 13:53:26.494445 49013 logs.go:78] Gathering logs for kube-scheduler [505490bf5336] ... I0930 13:53:26.494483 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 505490bf5336 I0930 13:53:26.534128 49013 logs.go:78] Gathering logs for kube-addon-manager [9f6e60fde582] ... I0930 13:53:26.534159 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 9f6e60fde582 I0930 13:53:26.586022 49013 logs.go:78] Gathering logs for storage-provisioner [e9b0a3d4a08b] ... 
I0930 13:53:26.586052 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 e9b0a3d4a08b I0930 13:53:26.632849 49013 logs.go:78] Gathering logs for storage-provisioner [b5a0a32a549f] ... I0930 13:53:26.632888 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 b5a0a32a549f I0930 13:53:26.668653 49013 logs.go:78] Gathering logs for kube-controller-manager [19b0fda75153] ... I0930 13:53:26.668680 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 19b0fda75153 I0930 13:53:26.718508 49013 logs.go:78] Gathering logs for kubelet ... I0930 13:53:26.718538 49013 ssh_runner.go:138] Run with output: journalctl -u kubelet -n 200 I0930 13:53:26.745884 49013 logs.go:78] Gathering logs for dmesg ... I0930 13:53:26.745925 49013 ssh_runner.go:138] Run with output: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 200 I0930 13:53:26.758413 49013 logs.go:78] Gathering logs for kube-apiserver [e7d00f428177] ... I0930 13:53:26.758441 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 e7d00f428177 I0930 13:53:26.805368 49013 logs.go:78] Gathering logs for coredns [f1af9d83870f] ... I0930 13:53:26.805391 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 f1af9d83870f I0930 13:53:26.839665 49013 logs.go:78] Gathering logs for Docker ... I0930 13:53:26.839684 49013 ssh_runner.go:138] Run with output: sudo journalctl -u docker -n 200 I0930 13:53:26.856221 49013 logs.go:78] Gathering logs for kube-apiserver [52fa0bbf5a56] ... I0930 13:53:26.856250 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 52fa0bbf5a56 I0930 13:53:26.908693 49013 logs.go:78] Gathering logs for coredns [ce4876af51d3] ... I0930 13:53:26.908716 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 ce4876af51d3 I0930 13:53:26.965182 49013 logs.go:78] Gathering logs for kube-proxy [8da07a52955c] ... 
I0930 13:53:26.965216 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 8da07a52955c
I0930 13:53:27.022408 49013 logs.go:78] Gathering logs for kube-addon-manager [8550cdbffaf0] ...
I0930 13:53:27.022427 49013 ssh_runner.go:138] Run with output: docker logs --tail 200 8550cdbffaf0
W0930 13:53:27.089743 49013 logs.go:90] Found kube-addon-manager [8550cdbffaf0] problem: error: no objects passed to apply
W0930 13:53:27.090763 49013 logs.go:90] Found kube-addon-manager [8550cdbffaf0] problem: error: no objects passed to apply
W0930 13:53:27.093253 49013 exit.go:101] Error restarting cluster: waiting for apiserver: timed out waiting for the condition
* X Error restarting cluster: waiting for apiserver: timed out waiting for the condition
*
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new/choose
* Problems detected in kube-addon-manager [8550cdbffaf0]:
  - error: no objects passed to apply
  - error: no objects passed to apply
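The log above shows minikube repeatedly probing `https://192.168.99.105:8443/healthz` for about 11 minutes before giving up with "timed out waiting for the condition". The same check can be reproduced by hand to see whether the apiserver ever comes up. Below is a minimal sketch of such a wait loop; `wait_for_healthz` is a hypothetical helper, not minikube's actual code, and it assumes `curl` is installed (the apiserver serves a plain `ok` on `/healthz`):

```shell
#!/usr/bin/env bash
# wait_for_healthz URL [MAX_ATTEMPTS] [DELAY_SECONDS]
# Polls an apiserver /healthz endpoint, printing "apiserver status: Running"
# on the first "ok" response, or "apiserver status: Stopped" once the
# attempts are exhausted (mirroring the status strings in the log above).
wait_for_healthz() {
  local url="$1" attempts="${2:-10}" delay="${3:-5}" i
  for ((i = 1; i <= attempts; i++)); do
    # -k: the apiserver certificate is self-signed; --max-time bounds each probe
    if [ "$(curl -ks --max-time 3 "${url}")" = "ok" ]; then
      echo "apiserver status: Running"
      return 0
    fi
    sleep "${delay}"
  done
  echo "apiserver status: Stopped"
  return 1
}

# Example (VM IP taken from the log above):
#   wait_for_healthz "https://192.168.99.105:8443/healthz" 12 5
```

If this never reports Running while `minikube ssh` can see the apiserver container in `docker ps`, the problem is host-to-VM networking rather than the apiserver itself.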

minikube status

host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.105
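With the host and kubelet Running but the apiserver reported Stopped, a quick first check is whether the host can open a TCP connection to the VM's 8443 port at all; with the VirtualBox driver this frequently fails because of host-only network problems. The sketch below uses a hypothetical `probe_port` helper built only on bash's `/dev/tcp` pseudo-device plus `timeout` from GNU coreutils (on stock macOS, install coreutils via Homebrew, where the command may be named `gtimeout`):

```shell
#!/usr/bin/env bash
# probe_port HOST PORT
# Reports whether a TCP connection to HOST:PORT can be opened within 2s.
# Uses bash's /dev/tcp redirection, so no extra network tools are needed.
probe_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} on ${host} is reachable"
  else
    echo "port ${port} on ${host} is NOT reachable"
  fi
}

# Example (VM IP from the status output above; use `minikube ip` on your machine):
#   probe_port 192.168.99.105 8443
```

If the port is not reachable, inspecting `VBoxManage list hostonlyifs` and recreating the host-only interface is a reasonable next step before blaming Kubernetes components.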

minikube logs

* ==> Docker <== * -- Logs begin at Mon 2019-09-30 20:41:12 UTC, end at Mon 2019-09-30 20:56:24 UTC. -- * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.962298487Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 }]" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.962356520Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.962533976Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000136660, CONNECTING" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.962656734Z" level=info msg="parsed scheme: \"unix\"" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.962685062Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.963011247Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000136660, READY" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.963341826Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 }]" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.963551114Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.963613870Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000136960, CONNECTING" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.963794882Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000136960, READY" module=grpc * Sep 30 20:41:16 minikube dockerd[2346]: time="2019-09-30T20:41:16.993003622Z" level=info msg="[graphdriver] using prior storage driver: 
overlay2" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.077988326Z" level=info msg="Graph migration to content-addressability took 0.00 seconds" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.078301486Z" level=warning msg="Your kernel does not support cgroup blkio weight" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.078338324Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.078347562Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.078354992Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.078364402Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.078372178Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.080469744Z" level=info msg="Loading containers: start." * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.442089282Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.555926628Z" level=info msg="Loading containers: done." 
* Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.574573066Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.575370616Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.592929068Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9 * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.594055928Z" level=info msg="Daemon has completed initialization" * Sep 30 20:41:17 minikube systemd[1]: Started Docker Application Container Engine. * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.620761420Z" level=info msg="API listen on /var/run/docker.sock" * Sep 30 20:41:17 minikube dockerd[2346]: time="2019-09-30T20:41:17.620875188Z" level=info msg="API listen on [::]:2376" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.265505754Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.267732198Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.292354814Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.293685808Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Sep 30 20:41:56 minikube dockerd[2346]: 
time="2019-09-30T20:41:56.636089184Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.647897740Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.657373612Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Sep 30 20:41:56 minikube dockerd[2346]: time="2019-09-30T20:41:56.658939097Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Sep 30 20:42:01 minikube dockerd[2346]: time="2019-09-30T20:42:01.780404622Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n" * Sep 30 20:42:01 minikube dockerd[2346]: time="2019-09-30T20:42:01.783057963Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH" * Sep 30 20:42:02 minikube dockerd[2346]: time="2019-09-30T20:42:02.612900405Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1371dc5e9230040da7fc53e83cbedeafce364f804c8c284102e42c13819ac5cf/shim.sock" debug=false pid=2936 * Sep 30 20:42:02 minikube dockerd[2346]: time="2019-09-30T20:42:02.635136705Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/316b7b3770e0fc391b12aad769c82acd59fbf9016d434472d6e9e3a94d5c0fb8/shim.sock" debug=false pid=2947 * Sep 30 20:42:02 minikube dockerd[2346]: time="2019-09-30T20:42:02.636027693Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/835de6a43147ec849e8d19b5bf16ed6e5935a1b1a1c98676f8d067dba0826f07/shim.sock" debug=false pid=2950 * Sep 30 20:42:02 minikube dockerd[2346]: time="2019-09-30T20:42:02.636345421Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/182883413ffe16aef72cd3c85c96c50f00f8ab3f167e9885ee31c4c0cd799baf/shim.sock" debug=false pid=2951 * Sep 30 20:42:02 minikube dockerd[2346]: time="2019-09-30T20:42:02.650013839Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ffc8c5e644558262b2a002caa768aeb84bd3e99aef33d810da7289413ce01aca/shim.sock" debug=false pid=2978 * Sep 30 20:42:03 minikube dockerd[2346]: time="2019-09-30T20:42:03.073826357Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e7d00f4281776ce72c3b8036446fd57d027a28cd39cb08b987cfcdda17d93ef6/shim.sock" debug=false pid=3157 * Sep 30 20:42:03 minikube dockerd[2346]: time="2019-09-30T20:42:03.316477125Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/19b0fda75153ad6b8f3caf1f0db34f39f12a14feb032c1ee54a8a186959ea218/shim.sock" debug=false pid=3186 * Sep 30 20:42:03 minikube dockerd[2346]: time="2019-09-30T20:42:03.408057442Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/505490bf5336a47e82d39fc046742b0593373904b008d70ee158656e70954825/shim.sock" debug=false pid=3205 * Sep 30 20:42:04 minikube dockerd[2346]: time="2019-09-30T20:42:04.940349154Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9f6e60fde58263202cdbe5ae4b8dd487cb35f2aeb53b9ffe51b8143c5607d086/shim.sock" debug=false pid=3308 * Sep 30 20:42:05 minikube dockerd[2346]: time="2019-09-30T20:42:05.224963322Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8651d568c230e1344d14c5c9a9336fb94cbff8e198f5430e2af546538611cab7/shim.sock" debug=false pid=3334 * Sep 30 20:42:14 minikube dockerd[2346]: time="2019-09-30T20:42:14.763461748Z" level=info 
msg="shim containerd-shim started" address="/containerd-shim/moby/c8e42b515e5a0647911e671281a17b15b754cfdaf66080ee4f81c5154caf2e79/shim.sock" debug=false pid=3474 * Sep 30 20:42:14 minikube dockerd[2346]: time="2019-09-30T20:42:14.766754843Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e1012b76faa93c9c0997002287421e82df16bc004eba9a4845df22654bfde4a0/shim.sock" debug=false pid=3478 * Sep 30 20:42:15 minikube dockerd[2346]: time="2019-09-30T20:42:15.048154119Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/321a789d155e401b663854a931d8f91dc8846881661ad0f64566ede44633be68/shim.sock" debug=false pid=3563 * Sep 30 20:42:15 minikube dockerd[2346]: time="2019-09-30T20:42:15.192108259Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b5a0a32a549f1612824d681eb1c144748b2cec01295a7d4ff3819fd2171e2999/shim.sock" debug=false pid=3595 * Sep 30 20:42:15 minikube dockerd[2346]: time="2019-09-30T20:42:15.394021661Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f3947b7d6fea3b434fe42bc41ca3882db0eb094f728c7e46669dd3a4573ab114/shim.sock" debug=false pid=3632 * Sep 30 20:42:15 minikube dockerd[2346]: time="2019-09-30T20:42:15.941304278Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f1af9d83870ff70cb9e6d502d2c881474d9cb854c6fd9da5f055e850152d0d97/shim.sock" debug=false pid=3748 * Sep 30 20:42:16 minikube dockerd[2346]: time="2019-09-30T20:42:16.051396682Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8da07a52955c6a610044def737bdeff2048a2f7f0759c1776ce03e42d98b2c8d/shim.sock" debug=false pid=3766 * Sep 30 20:42:16 minikube dockerd[2346]: time="2019-09-30T20:42:16.105182858Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e1471674fb70e160a24167c95943816e58da16262202d6ae0517155085e0b33b/shim.sock" debug=false pid=3788 * Sep 30 20:42:46 minikube dockerd[2346]: 
time="2019-09-30T20:42:46.804642683Z" level=info msg="shim reaped" id=b5a0a32a549f1612824d681eb1c144748b2cec01295a7d4ff3819fd2171e2999 * Sep 30 20:42:46 minikube dockerd[2346]: time="2019-09-30T20:42:46.821065755Z" level=warning msg="b5a0a32a549f1612824d681eb1c144748b2cec01295a7d4ff3819fd2171e2999 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b5a0a32a549f1612824d681eb1c144748b2cec01295a7d4ff3819fd2171e2999/mounts/shm, flags: 0x2: no such file or directory" * Sep 30 20:42:46 minikube dockerd[2346]: time="2019-09-30T20:42:46.821422480Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Sep 30 20:43:01 minikube dockerd[2346]: time="2019-09-30T20:43:01.860611550Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e9b0a3d4a08b1ab32b4204c18a21281e937c0ba1cc4424af43010be41606afdc/shim.sock" debug=false pid=4348 * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * e9b0a3d4a08b1 4689081edb103 13 minutes ago Running storage-provisioner 2 e1012b76faa93 * e1471674fb70e bf261d1579144 14 minutes ago Running coredns 1 321a789d155e4 * 8da07a52955c6 c21b0c7400f98 14 minutes ago Running kube-proxy 1 f3947b7d6fea3 * f1af9d83870ff bf261d1579144 14 minutes ago Running coredns 1 c8e42b515e5a0 * b5a0a32a549f1 4689081edb103 14 minutes ago Exited storage-provisioner 1 e1012b76faa93 * 8651d568c230e b2756210eeabf 14 minutes ago Running etcd 1 835de6a43147e * 9f6e60fde5826 bd12a212f9dcb 14 minutes ago Running kube-addon-manager 1 182883413ffe1 * 19b0fda75153a 06a629a7e51cd 14 minutes ago Running kube-controller-manager 0 ffc8c5e644558 * 505490bf5336a 301ddc62b80b1 14 minutes ago Running kube-scheduler 1 316b7b3770e0f * e7d00f4281776 b305571ca60a5 14 minutes ago Running kube-apiserver 1 1371dc5e92300 * aa81ae3e08523 bf261d1579144 3 days ago Exited coredns 0 46c9425381dcb * ce4876af51d3b bf261d1579144 3 days ago Exited coredns 0 5b25fbb010062 * 
b230d84a1477a c21b0c7400f98 3 days ago Exited kube-proxy 0 951d4df0467f6 * 9ef7d1ffdf8b5 b2756210eeabf 3 days ago Exited etcd 0 953be7d908dd4 * 8550cdbffaf06 bd12a212f9dcb 3 days ago Exited kube-addon-manager 0 943f612ab2dbd * 52fa0bbf5a563 b305571ca60a5 3 days ago Exited kube-apiserver 0 49a81bd5654cc * 23db14f6e60f8 301ddc62b80b1 3 days ago Exited kube-scheduler 0 bc03db694480f * * ==> coredns [aa81ae3e0852] <== * E0927 19:16:57.729049 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.729861 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.730613 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * .:53 * 2019-09-27T19:16:32.774Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-09-27T19:16:32.774Z [INFO] CoreDNS-1.6.2 * 2019-09-27T19:16:32.774Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-09-27T19:16:33.066Z [INFO] plugin/ready: Still 
waiting on: "kubernetes" * 2019-09-27T19:16:43.066Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-09-27T19:16:53.066Z [INFO] plugin/ready: Still waiting on: "kubernetes" * I0927 19:16:57.729025 1 trace.go:82] Trace[2033494324]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-27 19:16:27.728567402 +0000 UTC m=+0.838121069) (total time: 30.000429112s): * Trace[2033494324]: [30.000429112s] [30.000429112s] END * E0927 19:16:57.729049 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.729049 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.729049 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0927 19:16:57.729848 1 trace.go:82] Trace[904008340]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-27 19:16:27.729439282 +0000 UTC m=+0.838992975) (total time: 30.000390156s): * Trace[904008340]: [30.000390156s] [30.000390156s] END * E0927 19:16:57.729861 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.729861 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get 
https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.729861 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0927 19:16:57.730592 1 trace.go:82] Trace[1019103084]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-27 19:16:27.72812244 +0000 UTC m=+0.837676125) (total time: 30.002452856s): * Trace[1019103084]: [30.002452856s] [30.002452856s] END * E0927 19:16:57.730613 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.730613 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.730613 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * [INFO] SIGTERM: Shutting down servers then terminating * * ==> coredns [ce4876af51d3] <== * .:53 * 2019-09-27T19:16:32.608Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-09-27T19:16:32.608Z [INFO] CoreDNS-1.6.2 * 2019-09-27T19:16:32.608Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-09-27T19:16:33.674Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-09-27T19:16:43.673Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-09-27T19:16:53.674Z 
[INFO] plugin/ready: Still waiting on: "kubernetes" * I0927 19:16:57.720706 1 trace.go:82] Trace[1492173723]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-27 19:16:27.621933278 +0000 UTC m=+0.769731837) (total time: 30.098743451s): * Trace[1492173723]: [30.098743451s] [30.098743451s] END * E0927 19:16:57.720746 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.721337 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.720746 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.724144 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.720746 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0927 19:16:57.721193 1 trace.go:82] Trace[849749213]: "Reflector 
pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-27 19:16:27.72027355 +0000 UTC m=+0.868072125) (total time: 30.000824676s): * Trace[849749213]: [30.000824676s] [30.000824676s] END * E0927 19:16:57.721337 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.721337 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.721337 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0927 19:16:57.724114 1 trace.go:82] Trace[1094139474]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-27 19:16:27.723289706 +0000 UTC m=+0.871088297) (total time: 30.000801989s): * Trace[1094139474]: [30.000801989s] [30.000801989s] END * E0927 19:16:57.724144 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.724144 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0927 19:16:57.724144 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get 
https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * [INFO] SIGTERM: Shutting down servers then terminating * * ==> coredns [e1471674fb70] <== * 2019-09-30T20:42:17.543Z [INFO] plugin/ready: Still waiting on: "kubernetes" * .:53 * 2019-09-30T20:42:22.193Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-09-30T20:42:22.193Z [INFO] CoreDNS-1.6.2 * 2019-09-30T20:42:22.193Z [INFO] linux/amd64, go1.12.8, 795a3eb * CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-09-30T20:42:27.543Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-09-30T20:42:37.545Z [INFO] plugin/ready: Still waiting on: "kubernetes" * I0930 20:42:47.191700 1 trace.go:82] Trace[1144398645]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-30 20:42:17.190055886 +0000 UTC m=+0.355135084) (total time: 30.001557822s): * Trace[1144398645]: [30.001557822s] [30.001557822s] END * E0930 20:42:47.191746 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.191746 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.191746 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0930 20:42:47.192941 1 trace.go:82] Trace[196215325]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-30 20:42:17.192107554 +0000 UTC m=+0.357186752) (total time: 30.000803498s): * Trace[196215325]: [30.000803498s] [30.000803498s] END * E0930 20:42:47.193073 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.193073 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.193073 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0930 20:42:47.194781 1 trace.go:82] Trace[1243867389]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-30 20:42:17.192696796 +0000 UTC m=+0.357776002) (total time: 30.002056092s): * Trace[1243867389]: [30.002056092s] [30.002056092s] END * E0930 20:42:47.194899 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.194899 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.194899 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * 2019-09-30T20:42:47.547Z [INFO] plugin/ready: Still waiting on: "kubernetes" * * ==> coredns [f1af9d83870f] <== * .:53 * 2019-09-30T20:42:22.187Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 * 2019-09-30T20:42:22.187Z [INFO] CoreDNS-1.6.2 * 2019-09-30T20:42:22.187Z [INFO] linux/amd64, go1.12.8, 795a3eb 
* CoreDNS-1.6.2 * linux/amd64, go1.12.8, 795a3eb * 2019-09-30T20:42:25.746Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-09-30T20:42:35.746Z [INFO] plugin/ready: Still waiting on: "kubernetes" * 2019-09-30T20:42:45.747Z [INFO] plugin/ready: Still waiting on: "kubernetes" * I0930 20:42:47.168457 1 trace.go:82] Trace[459677920]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-30 20:42:17.167711958 +0000 UTC m=+0.332810911) (total time: 30.000623612s): * Trace[459677920]: [30.000623612s] [30.000623612s] END * E0930 20:42:47.168790 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.168790 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.168790 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0930 20:42:47.169787 1 trace.go:82] Trace[1459928052]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-30 20:42:17.16754084 +0000 UTC m=+0.332639821) (total time: 30.00221886s): * Trace[1459928052]: [30.00221886s] [30.00221886s] END * E0930 20:42:47.169919 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.169919 1 reflector.go:126] 
pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.169919 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0930 20:42:47.172543 1 trace.go:82] Trace[1983967578]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2019-09-30 20:42:17.165270508 +0000 UTC m=+0.330369493) (total time: 30.007241836s): * Trace[1983967578]: [30.007241836s] [30.007241836s] END * E0930 20:42:47.173007 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.173007 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.173007 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.168790 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.169919 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get 
https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * E0930 20:42:47.173007 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * * ==> dmesg <== * [ +5.001626] hpet1: lost 318 rtc interrupts * [ +5.001821] hpet1: lost 318 rtc interrupts * [ +5.001557] hpet1: lost 318 rtc interrupts * [ +5.003003] hpet1: lost 318 rtc interrupts * [ +4.999781] hpet1: lost 318 rtc interrupts * [ +5.003184] hpet1: lost 319 rtc interrupts * [ +5.000771] hpet1: lost 318 rtc interrupts * [Sep30 20:52] hpet1: lost 318 rtc interrupts * [ +5.002039] hpet1: lost 318 rtc interrupts * [ +5.001445] hpet1: lost 319 rtc interrupts * [ +5.001683] hpet1: lost 319 rtc interrupts * [ +5.002979] hpet1: lost 318 rtc interrupts * [ +5.000628] hpet1: lost 319 rtc interrupts * [ +5.001353] hpet1: lost 318 rtc interrupts * [ +5.001557] hpet1: lost 319 rtc interrupts * [ +5.001212] hpet1: lost 318 rtc interrupts * [ +5.001771] hpet1: lost 318 rtc interrupts * [ +5.000802] hpet1: lost 318 rtc interrupts * [ +5.001615] hpet1: lost 318 rtc interrupts * [Sep30 20:53] hpet1: lost 318 rtc interrupts * [ +5.003061] hpet1: lost 318 rtc interrupts * [ +5.000424] hpet1: lost 318 rtc interrupts * [ +5.000873] hpet1: lost 319 rtc interrupts * [ +5.000665] hpet1: lost 318 rtc interrupts * [ +5.001197] hpet1: lost 318 rtc interrupts * [ +5.002017] hpet1: lost 320 rtc interrupts * [ +5.002157] hpet1: lost 318 rtc interrupts * [ +5.002040] hpet1: lost 318 rtc interrupts * [ +5.001711] hpet1: lost 318 rtc interrupts * [ +5.001198] hpet1: lost 318 rtc interrupts * [ +5.001669] hpet1: lost 318 rtc interrupts * [Sep30 20:54] hpet1: lost 318 rtc interrupts * [ +5.001843] hpet1: lost 319 rtc interrupts * [ +5.002795] hpet1: lost 319 rtc interrupts * [ +5.000894] hpet1: lost 318 rtc interrupts * [ 
+5.003219] hpet1: lost 318 rtc interrupts * [ +5.000285] hpet1: lost 319 rtc interrupts * [ +5.000959] hpet1: lost 318 rtc interrupts * [ +5.000622] hpet1: lost 319 rtc interrupts * [ +5.001933] hpet1: lost 318 rtc interrupts * [ +5.001131] hpet1: lost 318 rtc interrupts * [ +5.001694] hpet1: lost 318 rtc interrupts * [ +5.001426] hpet1: lost 319 rtc interrupts * [Sep30 20:55] hpet1: lost 319 rtc interrupts * [ +5.001246] hpet1: lost 318 rtc interrupts * [ +5.001841] hpet1: lost 318 rtc interrupts * [ +5.001047] hpet1: lost 318 rtc interrupts * [ +5.004504] hpet1: lost 318 rtc interrupts * [ +4.997965] hpet1: lost 319 rtc interrupts * [ +5.003827] hpet1: lost 318 rtc interrupts * [ +5.003248] hpet1: lost 318 rtc interrupts * [ +4.999020] hpet1: lost 319 rtc interrupts * [ +5.003822] hpet1: lost 320 rtc interrupts * [ +4.999049] hpet1: lost 318 rtc interrupts * [ +5.001319] hpet1: lost 320 rtc interrupts * [Sep30 20:56] hpet1: lost 318 rtc interrupts * [ +5.002639] hpet1: lost 318 rtc interrupts * [ +4.999757] hpet1: lost 319 rtc interrupts * [ +5.001430] hpet1: lost 318 rtc interrupts * [ +5.001202] hpet1: lost 318 rtc interrupts * * ==> kernel <== * 20:56:24 up 15 min, 0 users, load average: 0.05, 0.17, 0.24 * Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux * PRETTY_NAME="Buildroot 2018.05.3" * * ==> kube-addon-manager [8550cdbffaf0] <== * error: no objects passed to apply * error: no objects passed to apply * error when retrieving current configuration of: * Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} * from server for:
"/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:39:53+00:00 == * INFO: Leader election disabled. * error when retrieving current configuration of: * Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["containers":[map["imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]] "command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1"]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * The connection to the server localhost:8443 was refused - did you specify the right host or port? * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:39:56+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:39:58+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:40:01+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:40:02+00:00 == * INFO: Leader election disabled. * The connection to the server localhost:8443 was refused - did you specify the right host or port? * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:40:06+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:40:11+00:00 == * INFO: Leader election disabled. * error: no objects passed to apply * error when retrieving current configuration of: * Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error when retrieving current configuration of: * Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["spec":map["containers":[map["command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner"
"volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]]]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]] "apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error: no objects passed to apply * error when retrieving current configuration of: * Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "apiVersion":"v1" "kind":"ServiceAccount"]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * error when retrieving current configuration of: * Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" * Name: "storage-provisioner", Namespace: "kube-system" * Object: &{map["apiVersion":"v1" "kind":"Pod" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["containers":[map["name":"storage-provisioner" "volumeMounts":[map["mountPath":"/tmp" "name":"tmp"]] 
"command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent"]] "hostNetwork":%!q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]]]} * from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused * * ==> kube-addon-manager [9f6e60fde582] <== * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:55:44+00:00 == * error: no objects passed to apply * error: no objects passed to apply * error: no objects passed to apply * error: no objects passed to apply * INFO: == Reconciling with deprecated label == * error: no objects passed to apply * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:55:45+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:55:49+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * error: no objects passed to apply * error: no objects passed to apply * error: no objects passed to apply * error: no objects passed to apply * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:55:50+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:55:54+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:55:55+00:00 == * INFO: Leader election disabled.
* INFO: == Kubernetes addon ensure completed at 2019-09-30T20:55:59+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:56:00+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:56:04+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:56:05+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:56:08+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:56:10+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:56:13+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:56:15+00:00 == * INFO: Leader election disabled. * INFO: == Kubernetes addon ensure completed at 2019-09-30T20:56:18+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * serviceaccount/storage-provisioner unchanged * INFO: == Kubernetes addon reconcile completed at 2019-09-30T20:56:20+00:00 == * INFO: Leader election disabled. 
* INFO: == Kubernetes addon ensure completed at 2019-09-30T20:56:23+00:00 == * INFO: == Reconciling with deprecated label == * INFO: == Reconciling with addon-manager label == * * ==> kube-apiserver [52fa0bbf5a56] <== * I0927 23:27:39.505042 1 trace.go:116] Trace[443464257]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-09-27 23:27:38.912112042 +0000 UTC m=+14305.654812783) (total time: 591.612298ms): * Trace[443464257]: [591.568238ms] [591.281134ms] Transaction committed * I0927 23:27:39.505629 1 trace.go:116] Trace[1799649205]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-09-27 23:27:38.911955924 +0000 UTC m=+14305.654656663) (total time: 593.635342ms): * Trace[1799649205]: [593.540076ms] [593.43291ms] Object stored in database * I0927 23:30:34.896230 1 trace.go:116] Trace[1534572770]: "Get" url:/api/v1/namespaces/default/services/kubernetes (started: 2019-09-27 23:30:34.343751149 +0000 UTC m=+14481.086451874) (total time: 552.440172ms): * Trace[1534572770]: [552.370005ms] [552.328088ms] About to write a response * I0927 23:31:21.960219 1 trace.go:116] Trace[1856262886]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-09-27 23:31:21.442240865 +0000 UTC m=+14528.184941605) (total time: 517.846141ms): * Trace[1856262886]: [516.275795ms] [515.984189ms] Transaction committed * I0927 23:31:21.960531 1 trace.go:116] Trace[1183103709]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler (started: 2019-09-27 23:31:21.441946957 +0000 UTC m=+14528.184647730) (total time: 518.553298ms): * Trace[1183103709]: [518.382234ms] [518.237579ms] Object stored in database * I0927 23:31:23.340612 1 trace.go:116] Trace[1107627069]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-09-27 23:31:22.735778454 +0000 UTC m=+14529.478479189) (total time: 604.791192ms): * Trace[1107627069]: [604.74627ms] [604.488816ms] Transaction committed * I0927 23:31:23.340739 1 trace.go:116] 
Trace[1279275175]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager (started: 2019-09-27 23:31:22.735611102 +0000 UTC m=+14529.478311883) (total time: 605.10866ms): * Trace[1279275175]: [605.043299ms] [604.925697ms] Object stored in database * E0927 23:34:55.533859 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0927 23:47:41.614877 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0927 23:56:57.672804 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0928 00:06:21.706058 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 16:38:41.830795 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 16:56:45.938366 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 17:03:14.949725 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 17:18:43.020012 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 17:33:59.088531 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * I0930 17:43:43.803303 1 trace.go:116] Trace[353462436]: "List etcd3" key:/cronjobs,resourceVersion:,limit:500,continue: (started: 2019-09-30 17:43:43.299356289 +0000 UTC m=+21342.996023429) (total time: 503.913405ms): * Trace[353462436]: [503.913405ms] [503.913405ms] END * I0930 17:43:43.803534 1 trace.go:116] Trace[798069583]: "List" url:/apis/batch/v1beta1/cronjobs (started: 2019-09-30 17:43:43.299189419 +0000 UTC m=+21342.995856537) (total time: 504.329484ms): * Trace[798069583]: [504.185075ms] [504.024541ms] Listing from storage done * E0930 17:48:42.236600 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 17:58:14.344086 1 
watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * I0930 18:11:47.337424 1 trace.go:116] Trace[812487653]: "List etcd3" key:/configmaps/kube-system,resourceVersion:,limit:0,continue: (started: 2019-09-30 18:11:46.829888721 +0000 UTC m=+23026.526555838) (total time: 507.507936ms): * Trace[812487653]: [507.507936ms] [507.507936ms] END * I0930 18:11:47.337501 1 trace.go:116] Trace[1615916677]: "List" url:/api/v1/namespaces/kube-system/configmaps (started: 2019-09-30 18:11:46.829807812 +0000 UTC m=+23026.526474921) (total time: 507.683446ms): * Trace[1615916677]: [507.638005ms] [507.564382ms] Listing from storage done * E0930 18:12:28.437780 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 18:18:25.504256 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 18:35:51.559807 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 18:44:10.580085 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 18:58:29.652142 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 19:11:19.696361 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 19:20:24.791108 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 19:33:29.861679 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 19:45:29.942212 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 19:55:11.030722 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 20:07:09.141615 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * E0930 20:14:18.172351 1 watcher.go:214] watch chan error: etcdserver: 
mvcc: required revision has been compacted * E0930 20:29:32.207455 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted * I0930 20:40:01.804284 1 controller.go:87] Shutting down OpenAPI AggregationController * I0930 20:40:01.831078 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller * I0930 20:40:01.830025 1 controller.go:122] Shutting down OpenAPI controller * I0930 20:40:01.810778 1 controller.go:182] Shutting down kubernetes service endpoint reconciler * I0930 20:40:01.840522 1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController * I0930 20:40:01.834648 1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController * I0930 20:40:01.840952 1 naming_controller.go:299] Shutting down NamingConditionController * I0930 20:40:01.840556 1 establishing_controller.go:84] Shutting down EstablishingController * I0930 20:40:01.841329 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController * I0930 20:40:01.841347 1 crd_finalizer.go:286] Shutting down CRDFinalizer * I0930 20:40:01.841361 1 available_controller.go:395] Shutting down AvailableConditionController * I0930 20:40:01.841374 1 autoregister_controller.go:164] Shutting down autoregister controller * I0930 20:40:01.842625 1 customresource_discovery_controller.go:219] Shutting down DiscoveryController * E0930 20:40:01.857315 1 controller.go:185] no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service * * ==> kube-apiserver [e7d00f428177] <== * I0930 20:42:09.888792 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.898683 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.898989 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.907902 1 client.go:361] parsed scheme: "endpoint" * I0930 
20:42:09.908285 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.918259 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.918541 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.927688 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.927732 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.935143 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.936049 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.945047 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.945300 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.958367 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.958665 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.968322 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.968546 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:09.978286 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:09.978359 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * W0930 20:42:10.183327 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources. * W0930 20:42:10.213367 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources. * W0930 20:42:10.233499 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. * W0930 20:42:10.238159 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. 
* W0930 20:42:10.252455 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources. * W0930 20:42:10.283225 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources. * W0930 20:42:10.283576 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources. * I0930 20:42:10.300652 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass. * I0930 20:42:10.300765 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota. * I0930 20:42:10.302897 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:10.303001 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:10.315112 1 client.go:361] parsed scheme: "endpoint" * I0930 20:42:10.315338 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0930 20:42:12.680339 1 secure_serving.go:123] Serving securely on [::]:8443 * I0930 20:42:12.683459 1 available_controller.go:383] Starting AvailableConditionController * I0930 20:42:12.683577 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller * I0930 20:42:12.683755 1 controller.go:81] Starting OpenAPI AggregationController * I0930 20:42:12.684207 1 apiservice_controller.go:94] Starting APIServiceRegistrationController * I0930 20:42:12.684236 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller * I0930 20:42:12.684827 1 crd_finalizer.go:274] Starting CRDFinalizer * I0930 20:42:12.691691 1 crdregistration_controller.go:111] Starting 
crd-autoregister controller
* I0930 20:42:12.691726 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
* I0930 20:42:12.691746 1 autoregister_controller.go:140] Starting autoregister controller
* I0930 20:42:12.691750 1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0930 20:42:12.693467 1 controller.go:85] Starting OpenAPI controller
* I0930 20:42:12.693493 1 customresource_discovery_controller.go:208] Starting DiscoveryController
* I0930 20:42:12.693602 1 naming_controller.go:288] Starting NamingConditionController
* I0930 20:42:12.693625 1 establishing_controller.go:73] Starting EstablishingController
* I0930 20:42:12.693763 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
* I0930 20:42:12.693800 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
* E0930 20:42:12.810267 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.105, ResourceVersion: 0, AdditionalErrorMsg:
* I0930 20:42:12.811826 1 shared_informer.go:204] Caches are synced for crd-autoregister
* I0930 20:42:12.897369 1 cache.go:39] Caches are synced for autoregister controller
* I0930 20:42:12.898757 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0930 20:42:12.991645 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0930 20:42:13.026702 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0930 20:42:13.681165 1 controller.go:107] OpenAPI AggregationController: Processing item
* I0930 20:42:13.681418 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0930 20:42:13.681471 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0930 20:42:13.694223 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
* I0930 20:42:31.049937 1 controller.go:606] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [19b0fda75153] <==
* I0930 20:42:33.596349 1 tokencleaner.go:117] Starting token cleaner controller
* I0930 20:42:33.596589 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
* I0930 20:42:33.697538 1 shared_informer.go:204] Caches are synced for token_cleaner
* I0930 20:42:33.742558 1 node_lifecycle_controller.go:77] Sending events to api server
* E0930 20:42:33.743275 1 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
* W0930 20:42:33.743295 1 controllermanager.go:526] Skipping "cloud-node-lifecycle"
* I0930 20:42:33.895477 1 controllermanager.go:534] Started "podgc"
* I0930 20:42:33.895737 1 gc_controller.go:75] Starting GC controller
* I0930 20:42:33.895814 1 shared_informer.go:197] Waiting for caches to sync for GC
* I0930 20:42:34.499914 1 controllermanager.go:534] Started "horizontalpodautoscaling"
* I0930 20:42:34.500009 1 horizontal.go:156] Starting HPA controller
* I0930 20:42:34.500452 1 shared_informer.go:197] Waiting for caches to sync for HPA
* I0930 20:42:34.647502 1 controllermanager.go:534] Started "cronjob"
* I0930 20:42:34.647691 1 cronjob_controller.go:96] Starting CronJob Manager
* I0930 20:42:34.788866 1 controllermanager.go:534] Started "replicationcontroller"
* I0930 20:42:34.788925 1 replica_set.go:182] Starting replicationcontroller controller
* I0930 20:42:34.788931 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
* I0930 20:42:34.939315 1 controllermanager.go:534] Started "bootstrapsigner"
* I0930 20:42:34.939536 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
* E0930 20:42:35.087936 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
* W0930 20:42:35.087967 1 controllermanager.go:526] Skipping "service"
* I0930 20:42:35.237818 1 controllermanager.go:534] Started "csrcleaner"
* I0930 20:42:35.239149 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
* I0930 20:42:35.239972 1 shared_informer.go:197] Waiting for caches to sync for resource quota
* I0930 20:42:35.243407 1 cleaner.go:81] Starting CSR cleaner controller
* W0930 20:42:35.311760 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0930 20:42:35.322329 1 shared_informer.go:204] Caches are synced for namespace
* I0930 20:42:35.324028 1 shared_informer.go:204] Caches are synced for service account
* I0930 20:42:35.339550 1 shared_informer.go:204] Caches are synced for TTL
* I0930 20:42:35.341141 1 shared_informer.go:204] Caches are synced for bootstrap_signer
* I0930 20:42:35.342964 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
* I0930 20:42:35.345732 1 shared_informer.go:204] Caches are synced for certificate
* I0930 20:42:35.345971 1 shared_informer.go:204] Caches are synced for daemon sets
* I0930 20:42:35.356128 1 shared_informer.go:204] Caches are synced for endpoint
* I0930 20:42:35.363133 1 shared_informer.go:204] Caches are synced for PVC protection
* I0930 20:42:35.390001 1 shared_informer.go:204] Caches are synced for certificate
* I0930 20:42:35.395967 1 shared_informer.go:204] Caches are synced for GC
* I0930 20:42:35.407875 1 shared_informer.go:204] Caches are synced for stateful set
* I0930 20:42:35.418671 1 shared_informer.go:204] Caches are synced for taint
* I0930 20:42:35.419074 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
* W0930 20:42:35.419376 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0930 20:42:35.419420 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
* I0930 20:42:35.419942 1 taint_manager.go:186] Starting NoExecuteTaintManager
* I0930 20:42:35.420049 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"0f74c9c3-8e8f-40e5-9bb0-c585eee695fd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0930 20:42:35.509745 1 shared_informer.go:204] Caches are synced for ReplicaSet
* I0930 20:42:35.534363 1 shared_informer.go:204] Caches are synced for deployment
* I0930 20:42:35.600625 1 shared_informer.go:204] Caches are synced for HPA
* I0930 20:42:35.649359 1 shared_informer.go:204] Caches are synced for disruption
* I0930 20:42:35.649437 1 disruption.go:341] Sending events to api server.
* I0930 20:42:35.667698 1 shared_informer.go:204] Caches are synced for job
* I0930 20:42:35.689165 1 shared_informer.go:204] Caches are synced for ReplicationController
* I0930 20:42:35.744595 1 shared_informer.go:204] Caches are synced for resource quota
* I0930 20:42:35.797756 1 shared_informer.go:204] Caches are synced for resource quota
* I0930 20:42:35.834231 1 shared_informer.go:204] Caches are synced for attach detach
* I0930 20:42:35.839924 1 shared_informer.go:204] Caches are synced for garbage collector
* I0930 20:42:35.842156 1 shared_informer.go:204] Caches are synced for expand
* I0930 20:42:35.870429 1 shared_informer.go:204] Caches are synced for PV protection
* I0930 20:42:35.890450 1 shared_informer.go:204] Caches are synced for persistent volume
* I0930 20:42:35.900161 1 shared_informer.go:204] Caches are synced for garbage collector
* I0930 20:42:35.900315 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [8da07a52955c] <==
* W0930 20:42:17.613629 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
* I0930 20:42:17.652320 1 node.go:135] Successfully retrieved node IP: 10.0.2.15
* I0930 20:42:17.652382 1 server_others.go:149] Using iptables Proxier.
* W0930 20:42:17.654395 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
* I0930 20:42:17.655050 1 server.go:529] Version: v1.16.0
* I0930 20:42:17.657396 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0930 20:42:17.657446 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0930 20:42:17.657938 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0930 20:42:17.665770 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0930 20:42:17.665837 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0930 20:42:17.666507 1 config.go:313] Starting service config controller
* I0930 20:42:17.668236 1 shared_informer.go:197] Waiting for caches to sync for service config
* I0930 20:42:17.691728 1 config.go:131] Starting endpoints config controller
* I0930 20:42:17.691775 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
* I0930 20:42:17.769369 1 shared_informer.go:204] Caches are synced for service config
* I0930 20:42:17.792539 1 shared_informer.go:204] Caches are synced for endpoints config
*
* ==> kube-proxy [b230d84a1477] <==
* W0927 19:16:28.224101 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
* I0927 19:16:28.374476 1 node.go:135] Successfully retrieved node IP: 10.0.2.15
* I0927 19:16:28.374528 1 server_others.go:149] Using iptables Proxier.
* W0927 19:16:28.376174 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
* I0927 19:16:28.376865 1 server.go:529] Version: v1.16.0
* I0927 19:16:28.384717 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0927 19:16:28.384864 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0927 19:16:28.385397 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0927 19:16:28.393832 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0927 19:16:28.394600 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0927 19:16:28.395298 1 config.go:313] Starting service config controller
* I0927 19:16:28.403643 1 shared_informer.go:197] Waiting for caches to sync for service config
* I0927 19:16:28.404443 1 config.go:131] Starting endpoints config controller
* I0927 19:16:28.404558 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
* I0927 19:16:28.522986 1 shared_informer.go:204] Caches are synced for service config
* I0927 19:16:28.523190 1 shared_informer.go:204] Caches are synced for endpoints config
*
* ==> kube-scheduler [23db14f6e60f] <==
* I0927 19:16:02.307894 1 serving.go:319] Generated self-signed cert in-memory
* W0927 19:16:12.535744 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0927 19:16:12.538199 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0927 19:16:12.543849 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0927 19:16:12.543885 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0927 19:16:12.559405 1 server.go:143] Version: v1.16.0
* I0927 19:16:12.559505 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
* W0927 19:16:12.566224 1 authorization.go:47] Authorization is disabled
* W0927 19:16:12.566514 1 authentication.go:79] Authentication is disabled
* I0927 19:16:12.566775 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0927 19:16:12.577248 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
* E0927 19:16:12.741171 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0927 19:16:12.747945 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E0927 19:16:12.748308 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E0927 19:16:12.748670 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0927 19:16:12.748940 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0927 19:16:12.749340 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0927 19:16:12.750528 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0927 19:16:12.774076 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0927 19:16:12.775140 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0927 19:16:12.775183 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E0927 19:16:12.775199 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0927 19:16:13.794290 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0927 19:16:13.794866 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0927 19:16:13.794926 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0927 19:16:13.795005 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E0927 19:16:13.799965 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0927 19:16:13.799965 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0927 19:16:13.800094 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0927 19:16:13.800463 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E0927 19:16:13.802188 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0927 19:16:13.802277 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E0927 19:16:13.802334 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* I0927 19:16:15.797142 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
* I0927 19:16:15.818131 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
* E0930 20:40:01.951878 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=8911&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.951892 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.951999 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952023 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=348&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952055 1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=8909&timeoutSeconds=401&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952080 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m15s&timeoutSeconds=375&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952091 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952098 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=1&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952102 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=213&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.952133 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=39045&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E0930 20:40:01.956033 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kube-scheduler [505490bf5336] <==
* I0930 20:42:06.093589 1 serving.go:319] Generated self-signed cert in-memory
* W0930 20:42:12.886003 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0930 20:42:12.886048 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
* W0930 20:42:12.886064 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
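The handful of genuine error entries in a dump like this are easy to lose among the repetitive startup noise. A small sketch for pulling them out (the `sift_errors` helper name is mine, not part of minikube; in practice you would feed it a dump saved with `minikube logs`):

```shell
# Hypothetical helper for sifting a klog-style dump: keep error-level
# entries (lines whose level token starts with E, e.g. E0930) and drop
# the repeated 'node "minikube" not found' noise seen in the kubelet logs.
sift_errors() {
  grep -E '(^|[* ])E[0-9]{4} ' | grep -v 'node "minikube" not found' | sort -u
}

# Demo on a few lines shaped like the ones in this issue;
# for a live cluster: minikube logs | sift_errors
sift_errors <<'EOF'
* I0930 20:42:12.897369 1 cache.go:39] Caches are synced for autoregister controller
* E0930 20:42:12.810267 1 controller.go:154] Unable to remove old endpoints from kubernetes service
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.990227 2823 kubelet.go:2267] node "minikube" not found
EOF
```

This leaves only the `Unable to remove old endpoints` line, which is the kind of entry worth reporting.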
* W0930 20:42:12.886074 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0930 20:42:12.923402 1 server.go:143] Version: v1.16.0
* I0930 20:42:12.924809 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
* W0930 20:42:12.946730 1 authorization.go:47] Authorization is disabled
* W0930 20:42:12.946956 1 authentication.go:79] Authentication is disabled
* I0930 20:42:12.947121 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0930 20:42:12.948217 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
* I0930 20:42:14.061770 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
* I0930 20:42:31.053240 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
*
* ==> kubelet <==
* -- Logs begin at Mon 2019-09-30 20:41:12 UTC, end at Mon 2019-09-30 20:56:24 UTC. --
* Sep 30 20:42:10 minikube kubelet[2823]: E0930 20:42:10.984565 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.084990 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.185609 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.286054 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.386996 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.487491 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.588432 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.688817 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.789148 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.858831 2823 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.890026 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:11 minikube kubelet[2823]: E0930 20:42:11.990227 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.091546 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.191924 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.292785 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.393655 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.494773 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.596610 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.697434 2823 kubelet.go:2267] node "minikube" not found
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.801714 2823 kubelet.go:1647] Trying to delete pod kube-controller-manager-minikube_kube-system c2877368-15b7-4d5a-8832-f09b55c467a3
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.817059 2823 reflector.go:123] object-"kube-system"/"storage-provisioner-token-46k2q": Failed to list *v1.Secret: secrets "storage-provisioner-token-46k2q" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.817595 2823 reflector.go:123] object-"kube-system"/"coredns-token-wkwtb": Failed to list *v1.Secret: secrets "coredns-token-wkwtb" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.819636 2823 reflector.go:123] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.819935 2823 reflector.go:123] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Sep 30 20:42:12 minikube kubelet[2823]: E0930 20:42:12.820036 2823 reflector.go:123] object-"kube-system"/"kube-proxy-token-nx7w8": Failed to list *v1.Secret: secrets "kube-proxy-token-nx7w8" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887288 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nx7w8" (UniqueName: "kubernetes.io/secret/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-kube-proxy-token-nx7w8") pod "kube-proxy-x2cft" (UID: "744bdd73-1e6c-4a6d-a0d8-c4fed74b613c")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887410 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-xtables-lock") pod "kube-proxy-x2cft" (UID: "744bdd73-1e6c-4a6d-a0d8-c4fed74b613c")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887531 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-46k2q" (UniqueName: "kubernetes.io/secret/23ef95a5-0099-4914-8d34-ee73337efe3d-storage-provisioner-token-46k2q") pod "storage-provisioner" (UID: "23ef95a5-0099-4914-8d34-ee73337efe3d")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887563 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6a72a4a0-3572-4387-a81a-3dcd1b3a4999-config-volume") pod "coredns-5644d7b6d9-nsvld" (UID: "6a72a4a0-3572-4387-a81a-3dcd1b3a4999")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887724 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-kube-proxy") pod "kube-proxy-x2cft" (UID: "744bdd73-1e6c-4a6d-a0d8-c4fed74b613c")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887773 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-lib-modules") pod "kube-proxy-x2cft" (UID: "744bdd73-1e6c-4a6d-a0d8-c4fed74b613c")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887808 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/23ef95a5-0099-4914-8d34-ee73337efe3d-tmp") pod "storage-provisioner" (UID: "23ef95a5-0099-4914-8d34-ee73337efe3d")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887839 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-wkwtb" (UniqueName: "kubernetes.io/secret/83331081-1587-4a20-8eb8-3c6159ab8db2-coredns-token-wkwtb") pod "coredns-5644d7b6d9-drt9w" (UID: "83331081-1587-4a20-8eb8-3c6159ab8db2")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887941 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-wkwtb" (UniqueName: "kubernetes.io/secret/6a72a4a0-3572-4387-a81a-3dcd1b3a4999-coredns-token-wkwtb") pod "coredns-5644d7b6d9-nsvld" (UID: "6a72a4a0-3572-4387-a81a-3dcd1b3a4999")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.887985 2823 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/83331081-1587-4a20-8eb8-3c6159ab8db2-config-volume") pod "coredns-5644d7b6d9-drt9w" (UID: "83331081-1587-4a20-8eb8-3c6159ab8db2")
* Sep 30 20:42:12 minikube kubelet[2823]: I0930 20:42:12.888000 2823 reconciler.go:154] Reconciler: start to sync state
* Sep 30 20:42:13 minikube kubelet[2823]: W0930 20:42:13.167360 2823 kubelet.go:1651] Deleted mirror pod "kube-controller-manager-minikube_kube-system(c2877368-15b7-4d5a-8832-f09b55c467a3)" because it is outdated
* Sep 30 20:42:13 minikube kubelet[2823]: I0930 20:42:13.196175 2823 kubelet_node_status.go:114] Node minikube was previously registered
* Sep 30 20:42:13 minikube kubelet[2823]: I0930 20:42:13.196446 2823 kubelet_node_status.go:75] Successfully registered node minikube
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.000814 2823 configmap.go:203] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.000932 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-kube-proxy\" (\"744bdd73-1e6c-4a6d-a0d8-c4fed74b613c\")" failed. No retries permitted until 2019-09-30 20:42:14.50089565 +0000 UTC m=+18.420379387 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-kube-proxy\") pod \"kube-proxy-x2cft\" (UID: \"744bdd73-1e6c-4a6d-a0d8-c4fed74b613c\") : failed to sync configmap cache: timed out waiting for the condition"
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.000969 2823 secret.go:198] Couldn't get secret kube-system/storage-provisioner-token-46k2q: failed to sync secret cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.001024 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/23ef95a5-0099-4914-8d34-ee73337efe3d-storage-provisioner-token-46k2q\" (\"23ef95a5-0099-4914-8d34-ee73337efe3d\")" failed. No retries permitted until 2019-09-30 20:42:14.500999657 +0000 UTC m=+18.420483391 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-46k2q\" (UniqueName: \"kubernetes.io/secret/23ef95a5-0099-4914-8d34-ee73337efe3d-storage-provisioner-token-46k2q\") pod \"storage-provisioner\" (UID: \"23ef95a5-0099-4914-8d34-ee73337efe3d\") : failed to sync secret cache: timed out waiting for the condition"
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.001044 2823 configmap.go:203] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.001088 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/6a72a4a0-3572-4387-a81a-3dcd1b3a4999-config-volume\" (\"6a72a4a0-3572-4387-a81a-3dcd1b3a4999\")" failed. No retries permitted until 2019-09-30 20:42:14.501066226 +0000 UTC m=+18.420549960 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a72a4a0-3572-4387-a81a-3dcd1b3a4999-config-volume\") pod \"coredns-5644d7b6d9-nsvld\" (UID: \"6a72a4a0-3572-4387-a81a-3dcd1b3a4999\") : failed to sync configmap cache: timed out waiting for the condition"
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.001906 2823 secret.go:198] Couldn't get secret kube-system/coredns-token-wkwtb: failed to sync secret cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.001976 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/6a72a4a0-3572-4387-a81a-3dcd1b3a4999-coredns-token-wkwtb\" (\"6a72a4a0-3572-4387-a81a-3dcd1b3a4999\")" failed. No retries permitted until 2019-09-30 20:42:14.501948615 +0000 UTC m=+18.421432341 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-wkwtb\" (UniqueName: \"kubernetes.io/secret/6a72a4a0-3572-4387-a81a-3dcd1b3a4999-coredns-token-wkwtb\") pod \"coredns-5644d7b6d9-nsvld\" (UID: \"6a72a4a0-3572-4387-a81a-3dcd1b3a4999\") : failed to sync secret cache: timed out waiting for the condition"
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.002002 2823 secret.go:198] Couldn't get secret kube-system/coredns-token-wkwtb: failed to sync secret cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.002046 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/83331081-1587-4a20-8eb8-3c6159ab8db2-coredns-token-wkwtb\" (\"83331081-1587-4a20-8eb8-3c6159ab8db2\")" failed. No retries permitted until 2019-09-30 20:42:14.502025358 +0000 UTC m=+18.421509115 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-wkwtb\" (UniqueName: \"kubernetes.io/secret/83331081-1587-4a20-8eb8-3c6159ab8db2-coredns-token-wkwtb\") pod \"coredns-5644d7b6d9-drt9w\" (UID: \"83331081-1587-4a20-8eb8-3c6159ab8db2\") : failed to sync secret cache: timed out waiting for the condition"
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.002068 2823 configmap.go:203] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.002109 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/83331081-1587-4a20-8eb8-3c6159ab8db2-config-volume\" (\"83331081-1587-4a20-8eb8-3c6159ab8db2\")" failed. No retries permitted until 2019-09-30 20:42:14.502088971 +0000 UTC m=+18.421572714 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83331081-1587-4a20-8eb8-3c6159ab8db2-config-volume\") pod \"coredns-5644d7b6d9-drt9w\" (UID: \"83331081-1587-4a20-8eb8-3c6159ab8db2\") : failed to sync configmap cache: timed out waiting for the condition"
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.002136 2823 secret.go:198] Couldn't get secret kube-system/kube-proxy-token-nx7w8: failed to sync secret cache: timed out waiting for the condition
* Sep 30 20:42:14 minikube kubelet[2823]: E0930 20:42:14.002177 2823 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-kube-proxy-token-nx7w8\" (\"744bdd73-1e6c-4a6d-a0d8-c4fed74b613c\")" failed. No retries permitted until 2019-09-30 20:42:14.502157508 +0000 UTC m=+18.421641244 (durationBeforeRetry 500ms).
Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-nx7w8\" (UniqueName: \"kubernetes.io/secret/744bdd73-1e6c-4a6d-a0d8-c4fed74b613c-kube-proxy-token-nx7w8\") pod \"kube-proxy-x2cft\" (UID: \"744bdd73-1e6c-4a6d-a0d8-c4fed74b613c\") : failed to sync secret cache: timed out waiting for the condition" * Sep 30 20:42:15 minikube kubelet[2823]: W0930 20:42:15.041538 2823 pod_container_deletor.go:75] Container "e1012b76faa93c9c0997002287421e82df16bc004eba9a4845df22654bfde4a0" not found in pod's containers * Sep 30 20:42:15 minikube kubelet[2823]: W0930 20:42:15.510377 2823 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-drt9w through plugin: invalid network status for * Sep 30 20:42:15 minikube kubelet[2823]: W0930 20:42:15.518981 2823 pod_container_deletor.go:75] Container "c8e42b515e5a0647911e671281a17b15b754cfdaf66080ee4f81c5154caf2e79" not found in pod's containers * Sep 30 20:42:15 minikube kubelet[2823]: W0930 20:42:15.863320 2823 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-nsvld through plugin: invalid network status for * Sep 30 20:42:16 minikube kubelet[2823]: W0930 20:42:16.651639 2823 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-drt9w through plugin: invalid network status for * Sep 30 20:42:16 minikube kubelet[2823]: W0930 20:42:16.715021 2823 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-nsvld through plugin: invalid network status for * Sep 30 20:42:47 minikube kubelet[2823]: E0930 20:42:47.302596 2823 pod_workers.go:191] Error syncing pod 23ef95a5-0099-4914-8d34-ee73337efe3d ("storage-provisioner_kube-system(23ef95a5-0099-4914-8d34-ee73337efe3d)"), skipping: failed to "StartContainer" for "storage-provisioner" with 
CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(23ef95a5-0099-4914-8d34-ee73337efe3d)" * * ==> storage-provisioner [b5a0a32a549f] <== * F0930 20:42:46.725047 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout * * ==> storage-provisioner [e9b0a3d4a08b] <==
SolrSeeker commented 4 years ago

minikube 1.4.0
kubectl 1.16.0
VirtualBox 6.0.12 r133076

minikube ssh sudo pgrep apiserver

3179

minikube status

host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.105
tstromberg commented 4 years ago

@SolrSeeker - Thank you for confirming my suspicion. Any chance you could also run these commands to help me sort out what kind of networking issue we are seeing here?

SolrSeeker commented 4 years ago

minikube ssh "ss -tlpn"

State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 127.0.0.1:10248 0.0.0.0:*
LISTEN 0 0 127.0.0.1:10249 0.0.0.0:*
LISTEN 0 0 192.168.99.105:2379 0.0.0.0:*
LISTEN 0 0 127.0.0.1:2379 0.0.0.0:*
LISTEN 0 0 0.0.0.0:5355 0.0.0.0:*
LISTEN 0 0 192.168.99.105:2380 0.0.0.0:*
LISTEN 0 0 127.0.0.1:2381 0.0.0.0:*
LISTEN 0 0 0.0.0.0:111 0.0.0.0:*
LISTEN 0 0 127.0.0.1:10257 0.0.0.0:*
LISTEN 0 0 0.0.0.0:36529 0.0.0.0:*
LISTEN 0 0 127.0.0.1:10259 0.0.0.0:*
LISTEN 0 0 0.0.0.0:22 0.0.0.0:*
LISTEN 0 0 0.0.0.0:44697 0.0.0.0:*
LISTEN 0 0 0.0.0.0:57145 0.0.0.0:*
LISTEN 0 0 0.0.0.0:36635 0.0.0.0:*
LISTEN 0 0 0.0.0.0:45693 0.0.0.0:*
LISTEN 0 0 0.0.0.0:2049 0.0.0.0:*
LISTEN 0 0 127.0.0.1:42755 0.0.0.0:*
LISTEN 0 0 *:2376 *:*
LISTEN 0 0 *:10250 *:*
LISTEN 0 0 *:10251 *:*
LISTEN 0 0 *:5355 *:*
LISTEN 0 0 *:10252 *:*
LISTEN 0 0 *:10255 *:*
LISTEN 0 0 *:111 *:*
LISTEN 0 0 *:10256 *:*
LISTEN 0 0 *:22 *:*
LISTEN 0 0 *:8443 *:*
LISTEN 0 0 *:39553 *:*

for i in 22 111 8443 10250; do nc -zvvv $(minikube ip) $i; done

nc: connectx to 192.168.99.105 port 22 (tcp) failed: Operation timed out
nc: connectx to 192.168.99.105 port 111 (tcp) failed: Operation timed out
nc: connectx to 192.168.99.105 port 8443 (tcp) failed: Operation timed out
nc: connectx to 192.168.99.105 port 10250 (tcp) failed: Operation timed out

ping -c1 $(minikube ip)

PING 192.168.99.105 (192.168.99.105): 56 data bytes
--- 192.168.99.105 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

rpcinfo $(minikube ip)

Can't contact rpcbind on 192.168.99.105
rpcinfo: RPC: Timed out

route get $(minikube ip)

   route to: 192.168.99.105
destination: 192.168.99.105
  interface: vboxnet0
      flags:
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
        0         0         0         0         0         0      1500     -8128

ifconfig vboxnet0

vboxnet0: flags=8943 mtu 1500
    ether 0a:00:27:00:00:00
    inet 192.168.99.1 netmask 0xffffff00 broadcast 192.168.99.255

VBoxManage list hostonlyifs

Name:            vboxnet0
GUID:            786f6276-656e-4074-8000-0a0027000000
DHCP:            Disabled
IPAddress:       192.168.99.1
NetworkMask:     255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType:      Ethernet
Wireless:        No
Status:          Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0
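The port checks in this thread can be scripted so they are easy to rerun after each change. A minimal sketch (the host IP and port list come from this thread; replace the IP with the output of `minikube ip` on your machine):

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers connection refused, timeout, and "no route to host"
        return False

if __name__ == "__main__":
    host = "192.168.99.105"  # replace with `minikube ip` output
    for port in (22, 111, 8443, 10250):
        state = "open" if check_port(host, port) else "unreachable"
        print(f"{host}:{port} {state}")
```

If all four ports report unreachable while `ss -tlpn` inside the VM shows them listening, the problem is on the host's side of the host-only network, which matches the diagnosis in this thread.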
tstromberg commented 4 years ago

As mentioned in #3022, some workarounds people have found:

Root cause remains unknown.

bauson-com commented 4 years ago

Hi, I experienced the same problem on macOS Mojave 10.14.5 and fixed it using suggestions from other issue threads. Reinstalling both VirtualBox and minikube did not fix anything - the source of my problem was being connected to Cisco AnyConnect VPN. Here are the steps I used:

$ minikube stop
$ minikube delete
$ brew cask uninstall minikube
$ rm -rf ~/.minikube ~/.kube
Go to https://www.virtualbox.org/wiki/Downloads and run the VirtualBox_Uninstall.tool script provided in the OS X host .dmg file
Disconnect from the VPN
Restart the laptop; make sure that you are not reconnected to the VPN
Install VirtualBox using VirtualBox.pkg from the same .dmg file as the previous step
$ brew cask install minikube
$ minikube start --alsologtostderr -v=9
Connect to the VPN (if you wish)

Hope this helps. If anyone has suggestions for starting minikube while connected to a VPN like Cisco AnyConnect, please let me know! Thanks

Does it mean I need to disconnect from the VPN every time I start minikube? Or is there a workaround to make it work regardless of whether I am connected to the Cisco AnyConnect VPN?

Thanks btw

den-is commented 4 years ago

@bauson-com disconnecting alone will not help; you will have to restart the Mac (or maybe there is some route-refresh command, I don't know).

So here are my observations; they are really the same single issue seen from different points of view:

  1. minikube with VirtualBox will always work if you have never started the AnyConnect VPN before doing any minikube operations.

  2. You can enable the AnyConnect VPN after minikube has finished bootstrapping and/or starting; everything will keep working.

  3. You will not be able to set up or start minikube if you had AnyConnect enabled at any point in your current session; even if you quit it, you will have to restart the Mac.

  4. I have my own handmade Kubernetes-on-VirtualBox setup using Vagrant+Ansible, i.e. a pure environment for experimentation not biased by minikube, and I observe exactly the same issue: VirtualBox can't add routes after AnyConnect. In addition, I use a custom host-only VirtualBox subnet, 10.22.0.0/24, for my Kubernetes.

Here are the netstat -nr outputs:

# First VirtualBox was up, then AnyConnect was enabled... all routes exist
Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
default            10.1.1.1           UGSc          127        0     en9
1.0.0.1/32         10.1.1.1           UGSc            0        0     en9
1.1.1.1/32         10.1.1.1           UGSc            0        0     en9
10.1.1/24          link#7             UCS             2        0     en9      !
10.1.1.1           0:c:42:b1:c8:19    UHLSr          55        0     en9
10.1.1.5           0:11:32:67:64:23   UHLWIi          2    16728     en9   1052
10.1.1.103         18:65:90:2b:4:1    UHLWIi          1      182     en9   1065
10.1.1.197/32      link#7             UCS             0        0     en9      !
10.22/24           link#22            UC              3        0 vboxnet      !
10.22.0.2          8:0:27:e8:cc:fc    UHLWI           0        0 vboxnet    500
10.22.0.21         8:0:27:0:57:e      UHLWIi          2     4978 vboxnet    633
10.22.0.31         8:0:27:93:45:d     UHLWI           0        0 vboxnet    588
10.66.100/24       10.55.78.22       UGSc            0        0   utun2
10.70/16           10.55.78.22       UGSc            0        0   utun2
10.55/16          10.55.78.22       UGSc            0        0   utun2
10.55.10.17/32    10.55.78.22       UGSc            1        0   utun2
10.55.10.18/32    10.55.78.22       UGSc            0        0   utun2
10.55.78.22/32    127.0.0.1          UGSc            7        0     lo0
10.201/16          10.55.78.22       UGSc            0        0   utun2
127                127.0.0.1          UCS             0        0     lo0
127.0.0.1          127.0.0.1          UH             49   127320     lo0
169.254            link#7             UCS             1        0     en9      !
169.254            link#19            UCSI            1        0     en8      !
169.254.169.254    link#7             UHLSW           0       24     en9      !
169.254.219.207    1a:65:90:2b:4:fc   UHLSW           2       79     en8    109
169.254.233.102/32 link#19            UCS             0        0     en8      !
192.168.5          10.55.78.22       UGSc            0        0   utun2
208.122.223.158/32 10.1.1.1           UGSc            1        0     en9
224.0.0/4          link#7             UmCS            2        0     en9      !
224.0.0/4          link#19            UmCSI           0        0     en8      !
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en9
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0       28     en9
255.255.255.255/32 link#7             UCS             0        0     en9      !
255.255.255.255/32 link#19            UCSI            0        0     en8      !
# VPN enabled before VirtualBox: you can't see 10.22/24 anymore in the result below
Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
default            10.1.1.1           UGSc          132        0     en9
1.0.0.1/32         10.1.1.1           UGSc            0        0     en9
1.1.1.1/32         10.1.1.1           UGSc            0        0     en9
10.1.1/24          link#7             UCS             3        0     en9      !
10.1.1.1           0:c:42:b1:c8:19    UHLSr          78        0     en9
10.1.1.5           0:11:32:67:64:23   UHLWIi          1     3264     en9   1127
10.1.1.103         18:65:90:2b:4:1    UHLWIi          2       11     en9    541
10.1.1.197/32      link#7             UCS             0        0     en9      !
10.66.100/24       10.55.78.39       UGSc            0        0   utun1
10.70/16           10.55.78.39       UGSc            1        0   utun1
10.55/16          10.55.78.39       UGSc            0        0   utun1
10.55.10.17/32    10.55.78.39       UGSc            1        0   utun1
10.55.10.18/32    10.55.78.39       UGSc            0        0   utun1
10.55.78.39/32    127.0.0.1          UGSc            7        0     lo0
10.201/16          10.55.78.39       UGSc            0        0   utun1
127                127.0.0.1          UCS             0        0     lo0
127.0.0.1          127.0.0.1          UH             39    99594     lo0
169.254            link#7             UCS             2        0     en9      !
169.254.23.138/32  link#16            UCS             1        0     en8      !
169.254.169.254    link#7             UHLSW           1       34     en9      !
169.254.219.207    1a:65:90:2b:4:fc   UHLSW           0        1     en8    533
192.168.5          10.55.78.39       UGSc            0        0   utun1
208.122.223.158/32 10.1.1.1           UGSc            1        0     en9
224.0.0/4          link#7             UmCS            2        0     en9      !
224.0.0/4          link#16            UmCSI           0        0     en8      !
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en9
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0       32     en9
255.255.255.255/32 link#7             UCS             0        0     en9      !
255.255.255.255/32 link#16            UCSI            0        0     en8      !

This part is missing from the second output:

10.22/24           link#22            UC              3        0 vboxnet      !
10.22.0.2          8:0:27:e8:cc:fc    UHLWI           0        0 vboxnet    500
10.22.0.21         8:0:27:0:57:e      UHLWIi          2     4978 vboxnet    633
10.22.0.31         8:0:27:93:45:d     UHLWI           0        0 vboxnet    588
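The missing-route condition described above can be checked mechanically. Below is a sketch that scans `netstat -nr` output for any route bound to a VirtualBox host-only interface; the interface prefix `vboxnet` is an assumption based on the outputs in this thread (adjust if your interface is named differently):

```python
def hostonly_route_present(netstat_output: str, ifname: str = "vboxnet") -> bool:
    """True if any routing-table row in `netstat -nr` output points at the
    host-only interface, e.g. '10.22/24  link#22  UC  3  0  vboxnet  !'."""
    for line in netstat_output.splitlines():
        fields = line.split()
        # the Netif column appears after Destination and Gateway, so only
        # look at fields from the third column onward
        if any(f.startswith(ifname) for f in fields[2:]):
            return True
    return False

if __name__ == "__main__":
    # healthy row taken from the first netstat dump in this thread
    healthy = "10.22/24           link#22            UC              3        0 vboxnet      !"
    print(hostonly_route_present(healthy))
```

Running it against the full output of `netstat -nr` (e.g. via `subprocess.run(["netstat", "-nr"], capture_output=True, text=True)`) before and after connecting the VPN would confirm whether AnyConnect removed the host-only route.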
den-is commented 4 years ago

I have created an issue on the VirtualBox forums: https://forums.virtualbox.org/viewtopic.php?f=8&t=95310

bzied321 commented 4 years ago

@bauson-com FYI, I did not run into the same problem again (with or without my VPN connected) when starting minikube after the steps I recorded

tstromberg commented 4 years ago

NOTE: minikube v1.5 detects this issue, and provides an early exit if a network interface misconfiguration is detected:

  minikube is unable to connect to the VM: dial tcp 192.168.99.108:22: connect: operation timed out
This is likely due to one of two reasons:
- VPN or firewall interference
- virtualbox network configuration issue
Suggested workarounds:
- Disable your local VPN or firewall software
- Configure your local VPN or firewall to allow access to 192.168.99.108
- Restart or reinstall virtualbox
- Use an alternative --vm-driver
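For the VPN case, one workaround worth trying before a full reboot is re-adding the host-only route by hand. This is an assumption based on the missing-route observation earlier in this thread, not an official fix; the sketch below only prints the `route` command it would run so you can inspect it first (the default subnet and interface are taken from this thread's outputs; check yours with `VBoxManage list hostonlyifs`):

```shell
#!/bin/sh
# Print (not execute) the macOS `route` command that would restore the
# VirtualBox host-only subnet after a VPN client has removed the route.
# SUBNET and IFACE defaults are assumptions from this thread.
SUBNET="${1:-192.168.99.0/24}"
IFACE="${2:-vboxnet0}"
echo "sudo route -n add -net ${SUBNET} -interface ${IFACE}"
```

Pipe the output to `sudo sh` only after verifying it matches your host-only network.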
medyagh commented 4 years ago

@kmayura1 do you mind trying with the latest version of minikube to see if the problem is detected by minikube? (Also, there is a newer version of VirtualBox that I recommend upgrading to.)

https://www.virtualbox.org/wiki/Changelog-6.1#v0

r0bnet commented 4 years ago

Had the same problem with Cisco AnyConnect. Had to reinstall VirtualBox and then it worked. I guess restarting the machine or reconfiguring some routes would also do the job, but I didn't try.
minikube version: 1.9.0
VirtualBox version: 6.1.4

mtmk commented 3 years ago

Hi, I experienced the same problem on macOS Mojave 10.14.5 and fixed it using suggestions from other issue threads. Reinstalling both VirtualBox and minikube did not fix anything - the source of my problem was being connected to Cisco AnyConnect VPN. Here are the steps I used:

$ minikube stop
$ minikube delete
$ brew cask uninstall minikube
$ rm -rf ~/.minikube ~/.kube
Go to https://www.virtualbox.org/wiki/Downloads and run the VirtualBox_Uninstall.tool script provided in the OS X host .dmg file
Disconnect from the VPN
Restart the laptop; make sure that you are not reconnected to the VPN
Install VirtualBox using VirtualBox.pkg from the same .dmg file as the previous step
$ brew cask install minikube
$ minikube start --alsologtostderr -v=9
Connect to the VPN (if you wish)

Hope this helps. If anyone has suggestions for starting minikube while connected to a VPN like Cisco AnyConnect, please let me know! Thanks

Thanks. I think the one that fixed it for me was the rm -rf ~/.minikube ~/.kube line. I had a minikube install from ages ago, and the old config from that might've been interfering.