kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

ambassador is not a valid addon #9047

Closed: neetra closed this issue 4 years ago

neetra commented 4 years ago

Steps to reproduce the issue:

minikube start --driver=hyperv
minikube addons enable ambassador
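A quick way to narrow this down (a diagnostic sketch, not part of the original report; it assumes minikube is installed and on PATH): `minikube addons list` only shows addons that the installed release ships, and "X is not a valid addon" usually means the binary is too old to know the name. The node labels in the logs below report minikube v1.9.2.

```shell
# Diagnostic sketch (assumes minikube is installed and on PATH).
# An addon name is only valid if the installed release ships it, so
# check the version and the addon list the binary actually knows about.
if command -v minikube >/dev/null 2>&1; then
    minikube version
    minikube addons list | grep -i ambassador \
        || echo "ambassador is not in this release's addon list"
else
    echo "minikube not installed"
fi
```

If ambassador is absent from the list, upgrading minikube and re-running `minikube addons enable ambassador` is the usual fix.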

Full output of failed command:

C:\WINDOWS\system32> minikube start --driver=hyperv

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\WINDOWS\system32> minikube logs

==> Docker <==
-- Logs begin at Fri 2020-08-21 11:42:25 UTC, end at Fri 2020-08-21 17:12:24 UTC. --
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751187300Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751267931Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751657881Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751698296Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751722706Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751731809Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751753918Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751762121Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751769624Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751777727Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751785230Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751792533Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751799836Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751833949Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751843752Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751851355Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751858758Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751945792Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751991910Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.751998912Z" level=info msg="containerd successfully booted in 0.004341s"
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.764156493Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.764193107Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.764210213Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.764242426Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.765141272Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.765203296Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.765233507Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Aug 21 11:43:02 minikube dockerd[2689]: time="2020-08-21T11:43:02.765260318Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.166926038Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.167410279Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.167443389Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.167465695Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.167489902Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.167495104Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.167643647Z" level=info msg="Loading containers: start."
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.270763689Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.312159549Z" level=info msg="Loading containers: done."
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.335844449Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.335935575Z" level=info msg="Daemon has completed initialization"
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.355749848Z" level=info msg="API listen on /var/run/docker.sock"
Aug 21 11:43:07 minikube systemd[1]: Started Docker Application Container Engine.
Aug 21 11:43:07 minikube dockerd[2689]: time="2020-08-21T11:43:07.356477360Z" level=info msg="API listen on [::]:2376"
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.380877296Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/22148e377cfc31df3d12b0a1e805078b6d07d0c2054f1ffe62adb0e0653aabee/shim.sock" debug=false pid=3590
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.428924563Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0bdc57119bb74898a92beaa484ba7c4ab324b606a64c3de13c8cc91151a9ea43/shim.sock" debug=false pid=3614
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.436769252Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/22be69b6105c5821df365c6aa84017f90cbda5e8c1ea2fa85b4489d37e35391c/shim.sock" debug=false pid=3630
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.447702569Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/180b648fc86f558f3d11d30797b80068107ac0a24def8f842b2abbdab8f2b35d/shim.sock" debug=false pid=3646
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.732819833Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/291f69a033537e92fd32d8fa09f50ea069cae57e45cfc08019d3d9a0633c8657/shim.sock" debug=false pid=3784
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.883540748Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9f21f579b3bf37754942e17c364dee4f28f567f6a72f0c1dbe9e2e2b88e05e4f/shim.sock" debug=false pid=3866
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.901305413Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2fbfea3f6a0c5ae45f23e612601cba83f5aacd3e13ca604124bcd32e52f5d2ee/shim.sock" debug=false pid=3876
Aug 21 11:43:22 minikube dockerd[2689]: time="2020-08-21T11:43:22.902127127Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/18e3d7d2e15d88e2052418f2b80d233e8fdcd7c66526f4da63aef561fe913554/shim.sock" debug=false pid=3880
Aug 21 11:43:40 minikube dockerd[2689]: time="2020-08-21T11:43:40.177477419Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec57a7b636ec47481f0ca89564a7579c26db822bce0056a294aa5350bcf72e49/shim.sock" debug=false pid=4511
Aug 21 11:43:40 minikube dockerd[2689]: time="2020-08-21T11:43:40.818018976Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b86ab6cf919e325311fbcff361fa4bb1238bfae423a5a191f222d8b2fa5dc398/shim.sock" debug=false pid=4550
Aug 21 11:43:48 minikube dockerd[2689]: time="2020-08-21T11:43:48.272252724Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ad12858329fd5f49d56e9c851e3bee2400f45600ef97c17f0a42f6987e92b950/shim.sock" debug=false pid=4721
Aug 21 11:43:48 minikube dockerd[2689]: time="2020-08-21T11:43:48.341995001Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f65a9216ba575bf50e286243e7a44c8c26a8bb76f601398022528e6babc438f4/shim.sock" debug=false pid=4749
Aug 21 11:43:48 minikube dockerd[2689]: time="2020-08-21T11:43:48.533416115Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/423407b0c530fb0fc6a9ad0cad6af609791958bd6b675eda3654be99c42ee2cd/shim.sock" debug=false pid=4803
Aug 21 11:43:48 minikube dockerd[2689]: time="2020-08-21T11:43:48.725317459Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d6af0013e3aa4e6918aac848f2f457e24f6ce0d5e931afd1843d7b2e48c17faa/shim.sock" debug=false pid=4849
Aug 21 11:43:51 minikube dockerd[2689]: time="2020-08-21T11:43:51.087342854Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8f979164d39dc28f2d997fa0b3aa2e39e0929b43dafe509b2ec178602bb50dc4/shim.sock" debug=false pid=4904
Aug 21 11:43:51 minikube dockerd[2689]: time="2020-08-21T11:43:51.391236194Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4abe08cd431086c3326ad7e67596e98836460da346bab3dec6ec366d6c6231bd/shim.sock" debug=false pid=4955
Aug 21 12:20:39 minikube dockerd[2689]: time="2020-08-21T12:20:39.633095587Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a8777c57e6ff084a8049d73bb0721c4e3bda32a110bd8eafc3dcfb9862368af/shim.sock" debug=false pid=12728
Aug 21 12:22:44 minikube dockerd[2689]: time="2020-08-21T12:22:44.536748509Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a88ca9992795ed3887dfa6bc5707c1c7d72bb452084f6f095529dee8385ac8bd/shim.sock" debug=false pid=13296

==> container status <==
CONTAINER       IMAGE                                                                                                                            CREATED          STATE     NAME                      ATTEMPT   POD ID
a88ca9992795e   quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7   16 minutes ago   Running   nginx-ingress-controller   0         0a8777c57e6ff
4abe08cd43108   67da37a9a360e                                                                                                                    55 minutes ago   Running   coredns                   0         8f979164d39dc
d6af0013e3aa4   67da37a9a360e                                                                                                                    55 minutes ago   Running   coredns                   0         f65a9216ba575
423407b0c530f   4689081edb103                                                                                                                    55 minutes ago   Running   storage-provisioner       0         ad12858329fd5
b86ab6cf919e3   43940c34f24f3                                                                                                                    55 minutes ago   Running   kube-proxy                0         ec57a7b636ec4
2fbfea3f6a0c5   303ce5db0e90d                                                                                                                    55 minutes ago   Running   etcd                      0         0bdc57119bb74
9f21f579b3bf3   74060cea7f704                                                                                                                    55 minutes ago   Running   kube-apiserver            0         180b648fc86f5
18e3d7d2e15d8   d3e55153f52fb                                                                                                                    55 minutes ago   Running   kube-controller-manager   0         22be69b6105c5
291f69a033537   a31f78c7c8ce1                                                                                                                    55 minutes ago   Running   kube-scheduler            0         22148e377cfc3

==> coredns [4abe08cd4310] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [d6af0013e3aa] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=93af9c1e43cab9618e301bc9fa720c63d5efa393
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_08_21T17_13_30_0700
                    minikube.k8s.io/version=v1.9.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 21 Aug 2020 11:43:27 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Fri, 21 Aug 2020 12:39:08 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 21 Aug 2020 12:38:18 +0000   Fri, 21 Aug 2020 11:43:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 21 Aug 2020 12:38:18 +0000   Fri, 21 Aug 2020 11:43:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 21 Aug 2020 12:38:18 +0000   Fri, 21 Aug 2020 11:43:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 21 Aug 2020 12:38:18 +0000   Fri, 21 Aug 2020 11:43:47 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.4.170
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             5942944Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             5942944Ki
  pods:               110
System Info:
  Machine ID:                 8a9b93813d7a464d8b8312cb4fd625cb
  System UUID:                3b0d48de-9ba2-d14c-a2d4-651f1552c4fc
  Boot ID:                    23371c34-f948-498c-92e7-22661937770e
  Kernel Version:             4.19.107
  OS Image:                   Buildroot 2019.02.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
Non-terminated Pods:          (9 in total)
  Namespace    Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-66bff467f8-7q9hx                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     55m
  kube-system  coredns-66bff467f8-mql9d                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     55m
  kube-system  etcd-minikube                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         55m
  kube-system  kube-apiserver-minikube                     250m (12%)    0 (0%)      0 (0%)           0 (0%)         55m
  kube-system  kube-controller-manager-minikube            200m (10%)    0 (0%)      0 (0%)           0 (0%)         55m
  kube-system  kube-proxy-dzxcz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         55m
  kube-system  kube-scheduler-minikube                     100m (5%)     0 (0%)      0 (0%)           0 (0%)         55m
  kube-system  nginx-ingress-controller-6d57c87cb9-4dcvf   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  kube-system  storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (37%)  0 (0%)
  memory             140Mi (2%)  340Mi (5%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  NodeHasSufficientMemory  55m (x5 over 55m)  kubelet, minikube    Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    55m (x4 over 55m)  kubelet, minikube    Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     55m (x4 over 55m)  kubelet, minikube    Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 55m                kubelet, minikube    Starting kubelet.
  Normal  NodeHasSufficientMemory  55m                kubelet, minikube    Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    55m                kubelet, minikube    Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     55m                kubelet, minikube    Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             55m                kubelet, minikube    Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  55m                kubelet, minikube    Updated Node Allocatable limit across pods
  Normal  Starting                 55m                kube-proxy, minikube Starting kube-proxy.
  Normal  NodeReady                55m                kubelet, minikube    Node minikube status is now: NodeReady

==> dmesg <==
[Aug21 11:42] smpboot: 128 Processors exceeds NR_CPUS limit of 64
[ +0.130786] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.071167] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.000001] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.042244] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.014550] Found PM-Timer Bug on the chipset. Due to workarounds for a bug, this clock source is slow. Consider trying other clock sources
[ +2.643077] Unstable clock detected, switching default tracing clock to "global"
             If you want to keep using the local clock, then add: "trace_clock=local" on the kernel command line
[ +0.000016] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +1.065932] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
[ +0.428272] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.009290] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
[ +0.001959] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +3.552502] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +1.054253] vboxguest: loading out-of-tree module taints kernel.
[ +0.004186] vboxguest: PCI device not found, probably running on physical hardware.
[ +19.004502] systemd-fstab-generator[2464]: Ignoring "noauto" for root device
[Aug21 11:43] kauditd_printk_skb: 65 callbacks suppressed
[ +3.052250] systemd-fstab-generator[2906]: Ignoring "noauto" for root device
[ +1.573119] systemd-fstab-generator[3120]: Ignoring "noauto" for root device
[ +9.405928] kauditd_printk_skb: 107 callbacks suppressed
[ +9.220508] systemd-fstab-generator[4228]: Ignoring "noauto" for root device
[ +10.561044] kauditd_printk_skb: 32 callbacks suppressed
[ +7.154422] kauditd_printk_skb: 38 callbacks suppressed
[ +11.255025] kauditd_printk_skb: 2 callbacks suppressed
[Aug21 11:44] NFSD: Unable to end grace period: -110
[Aug21 12:12] kauditd_printk_skb: 2 callbacks suppressed
[Aug21 12:20] kauditd_printk_skb: 2 callbacks suppressed
[Aug21 12:23] kauditd_printk_skb: 20 callbacks suppressed
[Aug21 12:24] kauditd_printk_skb: 2 callbacks suppressed

==> etcd [2fbfea3f6a0c] <==
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b became follower at term 1
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b switched to configuration voters=(13747901456446478155)
2020-08-21 11:43:24.041832 W | auth: simple token is not cryptographically signed
2020-08-21 11:43:24.059707 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-08-21 11:43:24.062124 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-08-21 11:43:24.062359 I | embed: listening for metrics on http://127.0.0.1:2381
2020-08-21 11:43:24.062564 I | etcdserver: beca5b88fe91574b as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-08-21 11:43:24.062890 I | embed: listening for peers on 172.18.4.170:2380
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b switched to configuration voters=(13747901456446478155)
2020-08-21 11:43:24.063253 I | etcdserver/membership: added member beca5b88fe91574b [https://172.18.4.170:2380] to cluster 692f40dc89a4d623
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b is starting a new election at term 1
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b became candidate at term 2
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b received MsgVoteResp from beca5b88fe91574b at term 2
raft2020/08/21 11:43:24 INFO: beca5b88fe91574b became leader at term 2
raft2020/08/21 11:43:24 INFO: raft.node: beca5b88fe91574b elected leader beca5b88fe91574b at term 2
2020-08-21 11:43:24.512600 I | etcdserver: published {Name:minikube ClientURLs:[https://172.18.4.170:2379]} to cluster 692f40dc89a4d623
2020-08-21 11:43:24.512879 I | etcdserver: setting up the initial cluster version to 3.4
2020-08-21 11:43:24.513040 I | embed: ready to serve client requests
2020-08-21 11:43:24.514654 I | embed: serving client requests on 172.18.4.170:2379
2020-08-21 11:43:24.514838 I | embed: ready to serve client requests
2020-08-21 11:43:24.523670 I | embed: serving client requests on 127.0.0.1:2379
2020-08-21 11:43:24.530706 N | etcdserver/membership: set the initial cluster version to 3.4
2020-08-21 11:43:24.530985 I | etcdserver/api: enabled capabilities for version 3.4
2020-08-21 11:43:33.238228 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/service-controller\" " with result "range_response_count:1 size:203" took too long (111.377465ms) to execute
2020-08-21 11:43:35.962340 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (102.671199ms) to execute
2020-08-21 11:43:36.913926 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:16" took too long (102.140995ms) to execute
2020-08-21 11:43:36.914063 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:480" took too long (154.296233ms) to execute
2020-08-21 11:43:37.138122 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (111.62572ms) to execute
2020-08-21 11:43:38.034861 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/ttl-controller\" " with result "range_response_count:1 size:195" took too long (105.853893ms) to execute
2020-08-21 11:43:38.242939 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-minikube\" " with result "range_response_count:1 size:4537" took too long (152.916918ms) to execute
2020-08-21 11:43:38.248430 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:263" took too long (123.208383ms) to execute
2020-08-21 11:43:39.776895 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (167.056582ms) to execute
2020-08-21 11:43:39.777088 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (108.382893ms) to execute
2020-08-21 11:53:24.573920 I | mvcc: store.index: compact 1058
2020-08-21 11:53:24.591762 I | mvcc: finished scheduled compaction at 1058 (took 17.456437ms)
2020-08-21 11:58:24.590289 I | mvcc: store.index: compact 1716
2020-08-21 11:58:24.592038 I | mvcc: finished scheduled compaction at 1716 (took 1.342444ms)
2020-08-21 11:59:35.059330 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (134.592088ms) to execute
2020-08-21 12:00:45.062309 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:508" took too long (135.694845ms) to execute
2020-08-21 12:03:24.597138 I | mvcc: store.index: compact 2374
2020-08-21 12:03:24.598440 I | mvcc: finished scheduled compaction at 2374 (took 1.079732ms)
2020-08-21 12:03:26.352384 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:507" took too long (127.051133ms) to execute
2020-08-21 12:03:26.933034 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (115.111283ms) to execute
2020-08-21 12:03:26.933251 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (125.907399ms) to execute
2020-08-21 12:08:24.604969 I | mvcc: store.index: compact 3036
2020-08-21 12:08:24.606600 I | mvcc: finished scheduled compaction at 3036 (took 1.29684ms)
2020-08-21 12:13:24.611250 I | mvcc: store.index: compact 3693
2020-08-21 12:13:24.612586 I | mvcc: finished scheduled compaction at 3693 (took 896.029µs)
2020-08-21 12:18:24.625854 I | mvcc: store.index: compact 4357
2020-08-21 12:18:24.627018 I | mvcc: finished scheduled compaction at 4357 (took 836.12µs)
2020-08-21 12:22:43.732524 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:508" took too long (131.726135ms) to execute
2020-08-21 12:23:24.635298 I | mvcc: store.index: compact 5015
2020-08-21 12:23:24.636878 I | mvcc: finished scheduled compaction at 5015 (took 1.194933ms)
2020-08-21 12:25:23.210373 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (185.947302ms) to execute
2020-08-21 12:28:24.642384 I | mvcc: store.index: compact 5715
2020-08-21 12:28:24.659089 I | mvcc: finished scheduled compaction at 5715 (took 16.163167ms)
2020-08-21 12:33:24.648884 I | mvcc: store.index: compact 6423
2020-08-21 12:33:24.663533 I | mvcc: finished scheduled compaction at 6423 (took 14.170535ms)
2020-08-21 12:38:24.675167 I | mvcc: store.index: compact 7119
2020-08-21 12:38:24.692812 I | mvcc: finished scheduled compaction at 7119 (took 17.341973ms)

==> kernel <==
12:39:09 up 56 min, 0 users, load average: 0.14, 0.13, 0.12
Linux minikube 4.19.107 #1 SMP Thu Mar 26 11:33:10 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.10"

==> kube-apiserver [9f21f579b3bf] <==
W0821 11:43:25.543476       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0821 11:43:25.543579       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0821 11:43:25.554465       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0821 11:43:25.554482       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0821 11:43:25.556267       1 client.go:361] parsed scheme: "endpoint"
I0821 11:43:25.556317       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0821 11:43:25.564335       1 client.go:361] parsed scheme: "endpoint"
I0821 11:43:25.564482       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0821 11:43:27.542581       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0821 11:43:27.542713       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0821 11:43:27.542951       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0821 11:43:27.543349       1 secure_serving.go:178] Serving securely on [::]:8443
I0821 11:43:27.543411       1 autoregister_controller.go:141] Starting autoregister controller
I0821 11:43:27.543417       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0821 11:43:27.543592       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0821 11:43:27.544351       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0821 11:43:27.544362       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0821 11:43:27.544378       1 controller.go:81] Starting OpenAPI AggregationController
I0821 11:43:27.544750       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0821 11:43:27.544761       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0821 11:43:27.544886       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0821 11:43:27.544895       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0821 11:43:27.548594       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0821 11:43:27.548621       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0821 11:43:27.549823       1 crd_finalizer.go:266] Starting CRDFinalizer
I0821 11:43:27.558772       1 available_controller.go:387] Starting AvailableConditionController
I0821 11:43:27.558790       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E0821 11:43:27.568237       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.18.4.170, ResourceVersion: 0, AdditionalErrorMsg:
I0821 11:43:27.613033       1 controller.go:86] Starting OpenAPI controller
I0821 11:43:27.613180       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0821 11:43:27.613224       1 naming_controller.go:291] Starting NamingConditionController
I0821 11:43:27.613281       1 establishing_controller.go:76] Starting EstablishingController
I0821 11:43:27.613324       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0821 11:43:27.613349       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0821 11:43:27.654798       1 cache.go:39] Caches are synced for autoregister controller
I0821 11:43:27.655289       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0821 11:43:27.656169       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0821 11:43:27.656207       1 shared_informer.go:230] Caches are synced for crd-autoregister
I0821 11:43:27.658975       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0821 11:43:28.542459       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0821 11:43:28.542632       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0821 11:43:28.548017       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0821 11:43:28.554205       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0821 11:43:28.554250       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0821 11:43:29.112055       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0821 11:43:29.159930       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0821 11:43:29.234798       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.18.4.170]
I0821 11:43:29.236305       1 controller.go:606] quota admission added evaluator for: endpoints
I0821 11:43:29.257825       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0821 11:43:30.559406       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0821 11:43:30.580860       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0821 11:43:30.614841       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0821 11:43:30.749492       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0821 11:43:38.993900       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0821 11:43:39.221184       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0821 11:55:40.077378       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0821 12:08:00.127293       1 watcher.go:199] watch chan error:
etcdserver: mvcc: required revision has been compacted * W0821 12:14:09.227409 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted * W0821 12:30:07.239319 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted * W0821 12:38:39.276265 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted * * ==> kube-controller-manager [18e3d7d2e15d] <== * I0821 11:43:38.330384 1 controllermanager.go:533] Started "persistentvolume-binder" * I0821 11:43:38.330587 1 pv_controller_base.go:295] Starting persistent volume controller * I0821 11:43:38.330622 1 shared_informer.go:223] Waiting for caches to sync for persistent volume * I0821 11:43:38.527066 1 controllermanager.go:533] Started "endpoint" * I0821 11:43:38.527146 1 endpoints_controller.go:182] Starting endpoint controller * I0821 11:43:38.527155 1 shared_informer.go:223] Waiting for caches to sync for endpoint * I0821 11:43:38.776681 1 controllermanager.go:533] Started "replicationcontroller" * I0821 11:43:38.776869 1 replica_set.go:181] Starting replicationcontroller controller * I0821 11:43:38.776883 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController * I0821 11:43:38.927130 1 controllermanager.go:533] Started "csrsigning" * I0821 11:43:38.927985 1 certificate_controller.go:119] Starting certificate controller "csrsigning" * I0821 11:43:38.928216 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning * I0821 11:43:38.929000 1 shared_informer.go:223] Waiting for caches to sync for garbage collector * I0821 11:43:38.955343 1 shared_informer.go:223] Waiting for caches to sync for resource quota * I0821 11:43:38.956828 1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key * I0821 11:43:38.976201 1 shared_informer.go:230] Caches are synced for service account * I0821 11:43:38.976887 1 
shared_informer.go:230] Caches are synced for deployment * I0821 11:43:38.978308 1 shared_informer.go:230] Caches are synced for expand * I0821 11:43:38.979579 1 shared_informer.go:230] Caches are synced for PV protection * I0821 11:43:38.979688 1 shared_informer.go:230] Caches are synced for PVC protection * W0821 11:43:38.989771 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0821 11:43:38.994589 1 shared_informer.go:230] Caches are synced for ReplicaSet * I0821 11:43:39.009747 1 shared_informer.go:230] Caches are synced for certificate-csrapproving * I0821 11:43:39.020419 1 shared_informer.go:230] Caches are synced for endpoint_slice * I0821 11:43:39.027268 1 shared_informer.go:230] Caches are synced for job * I0821 11:43:39.027441 1 shared_informer.go:230] Caches are synced for GC * I0821 11:43:39.029944 1 shared_informer.go:230] Caches are synced for certificate-csrsigning * I0821 11:43:39.030774 1 shared_informer.go:230] Caches are synced for persistent volume * I0821 11:43:39.039171 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f3fa09c2-ea96-46d5-ba07-c7b57b512002", APIVersion:"apps/v1", ResourceVersion:"189", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2 * I0821 11:43:39.041614 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator * I0821 11:43:39.048321 1 shared_informer.go:230] Caches are synced for namespace * I0821 11:43:39.068993 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f3df7267-fb16-4842-b873-7a394650329d", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7q9hx * I0821 11:43:39.081588 1 shared_informer.go:230] 
Caches are synced for attach detach * I0821 11:43:39.087419 1 shared_informer.go:230] Caches are synced for TTL * I0821 11:43:39.126908 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f3df7267-fb16-4842-b873-7a394650329d", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-mql9d * E0821 11:43:39.156432 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again * E0821 11:43:39.156884 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again * I0821 11:43:39.213978 1 shared_informer.go:230] Caches are synced for daemon sets * I0821 11:43:39.229919 1 shared_informer.go:230] Caches are synced for taint * I0821 11:43:39.229995 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: * W0821 11:43:39.230125 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0821 11:43:39.230166 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
* I0821 11:43:39.230334 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0821 11:43:39.231195 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"8b696ead-4002-4d91-9f5b-52828d1537ce", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller * I0821 11:43:39.247857 1 shared_informer.go:230] Caches are synced for stateful set * I0821 11:43:39.262128 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"9314ca5c-accd-446b-85bf-eff27ed48521", APIVersion:"apps/v1", ResourceVersion:"195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-dzxcz * I0821 11:43:39.476512 1 shared_informer.go:230] Caches are synced for HPA * I0821 11:43:39.534677 1 shared_informer.go:230] Caches are synced for endpoint * I0821 11:43:39.535363 1 shared_informer.go:230] Caches are synced for disruption * I0821 11:43:39.535372 1 disruption.go:339] Sending events to api server. * I0821 11:43:39.557363 1 shared_informer.go:230] Caches are synced for resource quota * I0821 11:43:39.578599 1 shared_informer.go:230] Caches are synced for garbage collector * I0821 11:43:39.578805 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage * I0821 11:43:39.579042 1 shared_informer.go:230] Caches are synced for ReplicationController * I0821 11:43:39.579505 1 shared_informer.go:230] Caches are synced for resource quota * I0821 11:43:39.594288 1 shared_informer.go:230] Caches are synced for bootstrap_signer * I0821 11:43:39.629191 1 shared_informer.go:230] Caches are synced for garbage collector * I0821 11:43:49.230681 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode. 
* I0821 12:20:39.026875 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"f4d61d3c-8346-4b90-8a69-90e04b259ebf", APIVersion:"apps/v1", ResourceVersion:"5320", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-6d57c87cb9 to 1 * I0821 12:20:39.052471 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-6d57c87cb9", UID:"1e1eb8a2-3ca8-40fe-ba3c-17725bf9c566", APIVersion:"apps/v1", ResourceVersion:"5321", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-6d57c87cb9-4dcvf * * ==> kube-proxy [b86ab6cf919e] <== * W0821 11:43:40.979307 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy * I0821 11:43:40.985725 1 node.go:136] Successfully retrieved node IP: 172.18.4.170 * I0821 11:43:40.985765 1 server_others.go:186] Using iptables Proxier. * W0821 11:43:40.985772 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined * I0821 11:43:40.985780 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local * I0821 11:43:40.986193 1 server.go:583] Version: v1.18.0 * I0821 11:43:40.986502 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 * I0821 11:43:40.986523 1 conntrack.go:52] Setting nf_conntrack_max to 131072 * I0821 11:43:40.986605 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 * I0821 11:43:40.986641 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 * I0821 11:43:40.986950 1 config.go:315] Starting service config controller * I0821 11:43:40.986957 1 shared_informer.go:223] Waiting for caches to sync for service config * I0821 11:43:40.986966 1 config.go:133] Starting endpoints config controller * I0821 11:43:40.986972 1 shared_informer.go:223] 
Waiting for caches to sync for endpoints config * I0821 11:43:41.088930 1 shared_informer.go:230] Caches are synced for service config * I0821 11:43:41.088929 1 shared_informer.go:230] Caches are synced for endpoints config * * ==> kube-scheduler [291f69a03353] <== * I0821 11:43:22.875453 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0821 11:43:22.875510 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0821 11:43:24.335680 1 serving.go:313] Generated self-signed cert in-memory * W0821 11:43:27.625828 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W0821 11:43:27.626071 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W0821 11:43:27.626211 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. 
* W0821 11:43:27.626279 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I0821 11:43:27.645065 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * I0821 11:43:27.645082 1 registry.go:150] Registering EvenPodsSpread predicate and priority function * W0821 11:43:27.647030 1 authorization.go:47] Authorization is disabled * W0821 11:43:27.647565 1 authentication.go:40] Authentication is disabled * I0821 11:43:27.647582 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 * I0821 11:43:27.649568 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0821 11:43:27.649734 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0821 11:43:27.649921 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 * I0821 11:43:27.650057 1 tlsconfig.go:240] Starting DynamicServingCertificateController * E0821 11:43:27.652532 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0821 11:43:27.652827 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0821 11:43:27.653089 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0821 11:43:27.653361 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list 
*v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0821 11:43:27.653591 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0821 11:43:27.652825 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * E0821 11:43:27.653970 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0821 11:43:27.654679 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0821 11:43:27.654834 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0821 11:43:27.654942 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0821 11:43:27.655010 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the 
cluster scope * E0821 11:43:27.657465 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0821 11:43:27.657631 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0821 11:43:27.658858 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0821 11:43:27.660414 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" * E0821 11:43:27.662010 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0821 11:43:27.665452 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0821 11:43:27.666180 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * I0821 11:43:29.550640 1 shared_informer.go:230] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0821 11:43:30.550662 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... * I0821 11:43:30.562139 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler * E0821 11:43:39.167947 1 factory.go:503] pod: kube-system/coredns-66bff467f8-7q9hx is already present in the active queue * E0821 11:43:39.217998 1 factory.go:503] pod: kube-system/coredns-66bff467f8-mql9d is already present in the active queue * * ==> kubelet <== * -- Logs begin at Fri 2020-08-21 11:42:25 UTC, end at Fri 2020-08-21 17:12:24 UTC. -- * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.205071 4237 status_manager.go:158] Starting to sync pod status with apiserver * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.205389 4237 kubelet.go:1821] Starting kubelet main sync loop. * Aug 21 11:43:37 minikube kubelet[4237]: E0821 11:43:37.205533 4237 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] * Aug 21 11:43:37 minikube kubelet[4237]: E0821 11:43:37.314388 4237 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.340380 4237 kubelet_node_status.go:70] Attempting to register node minikube * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.415709 4237 kubelet_node_status.go:112] Node minikube was previously registered * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.415903 4237 kubelet_node_status.go:73] Successfully registered node minikube * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.498774 4237 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-08-21 11:43:37.498752463 +0000 UTC m=+6.954338828 LastTransitionTime:2020-08-21 11:43:37.498752463 +0000 UTC m=+6.954338828 
Reason:KubeletNotReady Message:container runtime status check may not have completed yet} * Aug 21 11:43:37 minikube kubelet[4237]: E0821 11:43:37.523117 4237 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.634747 4237 cpu_manager.go:184] [cpumanager] starting with none policy * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.634812 4237 cpu_manager.go:185] [cpumanager] reconciling every 10s * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.634840 4237 state_mem.go:36] [cpumanager] initializing new in-memory state store * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.635099 4237 state_mem.go:88] [cpumanager] updated default cpuset: "" * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.635136 4237 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]" * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.635166 4237 policy_none.go:43] [cpumanager] none policy: Start * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.637341 4237 plugin_manager.go:114] Starting Kubelet Plugin Manager * Aug 21 11:43:37 minikube kubelet[4237]: I0821 11:43:37.923544 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.007835 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.049200 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.057239 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d457842984bf7ef6bb49688baf0ee959-etcd-data") pod "etcd-minikube" (UID: "d457842984bf7ef6bb49688baf0ee959") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.057320 4237 reconciler.go:224] 
operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/d9b5ddb15bda65dad6fad9e6c4b108fe-ca-certs") pod "kube-apiserver-minikube" (UID: "d9b5ddb15bda65dad6fad9e6c4b108fe") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.057361 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/d9b5ddb15bda65dad6fad9e6c4b108fe-k8s-certs") pod "kube-apiserver-minikube" (UID: "d9b5ddb15bda65dad6fad9e6c4b108fe") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.057401 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/d9b5ddb15bda65dad6fad9e6c4b108fe-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "d9b5ddb15bda65dad6fad9e6c4b108fe") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.057434 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d457842984bf7ef6bb49688baf0ee959-etcd-certs") pod "etcd-minikube" (UID: "d457842984bf7ef6bb49688baf0ee959") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.088303 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.157749 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-k8s-certs") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.157974 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/5795d0c442cb997ff93c49feeb9f6386-kubeconfig") pod "kube-scheduler-minikube" (UID: 
"5795d0c442cb997ff93c49feeb9f6386") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.158085 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-kubeconfig") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.158290 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.158513 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-ca-certs") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.158672 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/3016593d20758bbfe68aba26604a8e3d-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "3016593d20758bbfe68aba26604a8e3d") * Aug 21 11:43:38 minikube kubelet[4237]: I0821 11:43:38.361822 4237 reconciler.go:157] Reconciler: start to sync state * Aug 21 11:43:39 minikube kubelet[4237]: I0821 11:43:39.278079 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:39 minikube kubelet[4237]: I0821 11:43:39.367087 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/e543e87b-70f4-4c98-ae45-de29a220be12-xtables-lock") pod "kube-proxy-dzxcz" (UID: "e543e87b-70f4-4c98-ae45-de29a220be12") * Aug 21 
11:43:39 minikube kubelet[4237]: I0821 11:43:39.367349 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/e543e87b-70f4-4c98-ae45-de29a220be12-lib-modules") pod "kube-proxy-dzxcz" (UID: "e543e87b-70f4-4c98-ae45-de29a220be12") * Aug 21 11:43:39 minikube kubelet[4237]: I0821 11:43:39.367601 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-cm2j9" (UniqueName: "kubernetes.io/secret/e543e87b-70f4-4c98-ae45-de29a220be12-kube-proxy-token-cm2j9") pod "kube-proxy-dzxcz" (UID: "e543e87b-70f4-4c98-ae45-de29a220be12") * Aug 21 11:43:39 minikube kubelet[4237]: I0821 11:43:39.367718 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e543e87b-70f4-4c98-ae45-de29a220be12-kube-proxy") pod "kube-proxy-dzxcz" (UID: "e543e87b-70f4-4c98-ae45-de29a220be12") * Aug 21 11:43:40 minikube kubelet[4237]: W0821 11:43:40.435936 4237 pod_container_deletor.go:77] Container "ec57a7b636ec47481f0ca89564a7579c26db822bce0056a294aa5350bcf72e49" not found in pod's containers * Aug 21 11:43:47 minikube kubelet[4237]: I0821 11:43:47.679360 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:47 minikube kubelet[4237]: I0821 11:43:47.720918 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:47 minikube kubelet[4237]: I0821 11:43:47.826264 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mzgw5" (UniqueName: "kubernetes.io/secret/5f746a0e-918e-4dc1-9507-cd485858613f-coredns-token-mzgw5") pod "coredns-66bff467f8-7q9hx" (UID: "5f746a0e-918e-4dc1-9507-cd485858613f") * Aug 21 11:43:47 minikube kubelet[4237]: I0821 11:43:47.826385 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: 
"kubernetes.io/host-path/0006b750-86f9-4368-a09c-db017695c792-tmp") pod "storage-provisioner" (UID: "0006b750-86f9-4368-a09c-db017695c792") * Aug 21 11:43:47 minikube kubelet[4237]: I0821 11:43:47.826451 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5f746a0e-918e-4dc1-9507-cd485858613f-config-volume") pod "coredns-66bff467f8-7q9hx" (UID: "5f746a0e-918e-4dc1-9507-cd485858613f") * Aug 21 11:43:47 minikube kubelet[4237]: I0821 11:43:47.826475 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-zdm9m" (UniqueName: "kubernetes.io/secret/0006b750-86f9-4368-a09c-db017695c792-storage-provisioner-token-zdm9m") pod "storage-provisioner" (UID: "0006b750-86f9-4368-a09c-db017695c792") * Aug 21 11:43:48 minikube kubelet[4237]: E0821 11:43:48.477226 4237 remote_runtime.go:295] ContainerStatus "423407b0c530fb0fc6a9ad0cad6af609791958bd6b675eda3654be99c42ee2cd" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 423407b0c530fb0fc6a9ad0cad6af609791958bd6b675eda3654be99c42ee2cd * Aug 21 11:43:48 minikube kubelet[4237]: E0821 11:43:48.477417 4237 kuberuntime_manager.go:952] getPodContainerStatuses for pod "storage-provisioner_kube-system(0006b750-86f9-4368-a09c-db017695c792)" failed: rpc error: code = Unknown desc = Error: No such container: 423407b0c530fb0fc6a9ad0cad6af609791958bd6b675eda3654be99c42ee2cd * Aug 21 11:43:48 minikube kubelet[4237]: W0821 11:43:48.584607 4237 pod_container_deletor.go:77] Container "f65a9216ba575bf50e286243e7a44c8c26a8bb76f601398022528e6babc438f4" not found in pod's containers * Aug 21 11:43:48 minikube kubelet[4237]: W0821 11:43:48.585999 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-7q9hx through plugin: invalid network status for * Aug 21 11:43:49 minikube 
kubelet[4237]: W0821 11:43:49.596765 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-7q9hx through plugin: invalid network status for * Aug 21 11:43:50 minikube kubelet[4237]: I0821 11:43:50.614208 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 11:43:50 minikube kubelet[4237]: I0821 11:43:50.740348 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bc2b2e3a-95ac-4611-835e-7d77a728c359-config-volume") pod "coredns-66bff467f8-mql9d" (UID: "bc2b2e3a-95ac-4611-835e-7d77a728c359") * Aug 21 11:43:50 minikube kubelet[4237]: I0821 11:43:50.740384 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mzgw5" (UniqueName: "kubernetes.io/secret/bc2b2e3a-95ac-4611-835e-7d77a728c359-coredns-token-mzgw5") pod "coredns-66bff467f8-mql9d" (UID: "bc2b2e3a-95ac-4611-835e-7d77a728c359") * Aug 21 11:43:51 minikube kubelet[4237]: W0821 11:43:51.294904 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-mql9d through plugin: invalid network status for * Aug 21 11:43:51 minikube kubelet[4237]: W0821 11:43:51.617737 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-mql9d through plugin: invalid network status for * Aug 21 12:20:39 minikube kubelet[4237]: I0821 12:20:39.061752 4237 topology_manager.go:233] [topologymanager] Topology Admit Handler * Aug 21 12:20:39 minikube kubelet[4237]: I0821 12:20:39.146729 4237 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-qfqgs" (UniqueName: "kubernetes.io/secret/594402f0-8d07-4679-b5d2-b62737de2aba-nginx-ingress-token-qfqgs") pod 
"nginx-ingress-controller-6d57c87cb9-4dcvf" (UID: "594402f0-8d07-4679-b5d2-b62737de2aba") * Aug 21 12:20:39 minikube kubelet[4237]: W0821 12:20:39.824437 4237 pod_container_deletor.go:77] Container "0a8777c57e6ff084a8049d73bb0721c4e3bda32a110bd8eafc3dcfb9862368af" not found in pod's containers * Aug 21 12:20:39 minikube kubelet[4237]: W0821 12:20:39.827936 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6d57c87cb9-4dcvf through plugin: invalid network status for * Aug 21 12:20:40 minikube kubelet[4237]: W0821 12:20:40.830291 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6d57c87cb9-4dcvf through plugin: invalid network status for * Aug 21 12:22:45 minikube kubelet[4237]: W0821 12:22:45.443051 4237 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/nginx-ingress-controller-6d57c87cb9-4dcvf through plugin: invalid network status for * * ==> storage-provisioner [423407b0c530] <== PS C:\WINDOWS\system32>
tstromberg commented 4 years ago

My apologies: you will need to upgrade to minikube v1.12.x to get the ambassador addon.
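In practice, the fix suggested above looks something like the sketch below. The upgrade command depends on how minikube was installed (the `choco upgrade` line assumes Chocolatey on Windows, which is not stated in this issue); the `version_ge` helper is only an illustrative way to check the installed version against the v1.12.0 minimum using `sort -V`.

```shell
# Sketch of the suggested fix (assumptions: Windows + Chocolatey; adjust the
# upgrade step for your install method).
#
# 1. Check the installed version -- the ambassador addon needs v1.12.x or newer:
#      minikube version
# 2. Upgrade, e.g. via Chocolatey:
#      choco upgrade minikube
# 3. Re-run the command that failed:
#      minikube addons enable ambassador

# Illustrative helper: succeeds (exit 0) when version $1 >= version $2,
# using GNU sort's version ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: a v1.13.1 binary is new enough for the addon.
version_ge "1.13.1" "1.12.0" && echo "ambassador addon should be available"
```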