kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

storage-provisioner pod fails to start #8217

Closed: arroadie closed this issue 4 years ago

arroadie commented 4 years ago

Minikube installed from the AUR or as the latest downloaded binary; both give the same result.

$ uname -a     
Linux SeaMonkey 5.6.12-arch1-1 #1 SMP PREEMPT Sun, 10 May 2020 10:43:42 +0000 x86_64 GNU/Linux

kubectl get pods -A shows that the storage-provisioner pod is in a crash loop.

Steps to reproduce the issue:

  1. minikube start --driver=docker
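
Not part of the original report, but a hedged sketch of the diagnostics one might run after step 1 to inspect the crash-looping pod. The pod name storage-provisioner and the kube-system namespace are minikube defaults; the guard makes the script safe on a host without kubectl:

```shell
#!/bin/sh
# Diagnostic sketch (assumption: default minikube pod/namespace names).
# Requires kubectl with access to the minikube cluster; degrades
# gracefully when kubectl is not installed.
if command -v kubectl >/dev/null 2>&1; then
  # current status and restart count of the pod
  kubectl -n kube-system get pod storage-provisioner || true
  # the Events section usually names the failure cause
  kubectl -n kube-system describe pod storage-provisioner || true
  # logs from the previous (crashed) container instance
  kubectl -n kube-system logs storage-provisioner --previous || true
  ran=cluster
else
  echo "kubectl not found; run these against the minikube cluster"
  ran=skipped
fi
```

Running minikube logs would collect much of the same information from inside the node.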

Full output of failed command:

# thiago @ SeaMonkey in ~/Projects/mycloud [21:27:58] 
$ minikube start --driver=docker --alsologtostderr             
I0519 21:28:13.185141  550291 start.go:99] hostinfo: {"hostname":"SeaMonkey","uptime":211237,"bootTime":1589737656,"procs":613,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"rolling","kernelVersion":"5.6.12-arch1-1","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"e716c7e5-0518-41eb-9aaa-5bd2952c3c38"}
I0519 21:28:13.185924  550291 start.go:109] virtualization: kvm host
* minikube v1.10.1 on Arch rolling
I0519 21:28:13.186136  550291 notify.go:125] Checking for updates...
I0519 21:28:13.186724  550291 driver.go:253] Setting default libvirt URI to qemu:///system
I0519 21:28:13.245124  550291 docker.go:95] docker version: linux-19.03.8-ce
* Using the docker driver based on existing profile
I0519 21:28:13.245200  550291 start.go:215] selected driver: docker
I0519 21:28:13.245207  550291 start.go:594] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[dashboard:false default-storageclass:false storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true]}
I0519 21:28:13.245288  550291 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0519 21:28:13.245300  550291 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
* Starting control plane node minikube in cluster minikube
I0519 21:28:13.245395  550291 cache.go:104] Beginning downloading kic artifacts for docker with docker
I0519 21:28:13.308191  550291 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0519 21:28:13.308227  550291 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0519 21:28:13.308275  550291 preload.go:96] Found local preload: /home/thiago/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0519 21:28:13.308286  550291 cache.go:48] Caching tarball of preloaded images
I0519 21:28:13.308301  550291 preload.go:122] Found /home/thiago/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0519 21:28:13.308310  550291 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0519 21:28:13.308446  550291 profile.go:156] Saving config to /home/thiago/.minikube/profiles/minikube/config.json ...
I0519 21:28:13.309320  550291 cache.go:132] Successfully downloaded all kic artifacts
I0519 21:28:13.309346  550291 start.go:223] acquiring machines lock for minikube: {Name:mkefbf764b909dfd76106b580d09c996962de154 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0519 21:28:13.309467  550291 start.go:227] acquired machines lock for "minikube" in 97.634µs
I0519 21:28:13.309486  550291 start.go:87] Skipping create...Using existing machine configuration
I0519 21:28:13.309494  550291 fix.go:53] fixHost starting: 
I0519 21:28:13.309760  550291 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0519 21:28:13.354868  550291 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0519 21:28:13.354920  550291 fix.go:131] unexpected machine state, will restart: <nil>
* Restarting existing docker container for "minikube" ...
I0519 21:28:13.355798  550291 cli_runner.go:108] Run: docker start minikube
I0519 21:28:13.711765  550291 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0519 21:28:13.763170  550291 kic.go:318] container "minikube" state is running.
I0519 21:28:13.764518  550291 machine.go:86] provisioning docker machine ...
I0519 21:28:13.764559  550291 ubuntu.go:166] provisioning hostname "minikube"
I0519 21:28:13.764642  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:13.821557  550291 main.go:110] libmachine: Using SSH client type: native
I0519 21:28:13.821813  550291 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32791 <nil> <nil>}
I0519 21:28:13.821844  550291 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0519 21:28:13.822289  550291 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58618->127.0.0.1:32791: read: connection reset by peer
I0519 21:28:16.963888  550291 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0519 21:28:16.964017  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:17.017646  550291 main.go:110] libmachine: Using SSH client type: native
I0519 21:28:17.017917  550291 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32791 <nil> <nil>}
I0519 21:28:17.017965  550291 main.go:110] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I0519 21:28:17.129094  550291 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0519 21:28:17.129136  550291 ubuntu.go:172] set auth options {CertDir:/home/thiago/.minikube CaCertPath:/home/thiago/.minikube/certs/ca.pem CaPrivateKeyPath:/home/thiago/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/thiago/.minikube/machines/server.pem ServerKeyPath:/home/thiago/.minikube/machines/server-key.pem ClientKeyPath:/home/thiago/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/thiago/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/thiago/.minikube}
I0519 21:28:17.129179  550291 ubuntu.go:174] setting up certificates
I0519 21:28:17.129196  550291 provision.go:82] configureAuth start
I0519 21:28:17.129275  550291 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0519 21:28:17.187179  550291 provision.go:131] copyHostCerts
I0519 21:28:17.187222  550291 exec_runner.go:91] found /home/thiago/.minikube/ca.pem, removing ...
I0519 21:28:17.187731  550291 exec_runner.go:98] cp: /home/thiago/.minikube/certs/ca.pem --> /home/thiago/.minikube/ca.pem (1034 bytes)
I0519 21:28:17.187797  550291 exec_runner.go:91] found /home/thiago/.minikube/cert.pem, removing ...
I0519 21:28:17.187919  550291 exec_runner.go:98] cp: /home/thiago/.minikube/certs/cert.pem --> /home/thiago/.minikube/cert.pem (1078 bytes)
I0519 21:28:17.187964  550291 exec_runner.go:91] found /home/thiago/.minikube/key.pem, removing ...
I0519 21:28:17.188233  550291 exec_runner.go:98] cp: /home/thiago/.minikube/certs/key.pem --> /home/thiago/.minikube/key.pem (1675 bytes)
I0519 21:28:17.188309  550291 provision.go:105] generating server cert: /home/thiago/.minikube/machines/server.pem ca-key=/home/thiago/.minikube/certs/ca.pem private-key=/home/thiago/.minikube/certs/ca-key.pem org=thiago.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0519 21:28:17.498745  550291 provision.go:159] copyRemoteCerts
I0519 21:28:17.498802  550291 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0519 21:28:17.498844  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:17.555416  550291 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/thiago/.minikube/machines/minikube/id_rsa Username:docker}
I0519 21:28:17.646180  550291 ssh_runner.go:215] scp /home/thiago/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1034 bytes)
I0519 21:28:17.668407  550291 ssh_runner.go:215] scp /home/thiago/.minikube/machines/server.pem --> /etc/docker/server.pem (1119 bytes)
I0519 21:28:17.694313  550291 ssh_runner.go:215] scp /home/thiago/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0519 21:28:17.721169  550291 provision.go:85] duration metric: configureAuth took 591.940755ms
I0519 21:28:17.721212  550291 ubuntu.go:190] setting minikube options for container-runtime
I0519 21:28:17.721540  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:17.778270  550291 main.go:110] libmachine: Using SSH client type: native
I0519 21:28:17.778567  550291 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32791 <nil> <nil>}
I0519 21:28:17.778598  550291 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0519 21:28:17.901127  550291 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0519 21:28:17.901177  550291 ubuntu.go:71] root file system type: overlay
I0519 21:28:17.901514  550291 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0519 21:28:17.901636  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:17.962222  550291 main.go:110] libmachine: Using SSH client type: native
I0519 21:28:17.962357  550291 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32791 <nil> <nil>}
I0519 21:28:17.962435  550291 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0519 21:28:18.102646  550291 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0519 21:28:18.102921  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:18.167590  550291 main.go:110] libmachine: Using SSH client type: native
I0519 21:28:18.167896  550291 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32791 <nil> <nil>}
I0519 21:28:18.167963  550291 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0519 21:28:18.303289  550291 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0519 21:28:18.303334  550291 machine.go:89] provisioned docker machine in 4.538793431s
I0519 21:28:18.303369  550291 start.go:186] post-start starting for "minikube" (driver="docker")
I0519 21:28:18.303386  550291 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0519 21:28:18.303517  550291 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0519 21:28:18.303615  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:18.360801  550291 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/thiago/.minikube/machines/minikube/id_rsa Username:docker}
I0519 21:28:18.455079  550291 ssh_runner.go:148] Run: cat /etc/os-release
I0519 21:28:18.458400  550291 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0519 21:28:18.458444  550291 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0519 21:28:18.458473  550291 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0519 21:28:18.458490  550291 info.go:96] Remote host: Ubuntu 19.10
I0519 21:28:18.458509  550291 filesync.go:118] Scanning /home/thiago/.minikube/addons for local assets ...
I0519 21:28:18.458609  550291 filesync.go:118] Scanning /home/thiago/.minikube/files for local assets ...
I0519 21:28:18.458653  550291 start.go:189] post-start completed in 155.2664ms
I0519 21:28:18.458672  550291 fix.go:55] fixHost completed within 5.149176824s
I0519 21:28:18.458687  550291 start.go:74] releasing machines lock for "minikube", held for 5.149205699s
I0519 21:28:18.458791  550291 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0519 21:28:18.510603  550291 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0519 21:28:18.510661  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:18.510660  550291 profile.go:156] Saving config to /home/thiago/.minikube/profiles/minikube/config.json ...
I0519 21:28:18.512016  550291 ssh_runner.go:148] Run: systemctl --version
I0519 21:28:18.512098  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:18.553100  550291 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/thiago/.minikube/machines/minikube/id_rsa Username:docker}
I0519 21:28:18.563231  550291 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/thiago/.minikube/machines/minikube/id_rsa Username:docker}
I0519 21:28:18.634321  550291 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0519 21:28:18.648078  550291 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0519 21:28:18.648173  550291 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0519 21:28:18.663622  550291 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0519 21:28:18.753661  550291 ssh_runner.go:148] Run: sudo systemctl start docker
I0519 21:28:18.762842  550291 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
* Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
I0519 21:28:18.818078  550291 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0519 21:28:18.855423  550291 cli_runner.go:108] Run: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" 8655f39f7585
I0519 21:28:18.895424  550291 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0519 21:28:18.895445  550291 start.go:251] checking
I0519 21:28:18.895517  550291 ssh_runner.go:148] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0519 21:28:18.898135  550291 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1  host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
  - kubeadm.pod-network-cidr=10.244.0.0/16
I0519 21:28:18.906171  550291 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0519 21:28:18.906220  550291 preload.go:96] Found local preload: /home/thiago/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0519 21:28:18.906319  550291 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0519 21:28:18.957160  550291 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0519 21:28:18.957189  550291 docker.go:317] Images already preloaded, skipping extraction
I0519 21:28:18.957251  550291 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0519 21:28:18.988704  550291 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0519 21:28:18.988739  550291 cache_images.go:69] Images are preloaded, skipping loading
I0519 21:28:18.988800  550291 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.2 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0519 21:28:18.988935  550291 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 172.17.0.2:10249

I0519 21:28:18.989101  550291 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0519 21:28:19.037514  550291 kubeadm.go:737] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0519 21:28:19.037585  550291 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.2
I0519 21:28:19.046290  550291 binaries.go:43] Found k8s binaries, skipping transfer
I0519 21:28:19.046403  550291 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0519 21:28:19.055671  550291 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1458 bytes)
I0519 21:28:19.079941  550291 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (532 bytes)
I0519 21:28:19.105771  550291 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0519 21:28:19.132099  550291 start.go:251] checking
I0519 21:28:19.132206  550291 ssh_runner.go:148] Run: grep 172.17.0.2   control-plane.minikube.internal$ /etc/hosts
I0519 21:28:19.135923  550291 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0519 21:28:19.147193  550291 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0519 21:28:19.213677  550291 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0519 21:28:19.228523  550291 certs.go:52] Setting up /home/thiago/.minikube/profiles/minikube for IP: 172.17.0.2
I0519 21:28:19.228601  550291 certs.go:169] skipping minikubeCA CA generation: /home/thiago/.minikube/ca.key
I0519 21:28:19.228635  550291 certs.go:169] skipping proxyClientCA CA generation: /home/thiago/.minikube/proxy-client-ca.key
I0519 21:28:19.228726  550291 certs.go:263] skipping minikube-user signed cert generation: /home/thiago/.minikube/profiles/minikube/client.key
I0519 21:28:19.228767  550291 certs.go:267] generating minikube signed cert: /home/thiago/.minikube/profiles/minikube/apiserver.key.7b749c5f
I0519 21:28:19.228784  550291 crypto.go:69] Generating cert /home/thiago/.minikube/profiles/minikube/apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0519 21:28:19.533717  550291 crypto.go:157] Writing cert to /home/thiago/.minikube/profiles/minikube/apiserver.crt.7b749c5f ...
I0519 21:28:19.533745  550291 lock.go:35] WriteFile acquiring /home/thiago/.minikube/profiles/minikube/apiserver.crt.7b749c5f: {Name:mk042a887f95fdc198fe5409069b9332753be43d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 21:28:19.534405  550291 crypto.go:165] Writing key to /home/thiago/.minikube/profiles/minikube/apiserver.key.7b749c5f ...
I0519 21:28:19.534417  550291 lock.go:35] WriteFile acquiring /home/thiago/.minikube/profiles/minikube/apiserver.key.7b749c5f: {Name:mk8f45533015b702b527331f3443373042c5fb91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 21:28:19.534565  550291 certs.go:278] copying /home/thiago/.minikube/profiles/minikube/apiserver.crt.7b749c5f -> /home/thiago/.minikube/profiles/minikube/apiserver.crt
I0519 21:28:19.534664  550291 certs.go:282] copying /home/thiago/.minikube/profiles/minikube/apiserver.key.7b749c5f -> /home/thiago/.minikube/profiles/minikube/apiserver.key
I0519 21:28:19.534730  550291 certs.go:263] skipping aggregator signed cert generation: /home/thiago/.minikube/profiles/minikube/proxy-client.key
I0519 21:28:19.534801  550291 certs.go:342] found cert: /home/thiago/.minikube/certs/home/thiago/.minikube/certs/ca-key.pem (1675 bytes)
I0519 21:28:19.534829  550291 certs.go:342] found cert: /home/thiago/.minikube/certs/home/thiago/.minikube/certs/ca.pem (1034 bytes)
I0519 21:28:19.534851  550291 certs.go:342] found cert: /home/thiago/.minikube/certs/home/thiago/.minikube/certs/cert.pem (1078 bytes)
I0519 21:28:19.534870  550291 certs.go:342] found cert: /home/thiago/.minikube/certs/home/thiago/.minikube/certs/key.pem (1675 bytes)
I0519 21:28:19.535499  550291 ssh_runner.go:215] scp /home/thiago/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0519 21:28:19.553430  550291 ssh_runner.go:215] scp /home/thiago/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0519 21:28:19.577583  550291 ssh_runner.go:215] scp /home/thiago/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0519 21:28:19.604826  550291 ssh_runner.go:215] scp /home/thiago/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0519 21:28:19.628636  550291 ssh_runner.go:215] scp /home/thiago/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0519 21:28:19.656280  550291 ssh_runner.go:215] scp /home/thiago/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0519 21:28:19.680714  550291 ssh_runner.go:215] scp /home/thiago/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0519 21:28:19.707708  550291 ssh_runner.go:215] scp /home/thiago/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0519 21:28:19.730022  550291 ssh_runner.go:215] scp /home/thiago/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0519 21:28:19.749646  550291 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0519 21:28:19.774961  550291 ssh_runner.go:148] Run: openssl version
I0519 21:28:19.782490  550291 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0519 21:28:19.794947  550291 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0519 21:28:19.799466  550291 certs.go:383] hashing: -rw-r--r-- 1 root root 1066 May 19 23:24 /usr/share/ca-certificates/minikubeCA.pem
I0519 21:28:19.799561  550291 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0519 21:28:19.807192  550291 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0519 21:28:19.815072  550291 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[dashboard:false default-storageclass:false storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true]}
I0519 21:28:19.815325  550291 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0519 21:28:19.876109  550291 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0519 21:28:19.882967  550291 kubeadm.go:304] found existing configuration files, will attempt cluster restart
I0519 21:28:19.882997  550291 kubeadm.go:488] restartCluster start
I0519 21:28:19.883078  550291 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0519 21:28:19.892714  550291 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
W0519 21:28:19.895813  550291 kubeadm.go:347] Overriding stale ClientConfig host https://172.17.0.3:8443 with https://172.17.0.2:8443
I0519 21:28:19.898703  550291 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0519 21:28:19.907633  550291 kubeadm.go:456] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml  2020-05-20 04:13:25.416811075 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new  1901-12-13 20:45:52.000000000 +0000
@@ -1,7 +1,7 @@
 apiVersion: kubeadm.k8s.io/v1beta2
 kind: InitConfiguration
 localAPIEndpoint:
-  advertiseAddress: 172.17.0.3
+  advertiseAddress: 172.17.0.2
   bindPort: 8443
 bootstrapTokens:
   - groups:
@@ -14,13 +14,13 @@
   criSocket: /var/run/dockershim.sock
   name: "minikube"
   kubeletExtraArgs:
-    node-ip: 172.17.0.3
+    node-ip: 172.17.0.2
   taints: []
 ---
 apiVersion: kubeadm.k8s.io/v1beta2
 kind: ClusterConfiguration
 apiServer:
-  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
+  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
   extraArgs:
     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
 certificatesDir: /var/lib/minikube/certs
@@ -49,4 +49,4 @@
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
 kind: KubeProxyConfiguration
 clusterCIDR: "10.244.0.0/16"
-metricsBindAddress: 172.17.0.3:10249
+metricsBindAddress: 172.17.0.2:10249

-- /stdout --
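The diff above is how minikube decides whether a restart needs a reconfigure: it renders a fresh kubeadm.yaml and runs `diff -u` against the copy already on disk (here they differ because the container's IP moved from 172.17.0.3 to 172.17.0.2). A minimal sketch of that check, not minikube's actual Go code; the file contents are stand-ins:

```shell
# Render "old" and "new" configs into temp files and compare them the same
# way the log shows: `sudo diff -u kubeadm.yaml kubeadm.yaml.new`.
old=$(mktemp) && new=$(mktemp)
printf 'advertiseAddress: 172.17.0.3\n' > "$old"   # config from the previous run
printf 'advertiseAddress: 172.17.0.2\n' > "$new"   # freshly generated config
if diff -u "$old" "$new"; then
  result="cluster restart without reconfigure"
else
  result="needs reconfigure: configs differ"    # this branch matches the log above
fi
echo "$result"
rm -f "$old" "$new"
```

Because the advertise address changed, the diff is non-empty and minikube re-runs the `kubeadm init phase` steps that follow in the log.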
I0519 21:28:19.907700  550291 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0519 21:28:19.918859  550291 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5495 May 20 04:13 /etc/kubernetes/admin.conf
-rw------- 1 root root 5531 May 20 04:13 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1911 May 20 04:13 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5475 May 20 04:13 /etc/kubernetes/scheduler.conf

I0519 21:28:19.918910  550291 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0519 21:28:19.927087  550291 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0519 21:28:19.937531  550291 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0519 21:28:19.946280  550291 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0519 21:28:19.955597  550291 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0519 21:28:19.966539  550291 kubeadm.go:549] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0519 21:28:19.966573  550291 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0519 21:28:20.042741  550291 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0519 21:28:20.980555  550291 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0519 21:28:21.072871  550291 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0519 21:28:21.149750  550291 api_server.go:47] waiting for apiserver process to appear ...
I0519 21:28:21.149851  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:21.661017  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:22.161035  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:22.661010  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:23.161050  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:23.661015  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:24.161038  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:24.661038  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:25.161033  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:25.661021  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:26.161063  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:26.660967  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:27.160946  550291 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0519 21:28:27.174306  550291 api_server.go:67] duration metric: took 6.024560045s to wait for apiserver process to appear ...
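The repeated `pgrep` lines above are a fixed-interval poll: minikube greps for `kube-apiserver.*minikube.*` every ~500ms until the process appears (about 6s here). A hedged sketch of that wait loop, using a background `sleep` as a stand-in target so it runs anywhere:

```shell
# Poll pgrep on an interval until the target process shows up or a deadline
# (20 polls x 0.5s = 10s) passes. The real loop targets kube-apiserver.
sleep 5 &          # stand-in for the process being waited on
pid=$!
found=no
i=0
while [ "$i" -lt 20 ]; do
  if pgrep -x sleep > /dev/null; then found=yes; break; fi
  i=$((i + 1))
  sleep 0.5
done
kill "$pid" 2>/dev/null   # clean up the stand-in process
echo "process found: $found"
```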
I0519 21:28:27.174328  550291 api_server.go:83] waiting for apiserver healthz status ...
I0519 21:28:27.174343  550291 api_server.go:193] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0519 21:28:31.996162  550291 api_server.go:213] https://172.17.0.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
W0519 21:28:31.996220  550291 api_server.go:98] status: https://172.17.0.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
I0519 21:28:32.496352  550291 api_server.go:193] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0519 21:28:32.500947  550291 api_server.go:213] https://172.17.0.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0519 21:28:32.500986  550291 api_server.go:98] status: https://172.17.0.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0519 21:28:32.996424  550291 api_server.go:193] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0519 21:28:33.005373  550291 api_server.go:213] https://172.17.0.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0519 21:28:33.005442  550291 api_server.go:98] status: https://172.17.0.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0519 21:28:33.496395  550291 api_server.go:193] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0519 21:28:33.501891  550291 api_server.go:213] https://172.17.0.2:8443/healthz returned 200:
ok
I0519 21:28:33.511453  550291 api_server.go:136] control plane version: v1.18.2
I0519 21:28:33.511489  550291 api_server.go:126] duration metric: took 6.337144607s to wait for apiserver health ...
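The healthz exchange above follows the same pattern: 403 and 500 responses are treated as "not ready yet" and the check is retried every ~500ms until `/healthz` returns 200. A mock of that retry loop (the real check GETs `https://<node-ip>:8443/healthz`; here a fake probe becomes healthy on the third poll so the sketch is runnable without a cluster):

```shell
# Retry a health probe until it succeeds, mirroring the 403 -> 500 -> 200
# progression in the log. probe_healthz is a stand-in, not a real API call.
attempts=0
probe_healthz() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]          # "healthy" from the third call onward
}
until probe_healthz; do
  sleep 0.1                      # the real loop waits about 500ms per poll
done
echo "healthz ok after $attempts polls"
```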
I0519 21:28:33.511513  550291 system_pods.go:43] waiting for kube-system pods to appear ...
I0519 21:28:33.531108  550291 system_pods.go:61] 6 kube-system pods found
I0519 21:28:33.531157  550291 system_pods.go:63] "coredns-66bff467f8-2tsns" [42514198-da8d-456d-987c-751fd0ad02b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0519 21:28:33.531183  550291 system_pods.go:63] "coredns-66bff467f8-btbx2" [c9671cc0-fb3e-41d0-8964-00b2b2cbeb39] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0519 21:28:33.531201  550291 system_pods.go:63] "kube-controller-manager-minikube" [b45c13e7-382d-4a7c-bb2f-1bf908e362dd] Running
I0519 21:28:33.531219  550291 system_pods.go:63] "kube-proxy-c8zvd" [599dc9a5-c86a-4a10-adc9-c39cf3bc7eee] Running
I0519 21:28:33.531232  550291 system_pods.go:63] "kube-scheduler-minikube" [48c3ee61-b76c-4e30-8fcd-eb796e630803] Running
I0519 21:28:33.531244  550291 system_pods.go:63] "storage-provisioner" [1badcdfe-ac4e-4368-b40e-dabdd4b22615] Running
I0519 21:28:33.531255  550291 system_pods.go:74] duration metric: took 19.724465ms to wait for pod list to return data ...
I0519 21:28:33.531268  550291 node_conditions.go:99] verifying NodePressure condition ...
I0519 21:28:33.536352  550291 node_conditions.go:111] node storage ephemeral capacity is 51343840Ki
I0519 21:28:33.536395  550291 node_conditions.go:112] node cpu capacity is 16
I0519 21:28:33.536422  550291 node_conditions.go:102] duration metric: took 5.143357ms to run NodePressure ...
I0519 21:28:33.536463  550291 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0519 21:28:33.810658  550291 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0519 21:28:33.815197  550291 ops.go:35] apiserver oom_adj: -16
I0519 21:28:33.815216  550291 kubeadm.go:492] restartCluster took 13.93220394s
I0519 21:28:33.815229  550291 kubeadm.go:295] StartCluster complete in 14.000164935s
I0519 21:28:33.815247  550291 settings.go:123] acquiring lock: {Name:mk40198965f06790c39ae268bcd69545fe939fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 21:28:33.815322  550291 settings.go:131] Updating kubeconfig:  /home/thiago/.kube/config
I0519 21:28:33.817109  550291 lock.go:35] WriteFile acquiring /home/thiago/.kube/config: {Name:mkc7200b3def77479c11b78c0f61316033eeaf50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0519 21:28:33.817296  550291 addons.go:320] enableAddons start: toEnable=map[dashboard:false default-storageclass:false storage-provisioner:true], additional=[]
I0519 21:28:33.817346  550291 addons.go:50] Setting storage-provisioner=true in profile "minikube"
I0519 21:28:33.817365  550291 addons.go:126] Setting addon storage-provisioner=true in "minikube"
W0519 21:28:33.817373  550291 addons.go:135] addon storage-provisioner should already be in state true
I0519 21:28:33.817387  550291 host.go:65] Checking if "minikube" exists ...
I0519 21:28:33.817764  550291 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0519 21:28:33.853767  550291 addons.go:233] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0519 21:28:33.853788  550291 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0519 21:28:33.853841  550291 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0519 21:28:33.890295  550291 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/thiago/.minikube/machines/minikube/id_rsa Username:docker}
I0519 21:28:33.977072  550291 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
* Enabled addons: storage-provisioner
I0519 21:28:34.242959  550291 addons.go:322] enableAddons completed in 425.670004ms
* Done! kubectl is now configured to use "minikube"
I0519 21:28:34.463876  550291 start.go:378] kubectl: 1.18.2, cluster: 1.18.2 (minor skew: 0)

Optional: Full output of minikube logs command:

# thiago @ SeaMonkey in ~/Projects/mycloud [21:32:52] $ minikube logs * ==> Docker <== * -- Logs begin at Wed 2020-05-20 04:28:14 UTC, end at Wed 2020-05-20 04:32:56 UTC. -- * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.179631611Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.179645717Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.179656828Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.179700871Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000843460, CONNECTING" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.179708204Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.180256494Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000843460, READY" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182472364Z" level=info msg="parsed scheme: \"unix\"" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182494516Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182528159Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182546052Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182611726Z" level=info 
msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000843a20, CONNECTING" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182635961Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.182892613Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000843a20, READY" module=grpc * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.185376868Z" level=info msg="[graphdriver] using prior storage driver: overlay2" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.195247491Z" level=warning msg="Your kernel does not support cgroup rt period" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.195270304Z" level=warning msg="Your kernel does not support cgroup rt runtime" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.195276916Z" level=warning msg="Your kernel does not support cgroup blkio weight" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.195282587Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.195383386Z" level=info msg="Loading containers: start." 
* May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.284120009Z" level=warning msg="Running modprobe nf_nat failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.6.12-arch1-1/modules.dep.bin'\nmodprobe: WARNING: Module nf_nat not found in directory /lib/modules/5.6.12-arch1-1`, error: exit status 1" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.286427982Z" level=warning msg="Running modprobe xt_conntrack failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.6.12-arch1-1/modules.dep.bin'\nmodprobe: WARNING: Module xt_conntrack not found in directory /lib/modules/5.6.12-arch1-1`, error: exit status 1" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.417683039Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.487319869Z" level=info msg="Loading containers: done." * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.504332232Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.504710432Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2 * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.504836509Z" level=info msg="Daemon has completed initialization" * May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.530254053Z" level=info msg="API listen on [::]:2376" * May 20 04:28:14 minikube systemd[1]: Started Docker Application Container Engine. 
* May 20 04:28:14 minikube dockerd[111]: time="2020-05-20T04:28:14.530270394Z" level=info msg="API listen on /var/run/docker.sock" * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.606798585Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a333d99a06a7440fe38198ca7c4d1449f9c02a4bdb67e046751add35e1f8c4d7.sock debug=false pid=1360 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.628180582Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8b8bbd549a638424bbd52a98edff6e34895d3007c5ab10f258394833420d5a11.sock debug=false pid=1376 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.637740321Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8b8ba1df0e677efc06aac42c839b16e1b9c3710c030c4ebba154d90f1b54d06d.sock debug=false pid=1400 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.638016971Z" level=info msg="shim containerd-shim started" address=/containerd-shim/effdda764d3e5c855891cab9b6980a5a87fe91a2139dc39b3fe54b189c2d6b5c.sock debug=false pid=1401 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.822670508Z" level=info msg="shim containerd-shim started" address=/containerd-shim/9fd9d92827ee31a441aac1e0357446616b08a6f4e4822dee4697fc3e15533432.sock debug=false pid=1507 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.828994192Z" level=info msg="shim containerd-shim started" address=/containerd-shim/751427273b3ac778feae935653ade63f962661d0007f1d61e987e180315174d3.sock debug=false pid=1521 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.836151881Z" level=info msg="shim containerd-shim started" address=/containerd-shim/dedae3463ec268115b70422c925a7ee56d132a4acc9ad1bcb71149079795860b.sock debug=false pid=1539 * May 20 04:28:26 minikube dockerd[111]: time="2020-05-20T04:28:26.878706262Z" level=info msg="shim containerd-shim started" 
address=/containerd-shim/704c787c3f466b8e415ca2481da292ec0d0d4cb1f22b12ef1cf78e684c3746da.sock debug=false pid=1555 * May 20 04:28:32 minikube dockerd[111]: time="2020-05-20T04:28:32.701250339Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bd1ac53e6b253b240b3d49162ce0f8ca826a6295d62f55318777e412cd5a6590.sock debug=false pid=2255 * May 20 04:28:32 minikube dockerd[111]: time="2020-05-20T04:28:32.713203632Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a3c0a3e5309cdcd27a59387565017413329ead3c2d48cf1d2c422b6253526b75.sock debug=false pid=2271 * May 20 04:28:33 minikube dockerd[111]: time="2020-05-20T04:28:33.193543564Z" level=info msg="shim containerd-shim started" address=/containerd-shim/70d9acc5f2f38f5b8d19722c22c329493c94945190fe0eeebe4c8f752127e875.sock debug=false pid=2358 * May 20 04:28:33 minikube dockerd[111]: time="2020-05-20T04:28:33.217489165Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ac6951a7cc9ba3d52e336438bb16fbf0aedf58cc24f1663fd18028fd3efc3ab9.sock debug=false pid=2375 * May 20 04:28:33 minikube dockerd[111]: time="2020-05-20T04:28:33.293140960Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c865b8117dba38645ec0c7a960c939ab1fd60209b0efb775b59414f2e20b78c8.sock debug=false pid=2410 * May 20 04:28:33 minikube dockerd[111]: time="2020-05-20T04:28:33.548104132Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c5f9b2d412df53672b5cea780127a1766a961f606ec9564a12d962c0b669a134.sock debug=false pid=2549 * May 20 04:28:33 minikube dockerd[111]: time="2020-05-20T04:28:33.589995678Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4a5123e78ae397de41d524f506e025c83d5ae1dafc9538fbeb12c17e2835ade4.sock debug=false pid=2584 * May 20 04:28:33 minikube dockerd[111]: time="2020-05-20T04:28:33.779036783Z" level=info msg="shim containerd-shim started" 
address=/containerd-shim/54c9ba085b4b5ec5a265fdd2b1e832e3b776e43e7e776029ca9672ff7c0cb5ec.sock debug=false pid=2660 * May 20 04:29:04 minikube dockerd[111]: time="2020-05-20T04:29:04.056606060Z" level=info msg="shim reaped" id=e43f5d943ed9c2df91e7be80ada9edf25576944c4f4677ed184a985fdcbab666 * May 20 04:29:04 minikube dockerd[111]: time="2020-05-20T04:29:04.068165395Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * May 20 04:29:04 minikube dockerd[111]: time="2020-05-20T04:29:04.068343569Z" level=warning msg="e43f5d943ed9c2df91e7be80ada9edf25576944c4f4677ed184a985fdcbab666 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e43f5d943ed9c2df91e7be80ada9edf25576944c4f4677ed184a985fdcbab666/mounts/shm, flags: 0x2: no such file or directory" * May 20 04:29:17 minikube dockerd[111]: time="2020-05-20T04:29:17.792260354Z" level=info msg="shim containerd-shim started" address=/containerd-shim/60fe6e6dd15ee0d12767995f60bfd018236bdc852ff65e2aa3bad3a0a12c2c05.sock debug=false pid=3122 * May 20 04:29:48 minikube dockerd[111]: time="2020-05-20T04:29:48.060156154Z" level=info msg="shim reaped" id=ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269 * May 20 04:29:48 minikube dockerd[111]: time="2020-05-20T04:29:48.071621622Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * May 20 04:29:48 minikube dockerd[111]: time="2020-05-20T04:29:48.071782574Z" level=warning msg="ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269/mounts/shm, flags: 0x2: no such file or directory" * May 20 04:30:13 minikube dockerd[111]: time="2020-05-20T04:30:13.799673698Z" level=info msg="shim containerd-shim started" 
address=/containerd-shim/65a8283bcb1f03ef81022fc5ccc7e2750a1152314d83c6778fa2501fa88625c5.sock debug=false pid=3423 * May 20 04:30:44 minikube dockerd[111]: time="2020-05-20T04:30:44.024408797Z" level=info msg="shim reaped" id=98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 * May 20 04:30:44 minikube dockerd[111]: time="2020-05-20T04:30:44.035653590Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * May 20 04:30:44 minikube dockerd[111]: time="2020-05-20T04:30:44.035711940Z" level=warning msg="98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00/mounts/shm, flags: 0x2: no such file or directory" * May 20 04:31:24 minikube dockerd[111]: time="2020-05-20T04:31:24.821013481Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f8c91097e248ac6e7bb1c80f6cd04a58eb94e61fda93b6e76b8da0e09fcdde6e.sock debug=false pid=3772 * May 20 04:31:55 minikube dockerd[111]: time="2020-05-20T04:31:55.081349956Z" level=info msg="shim reaped" id=7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb * May 20 04:31:55 minikube dockerd[111]: time="2020-05-20T04:31:55.092057921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * May 20 04:31:55 minikube dockerd[111]: time="2020-05-20T04:31:55.092198004Z" level=warning msg="7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb/mounts/shm, flags: 0x2: no such file or directory" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID * 7d010a3bfea3f 4689081edb103 About a minute ago Exited storage-provisioner 8 153149c2ccc2f * bbf03c75c53ec 0d40868643c69 4 minutes 
ago Running kube-proxy 1 4037c83104eea
* 5714f14f1c18f 67da37a9a360e 4 minutes ago Running coredns 1 edc63fd01a962
* f0f301082962e 67da37a9a360e 4 minutes ago Running coredns 1 a7dcd91547966
* df4677503ce2c 303ce5db0e90d 4 minutes ago Running etcd 0 8fe158a9cbbce
* 3ee154244bf17 ace0a8c17ba90 4 minutes ago Running kube-controller-manager 1 c2ff20a0d5577
* 24f29814e3e86 a3099161e1375 4 minutes ago Running kube-scheduler 1 33ef826c386e4
* 2772f112cd16b 6ed75ad404bdd 4 minutes ago Running kube-apiserver 0 20130b1d0ffd6
* 552fe6e2fdc80 67da37a9a360e 18 minutes ago Exited coredns 0 4c72db1606711
* e4dad40ccc8e6 67da37a9a360e 18 minutes ago Exited coredns 0 6a4d3e6a68c23
* 8409b7473f0ca 0d40868643c69 18 minutes ago Exited kube-proxy 0 9a8f044951e04
* 19bb303580521 ace0a8c17ba90 19 minutes ago Exited kube-controller-manager 0 8fc23571e6b61
* 53cf51b59b66b a3099161e1375 19 minutes ago Exited kube-scheduler 0 326ab633ca899
*
* ==> coredns [552fe6e2fdc8] <==
* E0520 04:25:24.743681 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:25:55.742122 1 trace.go:116] Trace[1465987202]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:25.741663487 +0000 UTC m=+682.027777131) (total time: 30.00042387s):
* Trace[1465987202]: [30.00042387s] [30.00042387s] END
* E0520 04:25:55.742146 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:25:55.743164 1 trace.go:116] Trace[2102234783]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:25.742733857 +0000 UTC m=+682.028847501) (total time: 30.000400165s):
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* Trace[2102234783]: [30.000400165s] [30.000400165s] END
* E0520 04:25:55.743183 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:25:55.744017 1 trace.go:116] Trace[1980435746]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:25.743786894 +0000 UTC m=+682.029900508) (total time: 30.000214797s):
* Trace[1980435746]: [30.000214797s] [30.000214797s] END
* E0520 04:25:55.744026 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:26:26.743217 1 trace.go:116] Trace[195071563]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:56.742270293 +0000 UTC m=+713.028383947) (total time: 30.000902663s):
* Trace[195071563]: [30.000902663s] [30.000902663s] END
* E0520 04:26:26.743238 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:26:26.744088 1 trace.go:116] Trace[1059014376]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:56.743608445 +0000 UTC m=+713.029722109) (total time: 30.000452419s):
* Trace[1059014376]: [30.000452419s] [30.000452419s] END
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* E0520 04:26:26.744103 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:26:26.744731 1 trace.go:116] Trace[1990689002]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:56.744346921 +0000 UTC m=+713.030460676) (total time: 30.000351389s):
* Trace[1990689002]: [30.000351389s] [30.000351389s] END
* E0520 04:26:26.744746 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:26:57.743845 1 trace.go:116] Trace[2050729718]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: …) (total time: 30.000431856s):
* Trace[2050729718]: [30.000431856s] [30.000431856s] END
* E0520 04:26:57.743877 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] SIGTERM: Shutting down servers then terminating
* [INFO] plugin/health: Going into lameduck mode for 5s
* I0520 04:26:57.744798 1 trace.go:116] Trace[747225447]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:27.744401375 +0000 UTC m=+744.030515059) (total time: 30.00035933s):
* Trace[747225447]: [30.00035933s] [30.00035933s] END
* E0520 04:26:57.744816 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:26:57.745816 1 trace.go:116] Trace[1483565094]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:27.745497042 +0000 UTC m=+744.031610636) (total time: 30.000285361s):
* Trace[1483565094]: [30.000285361s] [30.000285361s] END
* E0520 04:26:57.745834 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:27:28.744352 1 trace.go:116] Trace[1526661577]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:58.74401147 +0000 UTC m=+775.030125095) (total time: 30.000318018s):
* Trace[1526661577]:
[30.000318018s] [30.000318018s] END * E0520 04:27:28.744368 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:28.745345 1 trace.go:116] Trace[1210707463]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:58.745026987 +0000 UTC m=+775.031140621) (total time: 30.000302079s): * Trace[1210707463]: [30.000302079s] [30.000302079s] END * E0520 04:27:28.745356 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:28.746474 1 trace.go:116] Trace[1394767996]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:58.74608856 +0000 UTC m=+775.032202204) (total time: 30.000351312s): * Trace[1394767996]: [30.000351312s] [30.000351312s] END * E0520 04:27:28.746495 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:52.277969 1 trace.go:116] Trace[1184906420]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:27:29.746605926 +0000 UTC m=+806.032719570) (total time: 22.531300578s): * Trace[1184906420]: [22.531300578s] [22.531300578s] END * I0520 04:27:52.278022 1 trace.go:116] Trace[1623118623]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:27:29.745467368 +0000 UTC m=+806.031581002) (total time: 22.532540165s): * Trace[1623118623]: [22.532540165s] [22.532540165s] END * I0520 
04:27:52.278061 1 trace.go:116] Trace[867160953]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:27:29.744463934 +0000 UTC m=+806.030577568) (total time: 22.533583434s):
* Trace[867160953]: [22.533583434s] [22.533583434s] END
*
* ==> coredns [5714f14f1c18] <==
* I0520 04:30:36.384569 1 trace.go:116] Trace[1106410694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:06.384086397 +0000 UTC m=+93.015749930) (total time: 30.000437355s):
* Trace[1106410694]: [30.000437355s] [30.000437355s] END
* E0520 04:30:36.384596 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:30:36.385513 1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:06.385138282 +0000 UTC m=+93.016801765) (total time: 30.000342107s):
* Trace[1747278511]: [30.000342107s] [30.000342107s] END
* E0520 04:30:36.385530 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:30:36.386641 1 trace.go:116] Trace[460128162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:06.386258796 +0000 UTC m=+93.017922359) (total time: 30.000320606s):
* Trace[460128162]: [30.000320606s] [30.000320606s] END
* E0520 04:30:36.386663 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:31:07.385189 1 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:37.384700676 +0000 UTC m=+124.016364229) (total time: 30.000449232s):
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* Trace[817455089]: [30.000449232s] [30.000449232s] END
* E0520 04:31:07.385212 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:31:07.386156 1 trace.go:116] Trace[683024728]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:37.385792786 +0000 UTC m=+124.017456299) (total time: 30.000334126s):
* Trace[683024728]: [30.000334126s] [30.000334126s] END
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* E0520 04:31:07.386174 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:31:07.387221 1 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch"
name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:37.386788516 +0000 UTC m=+124.018452109) (total time: 30.000409718s): * Trace[1006933274]: [30.000409718s] [30.000409718s] END * E0520 04:31:07.387237 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:31:38.385682 1 trace.go:116] Trace[607811211]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:08.38531574 +0000 UTC m=+155.016979303) (total time: 30.000327022s): * Trace[607811211]: [30.000327022s] [30.000327022s] END * E0520 04:31:38.385703 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:31:38.386782 1 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:08.386376682 +0000 UTC m=+155.018040195) (total time: 30.00037398s): * Trace[629431445]: [30.00037398s] [30.00037398s] END * E0520 04:31:38.386803 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:31:38.387801 1 trace.go:116] Trace[1458323237]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:08.387466468 +0000 UTC m=+155.019129982) (total time: 30.000308216s): * Trace[1458323237]: [30.000308216s] [30.000308216s] END * E0520 04:31:38.387815 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get 
https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:09.386221 1 trace.go:116] Trace[469339106]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:39.385814235 +0000 UTC m=+186.017477748) (total time: 30.000368285s): * Trace[469339106]: [30.000368285s] [30.000368285s] END * E0520 04:32:09.386245 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:09.387302 1 trace.go:116] Trace[436340495]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:39.386923107 +0000 UTC m=+186.018586640) (total time: 30.000353131s): * Trace[436340495]: [30.000353131s] [30.000353131s] END * E0520 04:32:09.387319 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:09.388366 1 trace.go:116] Trace[774965466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:39.387938203 +0000 UTC m=+186.019601736) (total time: 30.000392599s): * Trace[774965466]: [30.000392599s] [30.000392599s] END * E0520 04:32:09.388384 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:40.386843 1 trace.go:116] Trace[1225511528]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:32:10.386334651 +0000 UTC m=+217.017998174) (total time: 30.000460397s): * 
Trace[1225511528]: [30.000460397s] [30.000460397s] END * E0520 04:32:40.386875 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:40.387782 1 trace.go:116] Trace[1852186258]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:32:10.387405196 +0000 UTC m=+217.019068759) (total time: 30.000344157s): * Trace[1852186258]: [30.000344157s] [30.000344157s] END * E0520 04:32:40.387812 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:40.388885 1 trace.go:116] Trace[629458047]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:32:10.388520784 +0000 UTC m=+217.020184298) (total time: 30.000332364s): * Trace[629458047]: [30.000332364s] [30.000332364s] END * E0520 04:32:40.388905 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * * ==> coredns [e4dad40ccc8e] <== * E0520 04:25:24.725562 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * I0520 04:25:55.723637 1 trace.go:116] Trace[1465987202]: "Reflector ListAndWatch" 
name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:25.723177297 +0000 UTC m=+682.025279341) (total time: 30.000437606s):
* Trace[1465987202]: [30.000437606s] [30.000437606s] END
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* E0520 04:25:55.723649 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:25:55.725001 1 trace.go:116] Trace[2102234783]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:25.724582977 +0000 UTC m=+682.026684991) (total time: 30.000380307s):
* Trace[2102234783]: [30.000380307s] [30.000380307s] END
* E0520 04:25:55.725021 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:25:55.726080 1 trace.go:116] Trace[1980435746]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:25.725690476 +0000 UTC m=+682.027792540) (total time: 30.000358767s):
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* Trace[1980435746]: [30.000358767s] [30.000358767s] END
* E0520 04:25:55.726097 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:26:26.724156 1 trace.go:116] Trace[195071563]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:56.72375595 +0000 UTC m=+713.025857974) (total time: 30.000371727s):
* Trace[195071563]: [30.000371727s] [30.000371727s] END
* E0520 04:26:26.724174 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:26:26.725427 1 trace.go:116] Trace[1059014376]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:56.72513034 +0000 UTC m=+713.027232335) (total time: 30.000279884s):
* Trace[1059014376]: [30.000279884s] [30.000279884s] END
* E0520 04:26:26.725437 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] SIGTERM: Shutting down servers then terminating
* [INFO] plugin/health: Going into lameduck mode for 5s
* I0520 04:26:26.726596 1 trace.go:116] Trace[1990689002]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:25:56.726202554 +0000 UTC m=+713.028304518) (total time: 30.000359243s):
* Trace[1990689002]: [30.000359243s] [30.000359243s] END
* E0520 04:26:26.726618 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:26:57.724972 1 trace.go:116] Trace[2050729718]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:27.724311834 +0000 UTC m=+744.026413989) (total time: 30.000604881s):
* Trace[2050729718]: [30.000604881s] [30.000604881s] END
* E0520 04:26:57.725010 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:26:57.726009 1 trace.go:116] Trace[747225447]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:27.725528178 +0000 UTC m=+744.027630163) (total time: 30.000442636s):
* Trace[747225447]: [30.000442636s] [30.000442636s] END
* E0520 04:26:57.726033 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:26:57.727247 1 trace.go:116] Trace[1483565094]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:27.726758719 +0000 UTC m=+744.028860683)
(total time: 30.000432117s): * Trace[1483565094]: [30.000432117s] [30.000432117s] END * E0520 04:26:57.727284 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:28.725551 1 trace.go:116] Trace[1526661577]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:58.725120451 +0000 UTC m=+775.027222445) (total time: 30.000395564s): * Trace[1526661577]: [30.000395564s] [30.000395564s] END * E0520 04:27:28.725577 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:28.726658 1 trace.go:116] Trace[1210707463]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:58.726189358 +0000 UTC m=+775.028291412) (total time: 30.000432824s): * Trace[1210707463]: [30.000432824s] [30.000432824s] END * E0520 04:27:28.726683 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:28.727784 1 trace.go:116] Trace[1394767996]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:26:58.727432892 +0000 UTC m=+775.029534957) (total time: 30.000325652s): * Trace[1394767996]: [30.000325652s] [30.000325652s] END * E0520 04:27:28.727800 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:27:52.281968 
1 trace.go:116] Trace[1623118623]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:27:29.727901365 +0000 UTC m=+806.030003350) (total time: 22.553997544s):
* Trace[1623118623]: [22.553997544s] [22.553997544s] END
* I0520 04:27:52.281968 1 trace.go:116] Trace[1184906420]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:27:29.726797423 +0000 UTC m=+806.028899467) (total time: 22.555100144s):
* Trace[1184906420]: [22.555100144s] [22.555100144s] END
* I0520 04:27:52.281976 1 trace.go:116] Trace[867160953]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:27:29.725697057 +0000 UTC m=+806.027799111) (total time: 22.556204338s):
* Trace[867160953]: [22.556204338s] [22.556204338s] END
*
* ==> coredns [f0f301082962] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:30:36.380123 1 trace.go:116] Trace[1106410694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:06.379674213 +0000 UTC m=+93.083679165) (total time: 30.000401799s):
* Trace[1106410694]: [30.000401799s] [30.000401799s] END
* E0520 04:30:36.380150 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:30:36.381063 1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:06.380668 +0000 UTC m=+93.084704750) (total time: 30.000337098s):
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* Trace[1747278511]: [30.000337098s] [30.000337098s] END
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* E0520 04:30:36.381080 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0520 04:30:36.382100 1 trace.go:116] Trace[460128162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:06.381752976 +0000 UTC m=+93.085757918) (total time: 30.000313202s):
* Trace[460128162]: [30.000313202s] [30.000313202s] END
* E0520 04:30:36.382118 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:31:07.380800 1 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:37.380269676 +0000 UTC m=+124.084274628) (total time: 30.000488656s):
* Trace[817455089]: [30.000488656s] [30.000488656s] END
* E0520 04:31:07.380827 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:31:07.381638 1 trace.go:116] Trace[683024728]: "Reflector ListAndWatch"
name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:37.381325338 +0000 UTC m=+124.085330230) (total time: 30.000289272s): * Trace[683024728]: [30.000289272s] [30.000289272s] END * E0520 04:31:07.381651 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:31:07.382873 1 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:30:37.382441073 +0000 UTC m=+124.086446015) (total time: 30.000388668s): * Trace[1006933274]: [30.000388668s] [30.000388668s] END * E0520 04:31:07.382894 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:31:38.381393 1 trace.go:116] Trace[607811211]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:08.380940766 +0000 UTC m=+155.084945728) (total time: 30.000408204s): * Trace[607811211]: [30.000408204s] [30.000408204s] END * E0520 04:31:38.381419 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:31:38.382400 1 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:08.382021245 +0000 UTC m=+155.086026187) (total time: 30.000335286s): * Trace[629431445]: [30.000335286s] [30.000335286s] END * E0520 04:31:38.382420 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: 
Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:31:38.383431 1 trace.go:116] Trace[1458323237]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:08.383048463 +0000 UTC m=+155.087053456) (total time: 30.000357588s):
* Trace[1458323237]: [30.000357588s] [30.000357588s] END
* E0520 04:31:38.383448 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:32:09.382663 1 trace.go:116] Trace[469339106]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:39.381546472 +0000 UTC m=+186.085551374) (total time: 30.000446456s):
* Trace[469339106]: [30.000446456s] [30.000446456s] END
* E0520 04:32:09.382968 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:32:09.383480 1 trace.go:116] Trace[436340495]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:31:39.38256846 +0000 UTC m=+186.086573413) (total time: 30.000640916s):
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* Trace[436340495]: [30.000640916s] [30.000640916s] END
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* E0520 04:32:09.383500 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0520 04:32:09.383948 1 trace.go:116] Trace[774965466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started:
2020-05-20 04:31:39.383609595 +0000 UTC m=+186.087614487) (total time: 30.00031133s): * Trace[774965466]: [30.00031133s] [30.00031133s] END * E0520 04:32:09.383964 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:40.383689 1 trace.go:116] Trace[1225511528]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:32:10.383214742 +0000 UTC m=+217.087219644) (total time: 30.000427545s): * Trace[1225511528]: [30.000427545s] [30.000427545s] END * E0520 04:32:40.383716 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:40.384560 1 trace.go:116] Trace[1852186258]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:32:10.384270488 +0000 UTC m=+217.088275360) (total time: 30.000261182s): * Trace[1852186258]: [30.000261182s] [30.000261182s] END * E0520 04:32:40.384579 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout * I0520 04:32:40.385676 1 trace.go:116] Trace[629458047]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-20 04:32:10.38534512 +0000 UTC m=+217.089350052) (total time: 30.000304473s): * Trace[629458047]: [30.000304473s] [30.000304473s] END * E0520 04:32:40.385694 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial 
tcp 10.96.0.1:443: i/o timeout * * ==> describe nodes <== * Name: minikube * Roles: master * Labels: beta.kubernetes.io/arch=amd64 * beta.kubernetes.io/os=linux * kubernetes.io/arch=amd64 * kubernetes.io/hostname=minikube * kubernetes.io/os=linux * minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278 * minikube.k8s.io/name=minikube * minikube.k8s.io/updated_at=2020_05_19T21_13_44_0700 * minikube.k8s.io/version=v1.10.1 * node-role.kubernetes.io/master= * Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock * node.alpha.kubernetes.io/ttl: 0 * volumes.kubernetes.io/controller-managed-attach-detach: true * CreationTimestamp: Wed, 20 May 2020 04:13:41 +0000 * Taints: * Unschedulable: false * Lease: * HolderIdentity: minikube * AcquireTime: * RenewTime: Wed, 20 May 2020 04:32:52 +0000 * Conditions: * Type Status LastHeartbeatTime LastTransitionTime Reason Message * ---- ------ ----------------- ------------------ ------ ------- * MemoryPressure False Wed, 20 May 2020 04:28:32 +0000 Wed, 20 May 2020 04:13:37 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available * DiskPressure False Wed, 20 May 2020 04:28:32 +0000 Wed, 20 May 2020 04:13:37 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure * PIDPressure False Wed, 20 May 2020 04:28:32 +0000 Wed, 20 May 2020 04:13:37 +0000 KubeletHasSufficientPID kubelet has sufficient PID available * Ready True Wed, 20 May 2020 04:28:32 +0000 Wed, 20 May 2020 04:13:51 +0000 KubeletReady kubelet is posting ready status * Addresses: * InternalIP: 172.17.0.2 * Hostname: minikube * Capacity: * cpu: 16 * ephemeral-storage: 51343840Ki * hugepages-1Gi: 0 * hugepages-2Mi: 0 * memory: 16393384Ki * pods: 110 * Allocatable: * cpu: 16 * ephemeral-storage: 51343840Ki * hugepages-1Gi: 0 * hugepages-2Mi: 0 * memory: 16393384Ki * pods: 110 * System Info: * Machine ID: 2183cb3268a14d0fa839dc2333ee221b * System UUID: c2ec69e9-6d0e-4292-8055-7e2e210db87b * Boot ID: 
d0fc81e9-16a4-41f7-bb18-2ac7b7bf04fd * Kernel Version: 5.6.12-arch1-1 * OS Image: Ubuntu 19.10 * Operating System: linux * Architecture: amd64 * Container Runtime Version: docker://19.3.2 * Kubelet Version: v1.18.2 * Kube-Proxy Version: v1.18.2 * PodCIDR: 10.244.0.0/24 * PodCIDRs: 10.244.0.0/24 * Non-terminated Pods: (8 in total) * Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE * --------- ---- ------------ ---------- --------------- ------------- --- * kube-system coredns-66bff467f8-2tsns 100m (0%) 0 (0%) 70Mi (0%) 170Mi (1%) 18m * kube-system coredns-66bff467f8-btbx2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (1%) 18m * kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m1s * kube-system kube-apiserver-minikube 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3m21s * kube-system kube-controller-manager-minikube 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19m * kube-system kube-proxy-c8zvd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18m * kube-system kube-scheduler-minikube 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19m * kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m47s * Allocated resources: * (Total limits may be over 100 percent, i.e., overcommitted.) * Resource Requests Limits * -------- -------- ------ * cpu 750m (4%) 0 (0%) * memory 140Mi (0%) 340Mi (2%) * ephemeral-storage 0 (0%) 0 (0%) * hugepages-1Gi 0 (0%) 0 (0%) * hugepages-2Mi 0 (0%) 0 (0%) * Events: * Type Reason Age From Message * ---- ------ ---- ---- ------- * Normal Starting 19m kubelet, minikube Starting kubelet. 
* Normal NodeHasSufficientMemory 19m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 19m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 19m kubelet, minikube Node minikube status is now: NodeHasSufficientPID * Normal NodeNotReady 19m kubelet, minikube Node minikube status is now: NodeNotReady * Normal NodeAllocatableEnforced 19m kubelet, minikube Updated Node Allocatable limit across pods * Normal NodeReady 19m kubelet, minikube Node minikube status is now: NodeReady * Warning readOnlySysFS 18m kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000) * Normal Starting 18m kube-proxy, minikube Starting kube-proxy. * Normal Starting 4m31s kubelet, minikube Starting kubelet. * Normal NodeAllocatableEnforced 4m31s kubelet, minikube Updated Node Allocatable limit across pods * Normal NodeHasSufficientMemory 4m30s (x8 over 4m31s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory * Normal NodeHasNoDiskPressure 4m30s (x8 over 4m31s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure * Normal NodeHasSufficientPID 4m30s (x7 over 4m31s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID * Warning readOnlySysFS 4m23s kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000) * Normal Starting 4m23s kube-proxy, minikube Starting kube-proxy. 
* * ==> dmesg <== * [May19 10:14] kauditd_printk_skb: 50 callbacks suppressed * [ +5.029674] kauditd_printk_skb: 506 callbacks suppressed * [ +9.355966] kauditd_printk_skb: 9 callbacks suppressed * [May19 10:15] kauditd_printk_skb: 64 callbacks suppressed * [May19 10:24] kauditd_printk_skb: 52 callbacks suppressed * [May19 10:56] kauditd_printk_skb: 13 callbacks suppressed * [May19 10:57] kauditd_printk_skb: 51 callbacks suppressed * [ +5.042473] kauditd_printk_skb: 508 callbacks suppressed * [ +8.694920] kauditd_printk_skb: 2 callbacks suppressed * [ +25.101936] kauditd_printk_skb: 64 callbacks suppressed * [May19 11:00] kauditd_printk_skb: 52 callbacks suppressed * [May19 11:03] kauditd_printk_skb: 20 callbacks suppressed * [May19 11:23] kauditd_printk_skb: 13 callbacks suppressed * [May19 11:25] kauditd_printk_skb: 51 callbacks suppressed * [ +5.018889] kauditd_printk_skb: 512 callbacks suppressed * [ +8.652260] kauditd_printk_skb: 2 callbacks suppressed * [May19 11:26] kauditd_printk_skb: 64 callbacks suppressed * [ +33.367242] kauditd_printk_skb: 17 callbacks suppressed * [May19 11:29] kauditd_printk_skb: 198 callbacks suppressed * [May19 11:35] kauditd_printk_skb: 52 callbacks suppressed * [May19 11:36] kauditd_printk_skb: 2 callbacks suppressed * [May19 11:40] kauditd_printk_skb: 14 callbacks suppressed * [ +5.156807] kauditd_printk_skb: 6 callbacks suppressed * [ +16.006989] kauditd_printk_skb: 15 callbacks suppressed * [ +5.032856] kauditd_printk_skb: 153 callbacks suppressed * [ +5.214761] kauditd_printk_skb: 419 callbacks suppressed * [ +8.938520] kauditd_printk_skb: 39 callbacks suppressed * [May19 11:41] kauditd_printk_skb: 23 callbacks suppressed * * ==> etcd [df4677503ce2] <== * [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead * 2020-05-20 04:28:26.971113 I | etcdmain: etcd Version: 3.4.3 * 2020-05-20 04:28:26.971160 I | etcdmain: Git SHA: 3cf2f69b5 * 2020-05-20 04:28:26.971165 I | etcdmain: Go Version: go1.12.12 
* 2020-05-20 04:28:26.971177 I | etcdmain: Go OS/Arch: linux/amd64 * 2020-05-20 04:28:26.971183 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16 * 2020-05-20 04:28:26.971238 N | etcdmain: the server is already initialized as member before, starting as etcd member... * [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead * 2020-05-20 04:28:26.971286 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = * 2020-05-20 04:28:26.972035 I | embed: name = minikube * 2020-05-20 04:28:26.972047 I | embed: data dir = /var/lib/minikube/etcd * 2020-05-20 04:28:26.972051 I | embed: member dir = /var/lib/minikube/etcd/member * 2020-05-20 04:28:26.972055 I | embed: heartbeat = 100ms * 2020-05-20 04:28:26.972059 I | embed: election = 1000ms * 2020-05-20 04:28:26.972063 I | embed: snapshot count = 10000 * 2020-05-20 04:28:26.972071 I | embed: advertise client URLs = https://172.17.0.2:2379 * 2020-05-20 04:28:26.972082 I | embed: initial advertise peer URLs = https://172.17.0.2:2380 * 2020-05-20 04:28:26.972089 I | embed: initial cluster = * 2020-05-20 04:28:26.988390 I | etcdserver: restarting member b273bc7741bcb020 in cluster 86482fea2286a1d2 at commit index 2867 * raft2020/05/20 04:28:26 INFO: b273bc7741bcb020 switched to configuration voters=() * raft2020/05/20 04:28:26 INFO: b273bc7741bcb020 became follower at term 2 * raft2020/05/20 04:28:26 INFO: newRaft b273bc7741bcb020 [peers: [], term: 2, commit: 2867, applied: 0, lastindex: 2867, lastterm: 2] * 2020-05-20 04:28:27.069431 I | mvcc: restore compact to 1270 * 2020-05-20 04:28:27.071614 W | auth: simple token is not cryptographically signed * 2020-05-20 04:28:27.073947 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided] * raft2020/05/20 04:28:27 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056) * 2020-05-20 04:28:27.074288 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2 * 2020-05-20 04:28:27.074344 N | etcdserver/membership: set the initial cluster version to 3.4 * 2020-05-20 04:28:27.074372 I | etcdserver/api: enabled capabilities for version 3.4 * 2020-05-20 04:28:27.075735 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = * 2020-05-20 04:28:27.075797 I | embed: listening for peers on 172.17.0.2:2380 * 2020-05-20 04:28:27.075931 I | embed: listening for metrics on http://127.0.0.1:2381 * raft2020/05/20 04:28:28 INFO: b273bc7741bcb020 is starting a new election at term 2 * raft2020/05/20 04:28:28 INFO: b273bc7741bcb020 became candidate at term 3 * raft2020/05/20 04:28:28 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 3 * raft2020/05/20 04:28:28 INFO: b273bc7741bcb020 became leader at term 3 * raft2020/05/20 04:28:28 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 3 * 2020-05-20 04:28:28.690308 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 86482fea2286a1d2 * 2020-05-20 04:28:28.690333 I | embed: ready to serve client requests * 2020-05-20 04:28:28.690392 I | embed: ready to serve client requests * 2020-05-20 04:28:28.692215 I | embed: serving client requests on 127.0.0.1:2379 * 2020-05-20 04:28:28.692505 I | embed: serving client requests on 172.17.0.2:2379 * * ==> kernel <== * 04:32:56 up 2 days, 10:45, 0 users, load average: 1.13, 1.29, 1.13 * Linux minikube 5.6.12-arch1-1 #1 SMP PREEMPT Sun, 10 May 2020 10:43:42 +0000 x86_64 x86_64 x86_64 GNU/Linux * PRETTY_NAME="Ubuntu 19.10" * * ==> 
kube-apiserver [2772f112cd16] <== * I0520 04:28:29.729053 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0520 04:28:29.738124 1 client.go:361] parsed scheme: "endpoint" * I0520 04:28:29.738163 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * W0520 04:28:29.904430 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. * W0520 04:28:29.920303 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. * W0520 04:28:29.934592 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. * W0520 04:28:29.945812 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. * W0520 04:28:29.948247 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. * W0520 04:28:29.958359 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. * W0520 04:28:29.971706 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. * W0520 04:28:29.971729 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. * I0520 04:28:29.978359 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. * I0520 04:28:29.978381 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. 
* I0520 04:28:29.979624 1 client.go:361] parsed scheme: "endpoint" * I0520 04:28:29.979645 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0520 04:28:29.989936 1 client.go:361] parsed scheme: "endpoint" * I0520 04:28:29.989958 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] * I0520 04:28:31.982235 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt * I0520 04:28:31.982237 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt * I0520 04:28:31.982453 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key * I0520 04:28:31.982751 1 secure_serving.go:178] Serving securely on [::]:8443 * I0520 04:28:31.982846 1 autoregister_controller.go:141] Starting autoregister controller * I0520 04:28:31.982866 1 cache.go:32] Waiting for caches to sync for autoregister controller * I0520 04:28:31.982905 1 tlsconfig.go:240] Starting DynamicServingCertificateController * I0520 04:28:31.983023 1 controller.go:81] Starting OpenAPI AggregationController * I0520 04:28:31.983143 1 crdregistration_controller.go:111] Starting crd-autoregister controller * I0520 04:28:31.983173 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister * I0520 04:28:31.983346 1 available_controller.go:387] Starting AvailableConditionController * I0520 04:28:31.983432 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller * I0520 04:28:31.983355 1 apiservice_controller.go:94] Starting APIServiceRegistrationController * I0520 04:28:31.983506 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller * I0520 04:28:31.983755 1 crd_finalizer.go:266] Starting CRDFinalizer * I0520 04:28:31.983777 1 controller.go:86] Starting OpenAPI controller * I0520 04:28:31.983792 1 
customresource_discovery_controller.go:209] Starting DiscoveryController * I0520 04:28:31.983807 1 naming_controller.go:291] Starting NamingConditionController * I0520 04:28:31.983818 1 establishing_controller.go:76] Starting EstablishingController * I0520 04:28:31.983861 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController * I0520 04:28:31.983891 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController * I0520 04:28:31.984375 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller * I0520 04:28:31.984397 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller * I0520 04:28:31.984464 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt * I0520 04:28:31.984523 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt * E0520 04:28:31.985131 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: * I0520 04:28:32.073031 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io * I0520 04:28:32.082987 1 cache.go:39] Caches are synced for autoregister controller * I0520 04:28:32.083395 1 shared_informer.go:230] Caches are synced for crd-autoregister * I0520 04:28:32.083501 1 cache.go:39] Caches are synced for AvailableConditionController controller * I0520 04:28:32.083612 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller * I0520 04:28:32.168469 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller * I0520 04:28:32.982315 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). 
* I0520 04:28:32.982520 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). * I0520 04:28:32.987343 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. * W0520 04:28:33.385907 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2] * I0520 04:28:33.386930 1 controller.go:606] quota admission added evaluator for: endpoints * I0520 04:28:33.391898 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io * I0520 04:28:33.683477 1 controller.go:606] quota admission added evaluator for: serviceaccounts * I0520 04:28:33.698431 1 controller.go:606] quota admission added evaluator for: deployments.apps * I0520 04:28:33.784862 1 controller.go:606] quota admission added evaluator for: daemonsets.apps * I0520 04:28:33.797383 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io * I0520 04:28:33.802634 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io * * ==> kube-controller-manager [19bb30358052] <== * I0520 04:14:37.921749 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"a5fc46dd-dc4b-4116-8c3f-12abb682fedb", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * E0520 04:14:37.968315 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-696dbcc666" failed with pods "kubernetes-dashboard-696dbcc666-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * 
I0520 04:14:37.968616 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-696dbcc666", UID:"81fd008a-e7dd-46f4-9ff9-9e910415e6ee", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-696dbcc666-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * I0520 04:14:37.976965 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"a5fc46dd-dc4b-4116-8c3f-12abb682fedb", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * E0520 04:14:37.977003 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * E0520 04:14:37.979041 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-696dbcc666" failed with pods "kubernetes-dashboard-696dbcc666-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * I0520 04:14:37.979037 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-696dbcc666", UID:"81fd008a-e7dd-46f4-9ff9-9e910415e6ee", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-696dbcc666-" is forbidden: error looking up service 
account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * E0520 04:14:37.981922 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-84bfdf55ff" failed with pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * I0520 04:14:37.981915 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"a5fc46dd-dc4b-4116-8c3f-12abb682fedb", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-84bfdf55ff-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * I0520 04:14:37.983197 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-696dbcc666", UID:"81fd008a-e7dd-46f4-9ff9-9e910415e6ee", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-696dbcc666-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * E0520 04:14:37.983228 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-696dbcc666" failed with pods "kubernetes-dashboard-696dbcc666-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found * I0520 04:14:38.000009 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-84bfdf55ff", UID:"a5fc46dd-dc4b-4116-8c3f-12abb682fedb", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 
'SuccessfulCreate' Created pod: dashboard-metrics-scraper-84bfdf55ff-mcx54 * I0520 04:14:38.007374 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-696dbcc666", UID:"81fd008a-e7dd-46f4-9ff9-9e910415e6ee", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-696dbcc666-r55l4 * I0520 04:23:18.675906 1 namespace_controller.go:185] Namespace has been deleted kubernetes-dashboard * E0520 04:27:52.280956 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2028&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.280993 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=1868&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.280997 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=1964&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281044 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281053 1 reflector.go:382] 
k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CertificateSigningRequest: Get https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=384&timeout=6m37s&timeoutSeconds=397&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281070 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1938&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281092 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.IngressClass: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1beta1/ingressclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281094 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://control-plane.minikube.internal:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281128 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://control-plane.minikube.internal:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=2067&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * E0520 04:27:52.281176 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://control-plane.minikube.internal:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=5m1s&timeoutSeconds=301&watch=true: dial tcp 172.17.0.3:8443: connect: 
connection refused
* E0520 04:27:52.281201 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=2066&timeout=7m46s&timeoutSeconds=466&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281217 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=1891&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281237 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://control-plane.minikube.internal:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m3s&timeoutSeconds=423&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281264 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1890&timeout=5m59s&timeoutSeconds=359&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281281 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=7m46s&timeoutSeconds=466&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281286 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=1874&timeout=5m26s&timeoutSeconds=326&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281309 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://control-plane.minikube.internal:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281318 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=43&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281332 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281362 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=2555&timeout=7m0s&timeoutSeconds=420&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281385 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281389 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m26s&timeoutSeconds=386&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281409 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281427 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://control-plane.minikube.internal:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=370&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281452 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281467 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1883&timeout=9m48s&timeoutSeconds=588&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281468 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=8m58s&timeoutSeconds=538&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281484 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=5m27s&timeoutSeconds=327&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281491 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://control-plane.minikube.internal:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=2068&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281500 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m32s&timeoutSeconds=392&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281527 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=2556&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281527 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=9&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281543 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=2594&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281556 1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://control-plane.minikube.internal:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=1&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281547 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281576 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=1873&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281582 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://control-plane.minikube.internal:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281593 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://control-plane.minikube.internal:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=2051&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281702 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281707 1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://control-plane.minikube.internal:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=40&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281719 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=2595&timeout=5m39s&timeoutSeconds=339&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281722 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281726 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281786 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=6m7s&timeoutSeconds=367&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281865 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=430&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.282109 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=1870&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
*
* ==> kube-controller-manager [3ee154244bf1] <==
* I0520 04:28:51.132535 1 controllermanager.go:533] Started "horizontalpodautoscaling"
* I0520 04:28:51.132620 1 horizontal.go:169] Starting HPA controller
* I0520 04:28:51.132642 1 shared_informer.go:223] Waiting for caches to sync for HPA
* I0520 04:28:51.431904 1 controllermanager.go:533] Started "disruption"
* I0520 04:28:51.431982 1 disruption.go:331] Starting disruption controller
* I0520 04:28:51.431997 1 shared_informer.go:223] Waiting for caches to sync for disruption
* I0520 04:28:51.581700 1 controllermanager.go:533] Started "bootstrapsigner"
* W0520 04:28:51.581729 1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
* I0520 04:28:51.581783 1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
* I0520 04:28:51.732939 1 controllermanager.go:533] Started "deployment"
* I0520 04:28:51.732980 1 deployment_controller.go:153] Starting deployment controller
* I0520 04:28:51.732994 1 shared_informer.go:223] Waiting for caches to sync for deployment
* I0520 04:28:51.882087 1 node_ipam_controller.go:94] Sending events to api server.
* I0520 04:29:01.885568 1 range_allocator.go:82] Sending events to api server.
* I0520 04:29:01.885673 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
* I0520 04:29:01.885714 1 controllermanager.go:533] Started "nodeipam"
* I0520 04:29:01.886236 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0520 04:29:01.888105 1 node_ipam_controller.go:162] Starting ipam controller
* I0520 04:29:01.888133 1 shared_informer.go:223] Waiting for caches to sync for node
* I0520 04:29:01.889706 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0520 04:29:01.890395 1 shared_informer.go:230] Caches are synced for namespace
* W0520 04:29:01.892666 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0520 04:29:01.896674 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
* I0520 04:29:01.932348 1 shared_informer.go:230] Caches are synced for TTL
* I0520 04:29:01.968007 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
* I0520 04:29:01.981939 1 shared_informer.go:230] Caches are synced for bootstrap_signer
* I0520 04:29:01.982812 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
* I0520 04:29:01.982960 1 shared_informer.go:230] Caches are synced for service account
* I0520 04:29:01.988270 1 shared_informer.go:230] Caches are synced for node
* I0520 04:29:01.988304 1 range_allocator.go:172] Starting range CIDR allocator
* I0520 04:29:01.988311 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0520 04:29:01.988320 1 shared_informer.go:230] Caches are synced for cidrallocator
* I0520 04:29:02.003608 1 shared_informer.go:230] Caches are synced for PV protection
* I0520 04:29:02.132771 1 shared_informer.go:230] Caches are synced for expand
* I0520 04:29:02.489960 1 shared_informer.go:230] Caches are synced for garbage collector
* I0520 04:29:02.496421 1 shared_informer.go:230] Caches are synced for ReplicaSet
* I0520 04:29:02.510434 1 shared_informer.go:230] Caches are synced for stateful set
* I0520 04:29:02.532130 1 shared_informer.go:230] Caches are synced for disruption
* I0520 04:29:02.532150 1 disruption.go:339] Sending events to api server.
* I0520 04:29:02.532311 1 shared_informer.go:230] Caches are synced for taint
* I0520 04:29:02.532420 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* I0520 04:29:02.532445 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0520 04:29:02.532503 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"1dcd3f05-84a6-4c95-bdaf-bc74b8e7c46c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* W0520 04:29:02.532550 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0520 04:29:02.532614 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
* I0520 04:29:02.532618 1 shared_informer.go:230] Caches are synced for GC
* I0520 04:29:02.532683 1 shared_informer.go:230] Caches are synced for endpoint_slice
* I0520 04:29:02.532710 1 shared_informer.go:230] Caches are synced for attach detach
* I0520 04:29:02.532811 1 shared_informer.go:230] Caches are synced for HPA
* I0520 04:29:02.533122 1 shared_informer.go:230] Caches are synced for deployment
* I0520 04:29:02.546317 1 shared_informer.go:230] Caches are synced for endpoint
* I0520 04:29:02.569420 1 shared_informer.go:230] Caches are synced for job
* I0520 04:29:02.576697 1 shared_informer.go:230] Caches are synced for persistent volume
* I0520 04:29:02.579805 1 shared_informer.go:230] Caches are synced for resource quota
* I0520 04:29:02.582890 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0520 04:29:02.583063 1 shared_informer.go:230] Caches are synced for PVC protection
* I0520 04:29:02.586411 1 shared_informer.go:230] Caches are synced for resource quota
* I0520 04:29:02.587967 1 shared_informer.go:230] Caches are synced for garbage collector
* I0520 04:29:02.587986 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0520 04:29:02.588753 1 shared_informer.go:230] Caches are synced for daemon sets
*
* ==> kube-proxy [8409b7473f0c] <==
* E0520 04:23:03.506755 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:23:03.506773 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:23:33.401954 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:23:33.498619 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:23:33.498649 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:24:03.402292 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:24:03.504992 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:24:03.505021 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:24:33.402226 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:24:33.511279 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:24:33.511311 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:25:03.402047 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:25:03.517190 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:25:03.517225 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:25:33.401727 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:25:33.522402 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:25:33.522429 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:26:03.400291 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:26:03.529606 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:26:03.529636 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:26:33.402260 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:26:33.535049 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:26:33.535083 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:27:03.401672 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:27:03.538958 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:27:03.538976 1 proxier.go:825] Sync failed; retrying in 30s
* W0520 04:27:33.402645 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:27:33.545132 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:27:33.545165 1 proxier.go:825] Sync failed; retrying in 30s
* E0520 04:27:52.281191 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2594&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
* E0520 04:27:52.281579 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1883&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused
*
* ==> kube-proxy [bbf03c75c53e] <==
* E0520 04:30:28.787323 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:30:33.787522 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* W0520 04:30:33.788469 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:30:33.908037 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:30:33.908077 1 proxier.go:825] Sync failed; retrying in 30s
* E0520 04:30:38.787773 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:30:43.788099 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:30:48.788285 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:30:53.788523 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:30:58.788764 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:03.789019 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* W0520 04:31:03.789737 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:31:03.913814 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:31:03.913839 1 proxier.go:825] Sync failed; retrying in 30s
* E0520 04:31:08.789267 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:13.789514 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:18.789744 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:23.789954 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:28.790111 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* W0520 04:31:33.788994 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:31:33.790278 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:33.919671 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:31:33.919708 1 proxier.go:825] Sync failed; retrying in 30s
* E0520 04:31:38.790533 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:43.790800 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:48.791086 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:53.791297 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:31:58.791486 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* W0520 04:32:03.789710 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:32:03.791715 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:03.926573 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:32:03.926615 1 proxier.go:825] Sync failed; retrying in 30s
* E0520 04:32:08.791907 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:13.792137 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:18.792339 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:23.792544 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:28.792855 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* W0520 04:32:33.790003 1 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
* Perhaps iptables or your kernel needs to be upgraded.
* E0520 04:32:33.793092 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:33.932893 1 proxier.go:841] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error checking rule: exit status 2: iptables v1.8.3 (legacy): Couldn't load match `comment':No such file or directory
*
* Try `iptables -h' or 'iptables --help' for more information.
* I0520 04:32:33.932927 1 proxier.go:825] Sync failed; retrying in 30s
* E0520 04:32:38.793312 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:43.793535 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:48.793773 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
* E0520 04:32:53.793983 1 server.go:621] starting metrics server failed: listen tcp 172.17.0.3:10249: bind: cannot assign requested address
*
* ==> kube-scheduler [24f29814e3e8] <==
* I0520 04:28:27.085381 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0520 04:28:27.085441 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0520 04:28:27.991339 1 serving.go:313] Generated self-signed cert in-memory
* I0520 04:28:32.275557 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0520 04:28:32.275588 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0520 04:28:32.277401 1 authorization.go:47] Authorization is disabled
* W0520 04:28:32.277429 1 authentication.go:40] Authentication is disabled
* I0520 04:28:32.277449 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0520 04:28:32.278894 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
* I0520 04:28:32.278905 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0520 04:28:32.278926 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
* I0520 04:28:32.278927 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0520 04:28:32.279394 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0520 04:28:32.279806 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0520 04:28:32.379126 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
* I0520 04:28:32.379126 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0520 04:28:32.379579 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
* I0520 04:28:48.208246 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
*
* ==> kube-scheduler [53cf51b59b66] <==
* I0520 04:13:36.576298 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0520 04:13:36.576360 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0520 04:13:37.125398 1 serving.go:313] Generated self-signed cert in-memory
* W0520 04:13:41.278871 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0520 04:13:41.278903 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0520 04:13:41.278916 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0520 04:13:41.278924 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0520 04:13:41.387901 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0520 04:13:41.387918 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0520 04:13:41.470626 1 authorization.go:47] Authorization is disabled
* W0520 04:13:41.470646 1 authentication.go:40] Authentication is disabled
* I0520 04:13:41.470659 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0520 04:13:41.475807 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0520 04:13:41.475826 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0520 04:13:41.476865 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0520 04:13:41.477454 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0520 04:13:41.479251 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0520 04:13:41.568327 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0520 04:13:41.568330 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0520 04:13:41.568665
1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0520 04:13:41.568327 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0520 04:13:41.568924 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0520 04:13:41.568993 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0520 04:13:41.569687 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0520 04:13:41.570028 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0520 04:13:41.570241 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope * E0520 04:13:41.570327 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource 
"persistentvolumes" in API group "" at the cluster scope * E0520 04:13:41.572707 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope * E0520 04:13:41.573181 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * E0520 04:13:41.574590 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0520 04:13:41.576127 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope * E0520 04:13:41.577009 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0520 04:13:41.578289 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0520 04:13:41.581112 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope * I0520 04:13:43.676015 1 shared_informer.go:230] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0520 04:13:44.577090 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler... * I0520 04:13:44.582188 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler * E0520 04:27:52.282417 1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=1870&timeout=9m13s&timeoutSeconds=553&watch=true: dial tcp 172.17.0.3:8443: connect: connection refused * * ==> kubelet <== * -- Logs begin at Wed 2020-05-20 04:28:14 UTC, end at Wed 2020-05-20 04:32:56 UTC. -- * May 20 04:28:34 minikube kubelet[531]: E0520 04:28:34.580986 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:34 minikube kubelet[531]: E0520 04:28:34.681134 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:34 minikube kubelet[531]: E0520 04:28:34.781274 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:34 minikube kubelet[531]: E0520 04:28:34.881400 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:34 minikube kubelet[531]: E0520 04:28:34.981531 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:35 minikube kubelet[531]: E0520 04:28:35.081701 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:35 minikube kubelet[531]: E0520 04:28:35.181828 531 kubelet.go:2267] node "minikube" not found * May 20 04:28:35 minikube kubelet[531]: E0520 04:28:35.930459 531 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" * May 20 04:28:35 minikube kubelet[531]: E0520 04:28:35.930516 531 helpers.go:680] eviction manager: 
failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics * May 20 04:28:45 minikube kubelet[531]: E0520 04:28:45.942197 531 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" * May 20 04:28:45 minikube kubelet[531]: E0520 04:28:45.942250 531 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics * May 20 04:28:55 minikube kubelet[531]: E0520 04:28:55.957795 531 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" * May 20 04:28:55 minikube kubelet[531]: E0520 04:28:55.957843 531 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics * May 20 04:29:04 minikube kubelet[531]: I0520 04:29:04.747119 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0b37c264da9c852824754bdb4603d21366de03c6d1a5c47022d80fbde6e0e2fd * May 20 04:29:04 minikube kubelet[531]: I0520 04:29:04.747566 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e43f5d943ed9c2df91e7be80ada9edf25576944c4f4677ed184a985fdcbab666 * May 20 04:29:04 minikube kubelet[531]: E0520 04:29:04.747958 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:29:05 minikube kubelet[531]: E0520 04:29:05.970117 531 
summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" * May 20 04:29:05 minikube kubelet[531]: E0520 04:29:05.970158 531 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics * May 20 04:29:15 minikube kubelet[531]: E0520 04:29:15.981597 531 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" * May 20 04:29:15 minikube kubelet[531]: E0520 04:29:15.981646 531 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics * May 20 04:29:17 minikube kubelet[531]: I0520 04:29:17.729038 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e43f5d943ed9c2df91e7be80ada9edf25576944c4f4677ed184a985fdcbab666 * May 20 04:29:25 minikube kubelet[531]: W0520 04:29:25.737101 531 iptables.go:550] Could not set up iptables canary mangle/KUBE-KUBELET-CANARY: error creating chain "KUBE-KUBELET-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?) * May 20 04:29:25 minikube kubelet[531]: Perhaps iptables or your kernel needs to be upgraded. 
* May 20 04:29:25 minikube kubelet[531]: I0520 04:29:25.779895 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1e718b360e55547754973355392835b9eb2072ca9afc191e9846fb1380846341 * May 20 04:29:25 minikube kubelet[531]: I0520 04:29:25.803054 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6a5feeeee82342a13d253a3a5e1d23601830e62a5ba1a222bd69fc4ee6fc3f93 * May 20 04:29:48 minikube kubelet[531]: I0520 04:29:48.194455 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e43f5d943ed9c2df91e7be80ada9edf25576944c4f4677ed184a985fdcbab666 * May 20 04:29:48 minikube kubelet[531]: I0520 04:29:48.194838 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269 * May 20 04:29:48 minikube kubelet[531]: E0520 04:29:48.195193 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:30:02 minikube kubelet[531]: I0520 04:30:02.729045 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269 * May 20 04:30:02 minikube kubelet[531]: E0520 04:30:02.729442 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:30:13 minikube kubelet[531]: I0520 04:30:13.729233 531 
topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269 * May 20 04:30:25 minikube kubelet[531]: W0520 04:30:25.736836 531 iptables.go:550] Could not set up iptables canary mangle/KUBE-KUBELET-CANARY: error creating chain "KUBE-KUBELET-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?) * May 20 04:30:25 minikube kubelet[531]: Perhaps iptables or your kernel needs to be upgraded. * May 20 04:30:29 minikube kubelet[531]: E0520 04:30:29.083822 531 kubelet.go:1681] Unable to attach or mount volumes for pod "etcd-minikube_kube-system(a310918f011bf57663b061814bf39847)": unmounted volumes=[etcd-data etcd-certs], unattached volumes=[etcd-data etcd-certs]: timed out waiting for the condition; skipping pod * May 20 04:30:29 minikube kubelet[531]: E0520 04:30:29.083889 531 pod_workers.go:191] Error syncing pod a310918f011bf57663b061814bf39847 ("etcd-minikube_kube-system(a310918f011bf57663b061814bf39847)"), skipping: unmounted volumes=[etcd-data etcd-certs], unattached volumes=[etcd-data etcd-certs]: timed out waiting for the condition * May 20 04:30:29 minikube kubelet[531]: E0520 04:30:29.087300 531 kubelet.go:1681] Unable to attach or mount volumes for pod "kube-apiserver-minikube_kube-system(5a07cf55283d733e973795315848df03)": unmounted volumes=[ca-certs etc-ca-certificates k8s-certs usr-local-share-ca-certificates usr-share-ca-certificates], unattached volumes=[ca-certs etc-ca-certificates k8s-certs usr-local-share-ca-certificates usr-share-ca-certificates]: timed out waiting for the condition; skipping pod * May 20 04:30:29 minikube kubelet[531]: E0520 04:30:29.087339 531 pod_workers.go:191] Error syncing pod 5a07cf55283d733e973795315848df03 ("kube-apiserver-minikube_kube-system(5a07cf55283d733e973795315848df03)"), skipping: unmounted volumes=[ca-certs etc-ca-certificates k8s-certs 
usr-local-share-ca-certificates usr-share-ca-certificates], unattached volumes=[ca-certs etc-ca-certificates k8s-certs usr-local-share-ca-certificates usr-share-ca-certificates]: timed out waiting for the condition * May 20 04:30:44 minikube kubelet[531]: I0520 04:30:44.714098 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ee329db8593a6ae952c9f76f6b10a396c0b1c7f064872c261e049513670a7269 * May 20 04:30:44 minikube kubelet[531]: I0520 04:30:44.714591 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 * May 20 04:30:44 minikube kubelet[531]: E0520 04:30:44.715019 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:30:55 minikube kubelet[531]: I0520 04:30:55.728811 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 * May 20 04:30:55 minikube kubelet[531]: E0520 04:30:55.729042 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:31:10 minikube kubelet[531]: I0520 04:31:10.728904 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 * May 20 04:31:10 minikube kubelet[531]: E0520 04:31:10.729232 531 
pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:31:24 minikube kubelet[531]: I0520 04:31:24.729022 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 * May 20 04:31:25 minikube kubelet[531]: W0520 04:31:25.737408 531 iptables.go:550] Could not set up iptables canary mangle/KUBE-KUBELET-CANARY: error creating chain "KUBE-KUBELET-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?) * May 20 04:31:25 minikube kubelet[531]: Perhaps iptables or your kernel needs to be upgraded. * May 20 04:31:55 minikube kubelet[531]: I0520 04:31:55.377891 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 98bd8d9071202a0434cddaa9a09a8a5c9cadb963bef15830ec4911a579f84d00 * May 20 04:31:55 minikube kubelet[531]: I0520 04:31:55.378294 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb * May 20 04:31:55 minikube kubelet[531]: E0520 04:31:55.378650 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:32:08 minikube kubelet[531]: I0520 04:32:08.728952 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 
7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb * May 20 04:32:08 minikube kubelet[531]: E0520 04:32:08.729321 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:32:23 minikube kubelet[531]: I0520 04:32:23.729131 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb * May 20 04:32:23 minikube kubelet[531]: E0520 04:32:23.729497 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:32:25 minikube kubelet[531]: W0520 04:32:25.737249 531 iptables.go:550] Could not set up iptables canary mangle/KUBE-KUBELET-CANARY: error creating chain "KUBE-KUBELET-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?) * May 20 04:32:25 minikube kubelet[531]: Perhaps iptables or your kernel needs to be upgraded. 
* May 20 04:32:35 minikube kubelet[531]: I0520 04:32:35.729074 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb * May 20 04:32:35 minikube kubelet[531]: E0520 04:32:35.729314 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * May 20 04:32:47 minikube kubelet[531]: I0520 04:32:47.729101 531 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7d010a3bfea3f21600fbb1ed5be21ee3a27a55bee941cecaac01c26900b64efb * May 20 04:32:47 minikube kubelet[531]: E0520 04:32:47.729492 531 pod_workers.go:191] Error syncing pod 1badcdfe-ac4e-4368-b40e-dabdd4b22615 ("storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1badcdfe-ac4e-4368-b40e-dabdd4b22615)" * * ==> storage-provisioner [7d010a3bfea3] <== * F0520 04:31:55.008055 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
afbjorklund commented 4 years ago

Seems like there is some kind of problem with iptables on the host?

arroadie commented 4 years ago

Hey Anders, thanks for looking at the issue. As a side note, I installed VirtualBox and was able to run minikube with that driver (since the networking is then handled by VirtualBox rather than the host).

Do you have any hints on how I can debug this issue? Thanks
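For context, these are the kinds of commands that can narrow this down (a generic sketch — the pod name matches the logs above, but output will vary per machine):

```shell
# Confirm the crash loop and pull the failing container's own logs
kubectl -n kube-system get pods
kubectl -n kube-system logs storage-provisioner

# Inspect forwarding rules inside the minikube node container
minikube ssh -- sudo iptables -L FORWARD -n
```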

afbjorklund commented 4 years ago

I think we need to copy some of the setup from the "none" driver.

Like #7905

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#letting-iptables-see-bridged-traffic

The question is how Docker can work properly on the host without it.
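The linked prerequisites boil down to loading the `br_netfilter` module and enabling the bridge-nf-call sysctls, roughly as follows (taken from the linked docs; run as root on the host):

```shell
# Let iptables see bridged traffic, per the kubeadm install prerequisites.
modprobe br_netfilter

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
```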

arroadie commented 4 years ago

Oh, that makes complete sense. Minikube uses the bridged network from Docker, right? In that case the host needs to be set up to expect traffic on that network. I'll investigate and debug, and once I get it working I'll try to write up the steps to configure it properly. Thanks again!

arroadie commented 4 years ago

Ok, I reconfigured firewalld and it worked as expected. I'll look into improving the docs for that case. Thanks for the help @afbjorklund
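For anyone landing here later: the exact firewalld change isn't spelled out above, but one common fix when firewalld drops Docker-bridge traffic is to put the bridge interface in the trusted zone (an assumption about what "reconfigured firewalld" means here, not a confirmed step):

```shell
# Assumption: firewalld is dropping traffic on the docker bridge,
# which would explain the i/o timeout reaching 10.96.0.1:443.
# Adjust the interface name if your bridge is not docker0.
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
```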