kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

TestVersionUpgrade: bind: address already in use #7171

Closed · medyagh closed this issue 4 years ago

medyagh commented 4 years ago

The test upgrades an existing profile from Kubernetes v1.11.10 to v1.18.0-rc.1; on the second start every control-plane component fails with "bind: address already in use", as seen in https://storage.googleapis.com/minikube-builds/logs/7165/ba4e134/Docker_Linux.html#fail_TestVersionUpgrade. A rough manual equivalent of the failing sequence is sketched below.
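What TestVersionUpgrade does, roughly (a sketch only: the profile name below is illustrative, and the real test also passes --iso-url and runs the first start with the previously released minikube binary):

# start a cluster on the old Kubernetes version
minikube start -p vupgrade --memory=2200 --kubernetes-version=v1.11.10 --driver=docker
# stop it
minikube stop -p vupgrade
# start the same profile again on the new Kubernetes version with the freshly built binary
out/minikube-linux-amd64 start -p vupgrade --kubernetes-version=v1.18.0-rc.1 --alsologtostderr -v=1 --driver=docker

Full log of the failing run: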

=== RUN   TestVersionUpgrade
=== PAUSE TestVersionUpgrade
=== CONT  TestVersionUpgrade
--- FAIL: TestVersionUpgrade (322.21s)
version_upgrade_test.go:74: (dbg) Run:  /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --memory=2200 --iso-url=https://storage.googleapis.com/minikube/iso/integration-test.iso --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:74: (dbg) Done: /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --memory=2200 --iso-url=https://storage.googleapis.com/minikube/iso/integration-test.iso --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker : (2m20.700673861s)
version_upgrade_test.go:83: (dbg) Run:  /tmp/minikube-release.134122834.exe stop -p vupgrade-20200323T124430.430384955-31589
version_upgrade_test.go:83: (dbg) Done: /tmp/minikube-release.134122834.exe stop -p vupgrade-20200323T124430.430384955-31589: (12.272399654s)
version_upgrade_test.go:88: (dbg) Run:  /tmp/minikube-release.134122834.exe -p vupgrade-20200323T124430.430384955-31589 status --format={{.Host}}
version_upgrade_test.go:88: (dbg) Non-zero exit: /tmp/minikube-release.134122834.exe -p vupgrade-20200323T124430.430384955-31589 status --format={{.Host}}: exit status 7 (373.022401ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:90: status error: exit status 7 (may be ok)
version_upgrade_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.18.0-rc.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.18.0-rc.1 --alsologtostderr -v=1 --driver=docker : exit status 70 (1m33.622651638s)
-- stdout --
* [vupgrade-20200323T124430.430384955-31589] minikube v1.9.0-beta.2 on Debian 9.12
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube
- MINIKUBE_LOCATION=7165
* Using the docker driver based on existing profile
* Reconfiguring existing host ...
* Starting existing docker container for "vupgrade-20200323T124430.430384955-31589" ...
* Preparing Kubernetes v1.18.0-rc.1 on Docker 19.03.2 ...
- kubeadm.pod-network-cidr=10.244.0.0/16
* Problems detected in kubelet:
- Mar 23 19:48:13 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:13.822822    1175 pod_workers.go:191] Error syncing pod 880e7571d217187114090f82749b08a9 ("kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"
- Mar 23 19:48:15 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:15.720574    1175 pod_workers.go:191] Error syncing pod 880e7571d217187114090f82749b08a9 ("kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"
- Mar 23 19:48:16 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:16.553076    1175 pod_workers.go:191] Error syncing pod 08bc5143665e3d99033b8cf7b7179e9c ("kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"
* Problems detected in kube-apiserver [b9b058b22d02]:
- Error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
* Problems detected in kube-controller-manager [4b01d60cbde5]:
- failed to create listener: failed to listen on 0.0.0.0:10252: listen tcp 0.0.0.0:10252: bind: address already in use
* Problems detected in kube-scheduler [1fcb859c177a]:
- failed to create listener: failed to listen on 0.0.0.0:10251: listen tcp 0.0.0.0:10251: bind: address already in use
* Problems detected in etcd [1b67cc7bedb6]:
- 2020-03-23 19:48:15.585390 C | etcdmain: listen tcp 127.0.0.1:2379: bind: address already in use
-- /stdout --
** stderr ** 
I0323 12:47:04.334766    9936 notify.go:125] Checking for updates...
I0323 12:47:04.607774    9936 start.go:254] hostinfo: {"hostname":"podman-integration-slave9","uptime":16182,"bootTime":1584976642,"procs":307,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
I0323 12:47:04.608515    9936 start.go:264] virtualization: kvm host
I0323 12:47:04.609290    9936 driver.go:226] Setting default libvirt URI to qemu:///system
I0323 12:47:04.766561    9936 start.go:302] selected driver: docker
I0323 12:47:04.766576    9936 start.go:578] validating driver "docker" against &{Name:vupgrade-20200323T124430.430384955-31589 KeepContext:false EmbedCerts:false MinikubeISO: Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.11.10 ClusterName:vupgrade-20200323T124430.430384955-31589 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.11.10 ControlPlane:true Worker:true}] Addons:map[storage-provisioner:true]}
I0323 12:47:04.766681    9936 start.go:584] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0323 12:47:04.766701    9936 start.go:1018] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0323 12:47:04.969524    9936 start.go:923] Using suggested 3700MB memory alloc based on sys=15043MB, container=15043MB
I0323 12:47:04.969706    9936 cache.go:103] Beginning downloading kic artifacts
I0323 12:47:04.969743    9936 preload.go:86] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-rc.1-docker-overlay2-amd64.tar.lz4
I0323 12:47:04.969753    9936 cache.go:46] Caching tarball of preloaded images
I0323 12:47:04.969772    9936 preload.go:115] Found /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0323 12:47:04.969782    9936 cache.go:49] Finished downloading the preloaded tar for v1.18.0-rc.1 on docker
I0323 12:47:04.969887    9936 profile.go:138] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/config.json ...
I0323 12:47:04.970170    9936 cache.go:105] Downloading gcr.io/k8s-minikube/kicbase:v0.0.7@sha256:a6f288de0e5863cdeab711fa6bafa38ee7d8d285ca14216ecf84fcfb07c7d176 to local daemon
I0323 12:47:04.970191    9936 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.7@sha256:a6f288de0e5863cdeab711fa6bafa38ee7d8d285ca14216ecf84fcfb07c7d176 to local daemon
I0323 12:47:05.481806    9936 image.go:90] Found gcr.io/k8s-minikube/kicbase:v0.0.7@sha256:a6f288de0e5863cdeab711fa6bafa38ee7d8d285ca14216ecf84fcfb07c7d176 in local docker daemon, skipping pull
I0323 12:47:05.481846    9936 cache.go:116] Successfully downloaded all kic artifacts
I0323 12:47:05.481886    9936 start.go:260] acquiring machines lock for vupgrade-20200323T124430.430384955-31589: {Name:mkf3edeb1d1ce9e252610362839e498ce152cddb Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0323 12:47:05.482095    9936 start.go:264] acquired machines lock for "vupgrade-20200323T124430.430384955-31589" in 171.672µs
I0323 12:47:05.482127    9936 start.go:90] Skipping create...Using existing machine configuration
I0323 12:47:05.482171    9936 fix.go:60] fixHost starting: m01
I0323 12:47:06.335972    9936 machine.go:86] provisioning docker machine ...
I0323 12:47:06.336015    9936 ubuntu.go:163] provisioning hostname "vupgrade-20200323T124430.430384955-31589"
I0323 12:47:06.417699    9936 main.go:110] libmachine: Using SSH client type: native
I0323 12:47:06.417935    9936 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
I0323 12:47:06.417957    9936 main.go:110] libmachine: About to run SSH command:
sudo hostname vupgrade-20200323T124430.430384955-31589 && echo "vupgrade-20200323T124430.430384955-31589" | sudo tee /etc/hostname
I0323 12:47:06.418786    9936 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38746->127.0.0.1:32941: read: connection reset by peer
I0323 12:47:09.600253    9936 main.go:110] libmachine: SSH cmd err, output: <nil>: vupgrade-20200323T124430.430384955-31589
I0323 12:47:09.689954    9936 main.go:110] libmachine: Using SSH client type: native
I0323 12:47:09.690201    9936 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
I0323 12:47:09.690249    9936 main.go:110] libmachine: About to run SSH command:
        if ! grep -xq '.*\svupgrade-20200323T124430.430384955-31589' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 vupgrade-20200323T124430.430384955-31589/g' /etc/hosts;
            else 
                echo '127.0.1.1 vupgrade-20200323T124430.430384955-31589' | sudo tee -a /etc/hosts; 
            fi
        fi
I0323 12:47:09.859904    9936 main.go:110] libmachine: SSH cmd err, output: <nil>: 127.0.1.1 vupgrade-20200323T124430.430384955-31589
I0323 12:47:09.859942    9936 ubuntu.go:169] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube}
I0323 12:47:09.859978    9936 ubuntu.go:171] setting up certificates
I0323 12:47:09.859991    9936 provision.go:84] configureAuth start
I0323 12:47:09.947249    9936 provision.go:133] copyHostCerts
I0323 12:47:09.947701    9936 provision.go:107] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/certs/ca-key.pem org=jenkins.vupgrade-20200323T124430.430384955-31589 san=[172.17.0.5 localhost 127.0.0.1]
I0323 12:47:10.160139    9936 provision.go:161] copyRemoteCerts
I0323 12:47:10.294443    9936 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0323 12:47:10.361879    9936 ssh_runner.go:244] found /etc/docker/ca.pem: 1038 bytes, modified at 2020-03-23 19:27:11.925756964 +0000 +0000
I0323 12:47:10.361926    9936 ssh_runner.go:158] Skipping copying /etc/docker/ca.pem as it already exists
I0323 12:47:10.366285    9936 ssh_runner.go:244] found /etc/docker/server.pem: 1164 bytes, modified at 2020-03-23 19:44:49.575354708 +0000 +0000
I0323 12:47:10.366575    9936 ssh_runner.go:174] Transferring 1164 bytes to /etc/docker/server.pem
I0323 12:47:10.367504    9936 ssh_runner.go:193] server.pem: copied 1164 bytes
I0323 12:47:10.421289    9936 ssh_runner.go:244] found /etc/docker/server-key.pem: 1679 bytes, modified at 2020-03-23 19:44:49.575354708 +0000 +0000
I0323 12:47:10.421631    9936 ssh_runner.go:174] Transferring 1679 bytes to /etc/docker/server-key.pem
I0323 12:47:10.422605    9936 ssh_runner.go:193] server-key.pem: copied 1679 bytes
I0323 12:47:10.458917    9936 provision.go:87] configureAuth took 598.897036ms
I0323 12:47:10.458966    9936 ubuntu.go:187] setting minikube options for container-runtime
I0323 12:47:10.547427    9936 main.go:110] libmachine: Using SSH client type: native
I0323 12:47:10.547712    9936 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
I0323 12:47:10.547736    9936 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0323 12:47:10.692487    9936 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay
I0323 12:47:10.692523    9936 ubuntu.go:68] root file system type: overlay
I0323 12:47:10.692733    9936 provision.go:297] Updating docker unit: /lib/systemd/system/docker.service ...
I0323 12:47:10.755322    9936 main.go:110] libmachine: Using SSH client type: native
I0323 12:47:10.755716    9936 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
I0323 12:47:10.755894    9936 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0323 12:47:10.902769    9936 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0323 12:47:10.973166    9936 main.go:110] libmachine: Using SSH client type: native
I0323 12:47:10.973401    9936 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
I0323 12:47:10.973438    9936 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
I0323 12:47:11.130124    9936 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0323 12:47:11.130187    9936 machine.go:89] provisioned docker machine in 4.794183856s
I0323 12:47:11.130202    9936 start.go:189] post-start starting for "vupgrade-20200323T124430.430384955-31589" (driver="docker")
I0323 12:47:11.130212    9936 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0323 12:47:11.130230    9936 start.go:223] determining appropriate runner for "docker"
I0323 12:47:11.130265    9936 start.go:234] Returning KICRunner for "docker" driver
I0323 12:47:11.130380    9936 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0323 12:47:11.313219    9936 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/addons for local assets ...
I0323 12:47:11.313336    9936 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/files for local assets ...
I0323 12:47:11.313538    9936 filesync.go:141] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/files/etc/test/nested/copy/31589/hosts -> hosts in /etc/test/nested/copy/31589
I0323 12:47:11.313625    9936 kic_runner.go:91] Run: sudo mkdir -p /etc/test/nested/copy/31589
I0323 12:47:11.983132    9936 start.go:192] post-start completed in 852.905695ms
I0323 12:47:11.983167    9936 fix.go:62] fixHost completed within 6.500996074s
I0323 12:47:11.983178    9936 start.go:77] releasing machines lock for "vupgrade-20200323T124430.430384955-31589", held for 6.501064491s
I0323 12:47:12.061084    9936 kic_runner.go:91] Run: nslookup kubernetes.io -type=ns
I0323 12:47:12.254797    9936 kic_runner.go:91] Run: curl -sS https://k8s.gcr.io/
I0323 12:47:12.468472    9936 profile.go:138] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/config.json ...
I0323 12:47:12.468843    9936 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0323 12:47:12.668033    9936 kic_runner.go:91] Run: sudo systemctl stop -f containerd
I0323 12:47:13.867414    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 sudo systemctl stop -f containerd]: (1.199344707s)
I0323 12:47:13.867538    9936 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0323 12:47:14.040901    9936 kic_runner.go:91] Run: sudo systemctl is-active --quiet service crio
I0323 12:47:14.263345    9936 kic_runner.go:91] Run: sudo systemctl start docker
I0323 12:47:15.176581    9936 kic_runner.go:91] Run: docker version --format {{.Server.Version}}
I0323 12:47:15.575904    9936 settings.go:123] acquiring lock: {Name:mk7844b2b49fc7069e4585a9e36d85afd163b80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:15.576037    9936 settings.go:131] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig
I0323 12:47:15.579748    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig: {Name:mk4ca215cb999251de8f1db1749cccc5a39c84a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:15.580618    9936 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0323 12:47:15.800536    9936 docker.go:363] Got preloaded images: -- stdout --
busybox:latest
kubernetesui/dashboard:v2.0.0-beta8
kindest/kindnetd:0.5.3
kubernetesui/metrics-scraper:v1.0.2
k8s.gcr.io/kube-proxy-amd64:v1.11.10
k8s.gcr.io/kube-controller-manager-amd64:v1.11.10
k8s.gcr.io/kube-apiserver-amd64:v1.11.10
k8s.gcr.io/kube-scheduler-amd64:v1.11.10
k8s.gcr.io/coredns:1.1.3
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/pause:3.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
k8s.gcr.io/pause:latest
-- /stdout --
I0323 12:47:15.800563    9936 docker.go:368] k8s.gcr.io/kube-proxy:v1.18.0-rc.1 wasn't preloaded
I0323 12:47:15.800574    9936 cache_images.go:72] LoadImages start: [k8s.gcr.io/kube-proxy:v1.18.0-rc.1 k8s.gcr.io/kube-scheduler:v1.18.0-rc.1 k8s.gcr.io/kube-controller-manager:v1.18.0-rc.1 k8s.gcr.io/kube-apiserver:v1.18.0-rc.1 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1 kubernetesui/dashboard:v2.0.0-rc6 kubernetesui/metrics-scraper:v1.0.2]
I0323 12:47:15.808250    9936 image.go:112] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0-rc.1
I0323 12:47:15.808549    9936 image.go:112] retrieving image: k8s.gcr.io/coredns:1.6.7
I0323 12:47:15.808644    9936 image.go:112] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0-rc.1
I0323 12:47:15.808770    9936 image.go:112] retrieving image: kubernetesui/dashboard:v2.0.0-rc6
I0323 12:47:15.808886    9936 image.go:112] retrieving image: k8s.gcr.io/etcd:3.4.3-0
I0323 12:47:15.809090    9936 image.go:112] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0-rc.1
I0323 12:47:15.809528    9936 image.go:112] retrieving image: k8s.gcr.io/pause:3.2
I0323 12:47:15.809542    9936 image.go:112] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0-rc.1
I0323 12:47:15.809747    9936 image.go:112] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0323 12:47:15.810853    9936 image.go:120] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
I0323 12:47:15.810994    9936 image.go:112] retrieving image: kubernetesui/metrics-scraper:v1.0.2
I0323 12:47:15.811279    9936 image.go:120] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0-rc.1: Error response from daemon: reference does not exist
I0323 12:47:15.811400    9936 image.go:120] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0-rc.1: Error response from daemon: reference does not exist
I0323 12:47:15.811512    9936 image.go:120] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0-rc.1: Error response from daemon: reference does not exist
I0323 12:47:15.811645    9936 image.go:120] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v1.8.1: Error response from daemon: reference does not exist
I0323 12:47:15.811780    9936 image.go:120] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0-rc.1: Error response from daemon: reference does not exist
I0323 12:47:15.811657    9936 image.go:120] daemon lookup for kubernetesui/dashboard:v2.0.0-rc6: Error response from daemon: reference does not exist
I0323 12:47:15.812546    9936 image.go:120] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
I0323 12:47:15.812836    9936 image.go:120] daemon lookup for kubernetesui/metrics-scraper:v1.0.2: Error response from daemon: reference does not exist
I0323 12:47:15.812832    9936 image.go:120] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
I0323 12:47:16.032693    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0323 12:47:16.122196    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
I0323 12:47:16.122691    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
I0323 12:47:16.127952    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0-rc.1
I0323 12:47:16.207923    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0-rc.1
I0323 12:47:16.243945    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/pause:3.2
I0323 12:47:16.259844    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0-rc.1
I0323 12:47:16.352738    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0-rc.1
I0323 12:47:16.844509    9936 cache_images.go:99] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0323 12:47:16.844545    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
I0323 12:47:16.984018    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} kubernetesui/dashboard:v2.0.0-rc6
I0323 12:47:17.135231    9936 kic_runner.go:91] Run: docker inspect --format {{.Id}} kubernetesui/metrics-scraper:v1.0.2
I0323 12:47:17.175408    9936 cache_images.go:99] "k8s.gcr.io/kube-apiserver:v1.18.0-rc.1" needs transfer: "k8s.gcr.io/kube-apiserver:v1.18.0-rc.1" does not exist at hash "5347d260989ad371337d6dca1dde0a8d17ef19e28b4e01ba6da051976bad4201" in container runtime
I0323 12:47:17.175442    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0-rc.1
I0323 12:47:17.316146    9936 cache_images.go:99] "k8s.gcr.io/coredns:1.6.7" needs transfer: "k8s.gcr.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0323 12:47:17.316179    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
I0323 12:47:17.319904    9936 cache_images.go:99] "k8s.gcr.io/kube-scheduler:v1.18.0-rc.1" needs transfer: "k8s.gcr.io/kube-scheduler:v1.18.0-rc.1" does not exist at hash "bce13e0cc95a6db4767e9aff1544097e3f61c62d80993f0638ca4c45b85417c0" in container runtime
I0323 12:47:17.319931    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0-rc.1
I0323 12:47:17.388598    9936 cache_images.go:99] "k8s.gcr.io/kube-controller-manager:v1.18.0-rc.1" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.18.0-rc.1" does not exist at hash "b4f6b0bffa351b7981a66489d1bd6fdc5d5842b0d3db8cd9bd6fa17ce99c16fc" in container runtime
I0323 12:47:17.388630    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0-rc.1
I0323 12:47:17.519072    9936 cache_images.go:99] "k8s.gcr.io/kube-proxy:v1.18.0-rc.1" needs transfer: "k8s.gcr.io/kube-proxy:v1.18.0-rc.1" does not exist at hash "189d8a10c70babbd376ade0a084242f567b89a7b5735d274ca0b8086d51548d9" in container runtime
I0323 12:47:17.519105    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0-rc.1
I0323 12:47:17.574468    9936 cache_images.go:99] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0323 12:47:17.574501    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/pause_3.2
I0323 12:47:17.864465    9936 docker.go:153] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
I0323 12:47:17.864565    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/etcd_3.4.3-0
I0323 12:47:18.745044    9936 cache_images.go:99] "kubernetesui/dashboard:v2.0.0-rc6" needs transfer: "kubernetesui/dashboard:v2.0.0-rc6" does not exist at hash "cdc71b5a8a0eeb73b47a23d067d8345d8bea4932028fed34509db9a7266f2080" in container runtime
I0323 12:47:18.745074    9936 cache_images.go:188] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6
I0323 12:47:20.615534    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker inspect --format {{.Id}} kubernetesui/metrics-scraper:v1.0.2]: (3.480264087s)
I0323 12:47:26.846556    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/etcd_3.4.3-0]: (8.981960532s)
I0323 12:47:26.846605    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 from cache
I0323 12:47:26.846630    9936 docker.go:153] Loading image: /var/lib/minikube/images/kube-apiserver_v1.18.0-rc.1
I0323 12:47:26.846709    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.18.0-rc.1
I0323 12:47:31.850774    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/kube-apiserver_v1.18.0-rc.1]: (5.004035034s)
I0323 12:47:31.850821    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0-rc.1 from cache
I0323 12:47:31.850839    9936 docker.go:153] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.0-rc.1
I0323 12:47:31.850913    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.18.0-rc.1
I0323 12:47:33.429709    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/kube-scheduler_v1.18.0-rc.1]: (1.578672353s)
I0323 12:47:33.429758    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0-rc.1 from cache
I0323 12:47:33.429778    9936 docker.go:153] Loading image: /var/lib/minikube/images/coredns_1.6.7
I0323 12:47:33.429873    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/coredns_1.6.7
I0323 12:47:37.784175    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/coredns_1.6.7]: (4.354273419s)
I0323 12:47:37.784220    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 from cache
I0323 12:47:37.784239    9936 docker.go:153] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.0-rc.1
I0323 12:47:37.784318    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0-rc.1
I0323 12:47:40.668067    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0-rc.1]: (2.883723433s)
I0323 12:47:40.668113    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0-rc.1 from cache
I0323 12:47:40.668132    9936 docker.go:153] Loading image: /var/lib/minikube/images/kube-proxy_v1.18.0-rc.1
I0323 12:47:40.668225    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.18.0-rc.1
I0323 12:47:43.950009    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/kube-proxy_v1.18.0-rc.1]: (3.281748699s)
I0323 12:47:43.950104    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0-rc.1 from cache
I0323 12:47:43.950164    9936 docker.go:153] Loading image: /var/lib/minikube/images/pause_3.2
I0323 12:47:43.950243    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/pause_3.2
I0323 12:47:44.556174    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
I0323 12:47:44.556217    9936 docker.go:153] Loading image: /var/lib/minikube/images/dashboard_v2.0.0-rc6
I0323 12:47:44.556296    9936 kic_runner.go:91] Run: docker load -i /var/lib/minikube/images/dashboard_v2.0.0-rc6
I0323 12:47:51.197608    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 docker load -i /var/lib/minikube/images/dashboard_v2.0.0-rc6]: (6.641284685s)
I0323 12:47:51.197657    9936 cache_images.go:210] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 from cache
I0323 12:47:51.197677    9936 cache_images.go:106] Successfully loaded all cached images
I0323 12:47:51.197686    9936 cache_images.go:76] LoadImages completed in 35.397091324s
I0323 12:47:51.197747    9936 kubeadm.go:126] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.5 APIServerPort:8443 KubernetesVersion:v1.18.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd ClusterName:vupgrade-20200323T124430.430384955-31589 NodeName:vupgrade-20200323T124430.430384955-31589 DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.5"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.5 ControlPlaneAddress:172.17.0.5}
I0323 12:47:51.197908    9936 kubeadm.go:130] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.5
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "vupgrade-20200323T124430.430384955-31589"
  kubeletExtraArgs:
    node-ip: 172.17.0.5
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.5"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 172.17.0.5:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0-rc.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
I0323 12:47:51.197990    9936 kic_runner.go:91] Run: docker info --format {{.CgroupDriver}}
I0323 12:47:51.522132    9936 kubeadm.go:565] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0-rc.1/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=vupgrade-20200323T124430.430384955-31589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.5 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
config:
{KubernetesVersion:v1.18.0-rc.1 ClusterName:vupgrade-20200323T124430.430384955-31589 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:}
I0323 12:47:51.522275    9936 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0-rc.1
I0323 12:47:51.714872    9936 binaries.go:45] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.0-rc.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.18.0-rc.1': No such file or directory
Initiating transfer...
I0323 12:47:51.714978    9936 kic_runner.go:91] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.0-rc.1
I0323 12:47:51.913279    9936 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet"
W0323 12:47:52.127336    9936 binaries.go:55] unable to stop kubelet: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet": exit status 1
stdout:
stderr:
I0323 12:47:52.127639    9936 binary.go:57] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0-rc.1/bin/linux/amd64/kubectl.sha256
I0323 12:47:52.127759    9936 binary.go:57] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0-rc.1/bin/linux/amd64/kubelet.sha256
I0323 12:47:52.127922    9936 binary.go:57] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0-rc.1/bin/linux/amd64/kubeadm.sha256
I0323 12:47:53.737892    9936 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0323 12:47:55.268318    9936 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0323 12:47:55.478446    9936 kic_runner.go:91] Run: /bin/bash -c "sudo mv /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo mv /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0323 12:47:55.907341    9936 certs.go:51] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589 for IP: 172.17.0.5
I0323 12:47:55.907422    9936 certs.go:167] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/ca.key
I0323 12:47:55.907451    9936 certs.go:167] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/proxy-client-ca.key
I0323 12:47:55.907518    9936 certs.go:265] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.key
I0323 12:47:55.907528    9936 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.crt with IP's: []
I0323 12:47:56.139913    9936 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.crt ...
I0323 12:47:56.139952    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.crt: {Name:mk9857972dc7f8065497e33162c17d0107002aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:56.141120    9936 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.key ...
I0323 12:47:56.141154    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.key: {Name:mk79a5dab5401d9d2dc6330f6e78b86ae18cfb86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:56.141319    9936 certs.go:265] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.key.deff8e7f
I0323 12:47:56.141333    9936 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.crt.deff8e7f with IP's: [172.17.0.5 10.96.0.1 127.0.0.1 10.0.0.1]
I0323 12:47:56.428163    9936 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.crt.deff8e7f ...
I0323 12:47:56.428209    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.crt.deff8e7f: {Name:mk05d0adf139cc9f282d1be4a64bd705776450e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:56.428472    9936 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.key.deff8e7f ...
I0323 12:47:56.428493    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.key.deff8e7f: {Name:mk3b220ba414fe56ac95d2a3c72fb83c9a9988b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:56.428640    9936 certs.go:276] copying /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.crt.deff8e7f -> /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.crt
I0323 12:47:56.428774    9936 certs.go:280] copying /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.key.deff8e7f -> /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/apiserver.key
I0323 12:47:56.428881    9936 certs.go:265] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/proxy-client.key
I0323 12:47:56.428899    9936 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/proxy-client.crt with IP's: []
I0323 12:47:56.728637    9936 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/proxy-client.crt ...
I0323 12:47:56.728678    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/proxy-client.crt: {Name:mkce54846149d71dcbe1902b2e8cc535db47891d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:47:56.728938    9936 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/proxy-client.key ...
I0323 12:47:56.728963    9936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/proxy-client.key: {Name:mkb59d43e74db00a68a21152fc67f12afd1f3bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 12:48:01.735812    9936 kic_runner.go:91] Run: openssl version
I0323 12:48:01.971897    9936 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/31589.pem && ln -fs /usr/share/ca-certificates/31589.pem /etc/ssl/certs/31589.pem"
I0323 12:48:02.231967    9936 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31589.pem
I0323 12:48:02.507131    9936 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/31589.pem /etc/ssl/certs/51391683.0"
I0323 12:48:02.799811    9936 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0323 12:48:03.096139    9936 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0323 12:48:03.314277    9936 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0323 12:48:03.524632    9936 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0323 12:48:03.758412    9936 kubeadm.go:369] restartCluster start
I0323 12:48:03.758631    9936 kic_runner.go:91] Run: sudo test -d /data/minikube
I0323 12:48:03.967502    9936 kubeadm.go:148] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
stdout:
stderr:
I0323 12:48:04.073171    9936 kapi.go:58] client config for vupgrade-20200323T124430.430384955-31589: &rest.Config{Host:"https://127.0.0.1:32939", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/profiles/vupgrade-20200323T124430.430384955-31589/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15ec8c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0323 12:48:04.075982    9936 kic_runner.go:91] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0323 12:48:04.316469    9936 kubeadm.go:345] needs reset: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml  2020-03-23 19:45:52.000000000 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new  2020-03-23 19:47:54.000000000 +0000
@@ -1,17 +1,38 @@
-apiVersion: kubeadm.k8s.io/v1alpha1
-kind: MasterConfiguration
-noTaintMaster: true
-api:
+apiVersion: kubeadm.k8s.io/v1beta2
+kind: InitConfiguration
+localAPIEndpoint:
advertiseAddress: 172.17.0.5
bindPort: 8443
-  controlPlaneEndpoint: localhost
-kubernetesVersion: v1.11.10
+bootstrapTokens:
+  - groups:
+      - system:bootstrappers:kubeadm:default-node-token
+    ttl: 24h0m0s
+    usages:
+      - signing
+      - authentication
+nodeRegistration:
+  criSocket: /var/run/dockershim.sock
+  name: "vupgrade-20200323T124430.430384955-31589"
+  kubeletExtraArgs:
+    node-ip: 172.17.0.5
+  taints: []
+---
+apiVersion: kubeadm.k8s.io/v1beta2
+kind: ClusterConfiguration
+apiServer:
+  certSANs: ["127.0.0.1", "localhost", "172.17.0.5"]
+  extraArgs:
+    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
+clusterName: mk
+controlPlaneEndpoint: 172.17.0.5:8443
+dns:
+  type: CoreDNS
+etcd:
+  local:
+    dataDir: /var/lib/minikube/etcd
+kubernetesVersion: v1.18.0-rc.1
networking:
+  dnsDomain: cluster.local
+  podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
-etcd:
-  dataDir: /var/lib/minikube/etcd
-nodeName: "m01"
-apiServerCertSANs: ["127.0.0.1", "localhost", "172.17.0.5"]
-apiServerExtraArgs:
-  enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
-- /stdout --
I0323 12:48:04.316588    9936 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.5:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0323 12:48:04.567885    9936 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.5:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0323 12:48:04.910773    9936 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.5:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0323 12:48:05.275936    9936 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.5:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0323 12:48:05.627255    9936 kic_runner.go:91] Run: sudo mv /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0323 12:48:06.003808    9936 kubeadm.go:425] resetting cluster from /var/tmp/minikube/kubeadm.yaml
I0323 12:48:06.003886    9936 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0323 12:48:06.521435    9936 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0323 12:48:08.918812    9936 kic_runner.go:118] Done: [docker exec --privileged vupgrade-20200323T124430.430384955-31589 /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml]: (2.397338315s)
I0323 12:48:08.918921    9936 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0323 12:48:09.176574    9936 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0323 12:48:09.578429    9936 kverify.go:50] waiting for apiserver process to appear ...
I0323 12:48:09.578533    9936 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 12:48:09.878213    9936 kverify.go:69] duration metric: took 299.78274ms to wait for apiserver process to appear ...
I0323 12:48:09.878245    9936 kverify.go:147] waiting for kube-system pods to appear ...
I0323 12:48:18.456363    9936 kverify.go:165] 5 kube-system pods found
I0323 12:48:18.456484    9936 kverify.go:167] "coredns-78fcdf6894-7kqft" [0565939b-6d3f-11ea-9be4-024280a33b6e] Running
I0323 12:48:18.456546    9936 kverify.go:167] "coredns-78fcdf6894-kw22c" [0561c9a5-6d3f-11ea-9be4-024280a33b6e] Running
I0323 12:48:18.456590    9936 kverify.go:167] "kindnet-fbqhm" [05729e91-6d3f-11ea-9be4-024280a33b6e] Running
I0323 12:48:18.456632    9936 kverify.go:167] "kube-proxy-4vtfv" [0572d57c-6d3f-11ea-9be4-024280a33b6e] Running
I0323 12:48:18.456680    9936 kverify.go:167] "storage-provisioner" [055606d2-6d3f-11ea-9be4-024280a33b6e] Running
I0323 12:48:18.456721    9936 kverify.go:178] duration metric: took 8.578465405s to wait for pod list to return data ...
I0323 12:48:18.456856    9936 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0323 12:48:19.063345    9936 kubeadm.go:373] restartCluster took 15.3049058s
I0323 12:48:19.063481    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 12:48:19.499146    9936 logs.go:203] 3 containers: [b9b058b22d02 10e4cc4e5e81 a2467a803340]
I0323 12:48:19.499267    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 12:48:19.864426    9936 logs.go:203] 3 containers: [1b67cc7bedb6 f70214037574 eec47cffd82c]
I0323 12:48:19.864557    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 12:48:20.274419    9936 logs.go:203] 2 containers: [b82f8779a911 c8542f0372b1]
I0323 12:48:20.274536    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 12:48:20.605102    9936 logs.go:203] 3 containers: [1fcb859c177a 345ca19d2481 ec7654399148]
I0323 12:48:20.605222    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 12:48:20.956021    9936 logs.go:203] 1 containers: [60ecd5738bd1]
I0323 12:48:20.956151    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0323 12:48:21.263799    9936 logs.go:203] 0 containers: []
W0323 12:48:21.263829    9936 logs.go:205] No container was found matching "kubernetes-dashboard"
I0323 12:48:21.263912    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 12:48:21.544755    9936 logs.go:203] 1 containers: [56f0b07777a9]
I0323 12:48:21.544883    9936 kic_runner.go:91] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 12:48:21.832542    9936 logs.go:203] 3 containers: [4b01d60cbde5 112be7443027 aded654a680b]
I0323 12:48:21.832599    9936 logs.go:117] Gathering logs for kube-apiserver [10e4cc4e5e81] ...
I0323 12:48:21.832659    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 10e4cc4e5e81"
I0323 12:48:22.088208    9936 logs.go:117] Gathering logs for etcd [eec47cffd82c] ...
I0323 12:48:22.088303    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 eec47cffd82c"
I0323 12:48:22.336984    9936 logs.go:117] Gathering logs for kube-controller-manager [aded654a680b] ...
I0323 12:48:22.337088    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 aded654a680b"
I0323 12:48:22.626468    9936 logs.go:117] Gathering logs for storage-provisioner [56f0b07777a9] ...
I0323 12:48:22.626581    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 56f0b07777a9"
I0323 12:48:22.915228    9936 logs.go:117] Gathering logs for kubelet ...
I0323 12:48:22.915327    9936 kic_runner.go:91] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0323 12:48:23.148995    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:13 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:13.822822    1175 pod_workers.go:191] Error syncing pod 880e7571d217187114090f82749b08a9 ("kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"
W0323 12:48:23.152401    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:15 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:15.720574    1175 pod_workers.go:191] Error syncing pod 880e7571d217187114090f82749b08a9 ("kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-vupgrade-20200323t124430.430384955-31589_kube-system(880e7571d217187114090f82749b08a9)"
W0323 12:48:23.154051    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:16 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:16.553076    1175 pod_workers.go:191] Error syncing pod 08bc5143665e3d99033b8cf7b7179e9c ("kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"
W0323 12:48:23.154719    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:16 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:16.608557    1175 pod_workers.go:191] Error syncing pod 3c853d9152c593aabdb79b7b49733896 ("kube-scheduler-vupgrade-20200323t124430.430384955-31589_kube-system(3c853d9152c593aabdb79b7b49733896)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-vupgrade-20200323t124430.430384955-31589_kube-system(3c853d9152c593aabdb79b7b49733896)"
W0323 12:48:23.155086    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:16 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:16.648041    1175 pod_workers.go:191] Error syncing pod e2909b3d067f74ea8cd61a4106894b62 ("etcd-vupgrade-20200323t124430.430384955-31589_kube-system(e2909b3d067f74ea8cd61a4106894b62)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-vupgrade-20200323t124430.430384955-31589_kube-system(e2909b3d067f74ea8cd61a4106894b62)"
W0323 12:48:23.157470    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:17 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:17.618245    1175 pod_workers.go:191] Error syncing pod 08bc5143665e3d99033b8cf7b7179e9c ("kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"
W0323 12:48:23.157904    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:17 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:17.622309    1175 pod_workers.go:191] Error syncing pod 3c853d9152c593aabdb79b7b49733896 ("kube-scheduler-vupgrade-20200323t124430.430384955-31589_kube-system(3c853d9152c593aabdb79b7b49733896)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-vupgrade-20200323t124430.430384955-31589_kube-system(3c853d9152c593aabdb79b7b49733896)"
W0323 12:48:23.158309    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:17 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:17.656733    1175 pod_workers.go:191] Error syncing pod e2909b3d067f74ea8cd61a4106894b62 ("etcd-vupgrade-20200323t124430.430384955-31589_kube-system(e2909b3d067f74ea8cd61a4106894b62)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-vupgrade-20200323t124430.430384955-31589_kube-system(e2909b3d067f74ea8cd61a4106894b62)"
W0323 12:48:23.159137    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:18 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:18.027203    1175 kubelet_node_status.go:92] Unable to register node "vupgrade-20200323t124430.430384955-31589" with API server: nodes "vupgrade-20200323t124430.430384955-31589" is forbidden: node "m01" cannot modify node "vupgrade-20200323t124430.430384955-31589"
W0323 12:48:23.165741    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:18 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:18.441859    1175 csi_plugin.go:271] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: csinodes.storage.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get csinodes.storage.k8s.io at the cluster scope
W0323 12:48:23.168147    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:18 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:18.802740    1175 pod_workers.go:191] Error syncing pod e2909b3d067f74ea8cd61a4106894b62 ("etcd-vupgrade-20200323t124430.430384955-31589_kube-system(e2909b3d067f74ea8cd61a4106894b62)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-vupgrade-20200323t124430.430384955-31589_kube-system(e2909b3d067f74ea8cd61a4106894b62)"
W0323 12:48:23.169690    9936 logs.go:132] Found kubelet problem: Mar 23 19:48:19 vupgrade-20200323T124430.430384955-31589 kubelet[1175]: E0323 19:48:19.276848    1175 pod_workers.go:191] Error syncing pod 08bc5143665e3d99033b8cf7b7179e9c ("kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-vupgrade-20200323t124430.430384955-31589_kube-system(08bc5143665e3d99033b8cf7b7179e9c)"
I0323 12:48:23.189664    9936 logs.go:117] Gathering logs for dmesg ...
I0323 12:48:23.189772    9936 kic_runner.go:91] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 12:48:23.397884    9936 logs.go:117] Gathering logs for kube-apiserver [b9b058b22d02] ...
I0323 12:48:23.397988    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 b9b058b22d02"
W0323 12:48:23.686136    9936 logs.go:132] Found kube-apiserver [b9b058b22d02] problem: Error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
I0323 12:48:23.723492    9936 logs.go:117] Gathering logs for kube-apiserver [a2467a803340] ...
I0323 12:48:23.723617    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 a2467a803340"
I0323 12:48:24.248274    9936 logs.go:117] Gathering logs for kube-scheduler [345ca19d2481] ...
I0323 12:48:24.248379    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 345ca19d2481"
I0323 12:48:24.504733    9936 logs.go:117] Gathering logs for kube-proxy [60ecd5738bd1] ...
I0323 12:48:24.504821    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 60ecd5738bd1"
I0323 12:48:24.734094    9936 logs.go:117] Gathering logs for kube-controller-manager [4b01d60cbde5] ...
I0323 12:48:24.734189    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 4b01d60cbde5"
W0323 12:48:24.992616    9936 logs.go:132] Found kube-controller-manager [4b01d60cbde5] problem: failed to create listener: failed to listen on 0.0.0.0:10252: listen tcp 0.0.0.0:10252: bind: address already in use
I0323 12:48:24.992654    9936 logs.go:117] Gathering logs for etcd [f70214037574] ...
I0323 12:48:24.992715    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 f70214037574"
I0323 12:48:25.257531    9936 logs.go:117] Gathering logs for coredns [b82f8779a911] ...
I0323 12:48:25.257627    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 b82f8779a911"
I0323 12:48:25.565268    9936 logs.go:117] Gathering logs for coredns [c8542f0372b1] ...
I0323 12:48:25.565354    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 c8542f0372b1"
I0323 12:48:25.853824    9936 logs.go:117] Gathering logs for kube-scheduler [1fcb859c177a] ...
I0323 12:48:25.853959    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 1fcb859c177a"
W0323 12:48:26.144165    9936 logs.go:132] Found kube-scheduler [1fcb859c177a] problem: failed to create listener: failed to listen on 0.0.0.0:10251: listen tcp 0.0.0.0:10251: bind: address already in use
I0323 12:48:26.144200    9936 logs.go:117] Gathering logs for kube-scheduler [ec7654399148] ...
I0323 12:48:26.144270    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 ec7654399148"
I0323 12:48:26.453275    9936 logs.go:117] Gathering logs for describe nodes ...
I0323 12:48:26.453378    9936 kic_runner.go:91] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0-rc.1/kubectl describe node -A --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 12:48:36.805928    9936 logs.go:124] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0-rc.1/kubectl describe node -A --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0-rc.1/kubectl describe node -A --kubeconfig=/var/lib/minikube/kubeconfig": exit status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output: 
** stderr ** 
Unable to connect to the server: net/http: TLS handshake timeout
** /stderr **
I0323 12:48:36.805967    9936 logs.go:117] Gathering logs for etcd [1b67cc7bedb6] ...
I0323 12:48:36.806031    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 1b67cc7bedb6"
W0323 12:48:37.084516    9936 logs.go:132] Found etcd [1b67cc7bedb6] problem: 2020-03-23 19:48:15.585390 C | etcdmain: listen tcp 127.0.0.1:2379: bind: address already in use
I0323 12:48:37.084631    9936 logs.go:117] Gathering logs for kube-controller-manager [112be7443027] ...
I0323 12:48:37.084843    9936 kic_runner.go:91] Run: /bin/bash -c "docker logs --tail 400 112be7443027"
I0323 12:48:37.382452    9936 logs.go:117] Gathering logs for Docker ...
I0323 12:48:37.382534    9936 kic_runner.go:91] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 12:48:37.636285    9936 logs.go:117] Gathering logs for container status ...
I0323 12:48:37.636429    9936 kic_runner.go:91] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0323 12:48:37.867901    9936 exit.go:101] Error starting cluster: addon phase cmd:"/bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml\"": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
stdout:
stderr:
W0323 19:48:18.827477    3780 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase addon/coredns: couldn't retrieve DNS addon deployments: an error on the server ("apiserver is shutting down.") has prevented the request from succeeding (get deployments.apps)
To see the stack trace of this error execute with --v=5 or higher
* 
X Error starting cluster: addon phase cmd:"/bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml\"": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0-rc.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
stdout:
stderr:
W0323 19:48:18.827477    3780 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase addon/coredns: couldn't retrieve DNS addon deployments: an error on the server ("apiserver is shutting down.") has prevented the request from succeeding (get deployments.apps)
To see the stack trace of this error execute with --v=5 or higher
* 
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- https://github.com/kubernetes/minikube/issues/new/choose
** /stderr **
version_upgrade_test.go:100: [out/minikube-linux-amd64 start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.18.0-rc.1 --alsologtostderr -v=1 --driver=docker ] failed: exit status 70
version_upgrade_test.go:103: (dbg) Run:  kubectl --context vupgrade-20200323T124430.430384955-31589 version --output=json
version_upgrade_test.go:103: (dbg) Done: kubectl --context vupgrade-20200323T124430.430384955-31589 version --output=json: (7.987102498s)
version_upgrade_test.go:125: (dbg) Run:  /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:125: (dbg) Non-zero exit: /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker : exit status 78 (438.403552ms)
-- stdout --
* [vupgrade-20200323T124430.430384955-31589] minikube v1.8.2 on Debian 9.12
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube
- MINIKUBE_LOCATION=7165
* Using the docker driver based on existing profile
! You have selected Kubernetes v1.11.10, but the existing cluster is running Kubernetes v1.18.0-rc.1
-- /stdout --
** stderr ** 
I0323 12:48:45.934007   29903 notify.go:125] Checking for updates...
I0323 12:48:46.143376   29903 start.go:252] hostinfo: {"hostname":"podman-integration-slave9","uptime":16284,"bootTime":1584976642,"procs":379,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
I0323 12:48:46.144480   29903 start.go:262] virtualization: kvm host
I0323 12:48:46.145374   29903 driver.go:226] Setting default libvirt URI to qemu:///system
I0323 12:48:46.296872   29903 start.go:299] selected driver: docker
I0323 12:48:46.296887   29903 start.go:488] validating driver "docker" against &{Name:vupgrade-20200323T124430.430384955-31589 KeepContext:false EmbedCerts:false MinikubeISO: Memory:3700 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0-rc.1 ClusterName:vupgrade-20200323T124430.430384955-31589 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Va
lue:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0-rc.1 ControlPlane:true Worker:true}] Addons:map[]}
I0323 12:48:46.296973   29903 start.go:494] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0323 12:48:46.297653   29903 start.go:958] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
X Non-destructive downgrades are not supported, but you can proceed with one of the following options:
1) Recreate the cluster with Kubernetes v1.11.10, by running:
minikube delete -p vupgrade-20200323T124430.430384955-31589
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.11.10
2) Create a second cluster with Kubernetes v1.11.10, by running:
minikube start -p vupgrade-20200323T124430.430384955-315892 --kubernetes-version=1.11.10
3) Use the existing cluster at version Kubernetes v1.18.0-rc.1, by running:
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.18.0-rc.1
** /stderr **
version_upgrade_test.go:125: (dbg) Run:  /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:125: (dbg) Non-zero exit: /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker : exit status 78 (402.958911ms)
-- stdout --
* [vupgrade-20200323T124430.430384955-31589] minikube v1.8.2 on Debian 9.12
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube
- MINIKUBE_LOCATION=7165
* Using the docker driver based on existing profile
! You have selected Kubernetes v1.11.10, but the existing cluster is running Kubernetes v1.18.0-rc.1
-- /stdout --
** stderr ** 
I0323 12:48:47.176663   30149 notify.go:125] Checking for updates...
I0323 12:48:47.387881   30149 start.go:252] hostinfo: {"hostname":"podman-integration-slave9","uptime":16285,"bootTime":1584976642,"procs":381,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
I0323 12:48:47.388698   30149 start.go:262] virtualization: kvm host
I0323 12:48:47.389514   30149 driver.go:226] Setting default libvirt URI to qemu:///system
I0323 12:48:47.518905   30149 start.go:299] selected driver: docker
I0323 12:48:47.518918   30149 start.go:488] validating driver "docker" against &{Name:vupgrade-20200323T124430.430384955-31589 KeepContext:false EmbedCerts:false MinikubeISO: Memory:3700 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0-rc.1 ClusterName:vupgrade-20200323T124430.430384955-31589 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Va
lue:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0-rc.1 ControlPlane:true Worker:true}] Addons:map[]}
I0323 12:48:47.519017   30149 start.go:494] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0323 12:48:47.520614   30149 start.go:958] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
X Non-destructive downgrades are not supported, but you can proceed with one of the following options:
1) Recreate the cluster with Kubernetes v1.11.10, by running:
minikube delete -p vupgrade-20200323T124430.430384955-31589
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.11.10
2) Create a second cluster with Kubernetes v1.11.10, by running:
minikube start -p vupgrade-20200323T124430.430384955-315892 --kubernetes-version=1.11.10
3) Use the existing cluster at version Kubernetes v1.18.0-rc.1, by running:
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.18.0-rc.1
** /stderr **
version_upgrade_test.go:125: (dbg) Run:  /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:125: (dbg) Non-zero exit: /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker : exit status 78 (326.994329ms)
-- stdout --
* [vupgrade-20200323T124430.430384955-31589] minikube v1.8.2 on Debian 9.12
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube
- MINIKUBE_LOCATION=7165
* Using the docker driver based on existing profile
! You have selected Kubernetes v1.11.10, but the existing cluster is running Kubernetes v1.18.0-rc.1
-- /stdout --
** stderr ** 
I0323 12:48:49.023393   30889 notify.go:125] Checking for updates...
I0323 12:48:49.184186   30889 start.go:252] hostinfo: {"hostname":"podman-integration-slave9","uptime":16287,"bootTime":1584976642,"procs":383,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
I0323 12:48:49.184915   30889 start.go:262] virtualization: kvm host
I0323 12:48:49.185464   30889 driver.go:226] Setting default libvirt URI to qemu:///system
I0323 12:48:49.300413   30889 start.go:299] selected driver: docker
I0323 12:48:49.300428   30889 start.go:488] validating driver "docker" against &{Name:vupgrade-20200323T124430.430384955-31589 KeepContext:false EmbedCerts:false MinikubeISO: Memory:3700 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0-rc.1 ClusterName:vupgrade-20200323T124430.430384955-31589 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Va
lue:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0-rc.1 ControlPlane:true Worker:true}] Addons:map[]}
I0323 12:48:49.300610   30889 start.go:494] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0323 12:48:49.301530   30889 start.go:958] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
X Non-destructive downgrades are not supported, but you can proceed with one of the following options:
1) Recreate the cluster with Kubernetes v1.11.10, by running:
minikube delete -p vupgrade-20200323T124430.430384955-31589
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.11.10
2) Create a second cluster with Kubernetes v1.11.10, by running:
minikube start -p vupgrade-20200323T124430.430384955-315892 --kubernetes-version=1.11.10
3) Use the existing cluster at version Kubernetes v1.18.0-rc.1, by running:
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.18.0-rc.1
** /stderr **
version_upgrade_test.go:125: (dbg) Run:  /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:125: (dbg) Non-zero exit: /tmp/minikube-release.134122834.exe start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.11.10 --alsologtostderr -v=1 --driver=docker : exit status 78 (337.894019ms)
-- stdout --
* [vupgrade-20200323T124430.430384955-31589] minikube v1.8.2 on Debian 9.12
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-7165-29786-ba4e13437e282eac98b838e70e55d9c133e73efa/.minikube
- MINIKUBE_LOCATION=7165
* Using the docker driver based on existing profile
! You have selected Kubernetes v1.11.10, but the existing cluster is running Kubernetes v1.18.0-rc.1
-- /stdout --
** stderr ** 
I0323 12:48:51.122652   31243 notify.go:125] Checking for updates...
I0323 12:48:51.271296   31243 start.go:252] hostinfo: {"hostname":"podman-integration-slave9","uptime":16289,"bootTime":1584976642,"procs":382,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
I0323 12:48:51.271956   31243 start.go:262] virtualization: kvm host
I0323 12:48:51.272647   31243 driver.go:226] Setting default libvirt URI to qemu:///system
I0323 12:48:51.400717   31243 start.go:299] selected driver: docker
I0323 12:48:51.400732   31243 start.go:488] validating driver "docker" against &{Name:vupgrade-20200323T124430.430384955-31589 KeepContext:false EmbedCerts:false MinikubeISO: Memory:3700 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0-rc.1 ClusterName:vupgrade-20200323T124430.430384955-31589 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Va
lue:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.5 Port:8443 KubernetesVersion:v1.18.0-rc.1 ControlPlane:true Worker:true}] Addons:map[]}
I0323 12:48:51.400841   31243 start.go:494] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0323 12:48:51.401409   31243 start.go:958] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
X Non-destructive downgrades are not supported, but you can proceed with one of the following options:
1) Recreate the cluster with Kubernetes v1.11.10, by running:
minikube delete -p vupgrade-20200323T124430.430384955-31589
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.11.10
2) Create a second cluster with Kubernetes v1.11.10, by running:
minikube start -p vupgrade-20200323T124430.430384955-315892 --kubernetes-version=1.11.10
3) Use the existing cluster at version Kubernetes v1.18.0-rc.1, by running:
minikube start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=1.18.0-rc.1
** /stderr **
version_upgrade_test.go:134: (dbg) Run:  out/minikube-linux-amd64 start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.18.0-rc.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:134: (dbg) Done: out/minikube-linux-amd64 start -p vupgrade-20200323T124430.430384955-31589 --kubernetes-version=v1.18.0-rc.1 --alsologtostderr -v=1 --driver=docker : (43.548213237s)
version_upgrade_test.go:138: *** TestVersionUpgrade FAILED at 2020-03-23 12:49:34.968400144 -0700 PDT m=+1398.830089234
helpers.go:188: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p vupgrade-20200323T124430.430384955-31589
helpers.go:188: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p vupgrade-20200323T124430.430384955-31589: (1.210680357s)
helpers.go:194: <<< TestVersionUpgrade FAILED: start of post-mortem logs <<<
helpers.go:195: (dbg) Run:  out/minikube-linux-amd64 -p vupgrade-20200323T124430.430384955-31589 logs --problems
helpers.go:195: (dbg) Done: out/minikube-linux-amd64 -p vupgrade-20200323T124430.430384955-31589 logs --problems: (10.251909706s)
helpers.go:200: TestVersionUpgrade logs: * Problems detected in kube-controller-manager [ae9fac2c5d4c]:
- E0323 19:48:46.006910       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
- E0323 19:48:46.058090       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
* Problems detected in kubelet:
- Mar 23 19:49:20 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:20.519184    7254 controller.go:136] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
- Mar 23 19:49:20 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:20.567789    7254 kubelet_node_status.go:92] Unable to register node "vupgrade-20200323t124430.430384955-31589" with API server: nodes "vupgrade-20200323t124430.430384955-31589" is forbidden: node "m01" is not allowed to modify node "vupgrade-20200323t124430.430384955-31589"
- Mar 23 19:49:21 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:21.011776    7254 csi_plugin.go:271] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: csinodes.storage.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope: can only access CSINode with the same name as the requesting node
- Mar 23 19:49:22 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:22.259245    7254 kubelet_node_status.go:92] Unable to register node "vupgrade-20200323t124430.430384955-31589" with API server: nodes "vupgrade-20200323t124430.430384955-31589" is forbidden: node "m01" is not allowed to modify node "vupgrade-20200323t124430.430384955-31589"
- Mar 23 19:49:23 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:23.721265    7254 controller.go:136] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
- Mar 23 19:49:24 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:24.682626    7254 csi_plugin.go:271] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: csinodes.storage.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope: can only access CSINode with the same name as the requesting node
- Mar 23 19:49:25 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:25.495488    7254 kubelet_node_status.go:92] Unable to register node "vupgrade-20200323t124430.430384955-31589" with API server: nodes "vupgrade-20200323t124430.430384955-31589" is forbidden: node "m01" is not allowed to modify node "vupgrade-20200323t124430.430384955-31589"
- Mar 23 19:49:30 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:30.123762    7254 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
- Mar 23 19:49:31 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:31.947042    7254 kubelet_node_status.go:92] Unable to register node "vupgrade-20200323t124430.430384955-31589" with API server: nodes "vupgrade-20200323t124430.430384955-31589" is forbidden: node "m01" is not allowed to modify node "vupgrade-20200323t124430.430384955-31589"
- Mar 23 19:49:37 vupgrade-20200323T124430.430384955-31589 kubelet[7254]: E0323 19:49:37.125524    7254 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "vupgrade-20200323t124430.430384955-31589" is forbidden: User "system:node:m01" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
helpers.go:202: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p vupgrade-20200323T124430.430384955-31589
helpers.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p vupgrade-20200323T124430.430384955-31589: exit status 2 (1.609183846s)
-- stdout --
Running
-- /stdout --
helpers.go:202: status error: exit status 2 (may be ok)
helpers.go:208: (dbg) Run:  kubectl --context vupgrade-20200323T124430.430384955-31589 get po -A --show-labels
helpers.go:208: (dbg) Done: kubectl --context vupgrade-20200323T124430.430384955-31589 get po -A --show-labels: (1.294324095s)
helpers.go:213: (dbg) kubectl --context vupgrade-20200323T124430.430384955-31589 get po -A --show-labels:
NAMESPACE     NAME                       READY   STATUS        RESTARTS   AGE    LABELS
kube-system   coredns-66bff467f8-f9sj7   0/1     Pending       0          7s     k8s-app=kube-dns,pod-template-hash=66bff467f8
kube-system   coredns-66bff467f8-qtr6v   0/1     Pending       0          7s     k8s-app=kube-dns,pod-template-hash=66bff467f8
kube-system   coredns-78fcdf6894-7kqft   1/1     Terminating   0          3m7s   k8s-app=kube-dns,pod-template-hash=3497892450
kube-system   coredns-78fcdf6894-kw22c   1/1     Running       0          3m7s   k8s-app=kube-dns,pod-template-hash=3497892450
kube-system   kindnet-fbqhm              1/1     Running       0          3m7s   app=kindnet,controller-revision-hash=3896162949,k8s-app=kindnet,pod-template-generation=1,tier=node
kube-system   kindnet-xd6sm              0/1     Pending       0          0s     app=kindnet,controller-revision-hash=3896162949,k8s-app=kindnet,pod-template-generation=1,tier=node
kube-system   kube-proxy-4vtfv           1/1     Terminating   0          3m6s   controller-revision-hash=3030754821,k8s-app=kube-proxy,pod-template-generation=1
kube-system   storage-provisioner        1/1     Running       0          3m7s   addonmanager.kubernetes.io/mode=Reconcile,integration-test=storage-provisioner
helpers.go:215: (dbg) Run:  kubectl --context vupgrade-20200323T124430.430384955-31589 describe node
helpers.go:219: (dbg) kubectl --context vupgrade-20200323T124430.430384955-31589 describe node:
Name:               m01
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=m01
kubernetes.io/os=linux
minikube.k8s.io/commit=eb13446e786c9ef70cb0a9f85a633194e62396a1
minikube.k8s.io/name=vupgrade-20200323T124430.430384955-31589
minikube.k8s.io/updated_at=2020_03_23T12_46_39_0700
minikube.k8s.io/version=v1.8.2
node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 23 Mar 2020 12:46:31 -0700
Taints:             <none>
Unschedulable:      false
Lease:
HolderIdentity:  <unset>
AcquireTime:     <unset>
RenewTime:       <unset>
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
OutOfDisk        False   Mon, 23 Mar 2020 12:46:51 -0700   Mon, 23 Mar 2020 12:46:27 -0700   KubeletHasSufficientDisk     kubelet has sufficient disk space available
MemoryPressure   False   Mon, 23 Mar 2020 12:46:51 -0700   Mon, 23 Mar 2020 12:46:27 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 23 Mar 2020 12:46:51 -0700   Mon, 23 Mar 2020 12:46:27 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 23 Mar 2020 12:46:51 -0700   Mon, 23 Mar 2020 12:46:27 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Mon, 23 Mar 2020 12:46:51 -0700   Mon, 23 Mar 2020 12:46:27 -0700   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP:  172.17.0.5
Hostname:    m01
Capacity:
cpu:                4
ephemeral-storage:  515928484Ki
hugepages-1Gi:      0
hugepages-2Mi:      0
memory:             15404700Ki
pods:               110
Allocatable:
cpu:                4
ephemeral-storage:  475479690068
hugepages-1Gi:      0
hugepages-2Mi:      0
memory:             15302300Ki
pods:               110
System Info:
Machine ID:                 89ec74528efc47c1925dd1f0ba9e58fc
System UUID:                3e5e8290-f475-444b-a223-d254e38d2017
Boot ID:                    a602e373-d3f5-4bf6-a793-1f86dc247ed3
Kernel Version:             4.9.0-12-amd64
OS Image:                   Ubuntu 19.10
Operating System:           linux
Architecture:               amd64
Container Runtime Version:  docker://19.3.2
Kubelet Version:            v1.11.10
Kube-Proxy Version:         v1.11.10
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------                   ----                        ------------  ----------  ---------------  -------------  ---
kube-system                 coredns-66bff467f8-f9sj7    100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     7s
kube-system                 coredns-66bff467f8-qtr6v    100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     7s
kube-system                 coredns-78fcdf6894-7kqft    100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     3m7s
kube-system                 coredns-78fcdf6894-kw22c    100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     3m7s
kube-system                 kindnet-fbqhm               100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      3m7s
kube-system                 kube-proxy-4vtfv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
kube-system                 storage-provisioner         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                500m (12%)  100m (2%)
memory             330Mi (2%)  730Mi (4%)
ephemeral-storage  0 (0%)      0 (0%)
Events:
Type    Reason                   Age                    From          Message
----    ------                   ----                   ----          -------
Normal  NodeHasSufficientDisk    3m38s (x6 over 3m38s)  kubelet, m01  Node m01 status is now: NodeHasSufficientDisk
Normal  NodeHasSufficientMemory  3m38s (x6 over 3m38s)  kubelet, m01  Node m01 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    3m38s (x6 over 3m38s)  kubelet, m01  Node m01 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     3m38s (x5 over 3m38s)  kubelet, m01  Node m01 status is now: NodeHasSufficientPID
Name:               vupgrade-20200323t124430.430384955-31589
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=vupgrade-20200323t124430.430384955-31589
kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 23 Mar 2020 12:49:49 -0700
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
HolderIdentity:  <unset>
AcquireTime:     <unset>
RenewTime:       <unset>
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Mon, 23 Mar 2020 12:49:49 -0700   Mon, 23 Mar 2020 12:49:49 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 23 Mar 2020 12:49:49 -0700   Mon, 23 Mar 2020 12:49:49 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 23 Mar 2020 12:49:49 -0700   Mon, 23 Mar 2020 12:49:49 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            False   Mon, 23 Mar 2020 12:49:49 -0700   Mon, 23 Mar 2020 12:49:49 -0700   KubeletNotReady              [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, CSINodeInfo is not yet initialized, missing node capacity for resources: ephemeral-storage]
Addresses:
InternalIP:  172.17.0.5
Hostname:    vupgrade-20200323t124430.430384955-31589
Capacity:
cpu:            4
hugepages-1Gi:  0
hugepages-2Mi:  0
memory:         15404700Ki
pods:           110
Allocatable:
cpu:            4
hugepages-1Gi:  0
hugepages-2Mi:  0
memory:         15302300Ki
pods:           110
System Info:
Machine ID:                 2317bc36fb094d059ff3ec70da0cda2a
System UUID:                3e5e8290-f475-444b-a223-d254e38d2017
Boot ID:                    a602e373-d3f5-4bf6-a793-1f86dc247ed3
Kernel Version:             4.9.0-12-amd64
OS Image:                   Ubuntu 19.10
Operating System:           linux
Architecture:               amd64
Container Runtime Version:  docker://19.3.2
Kubelet Version:            v1.18.0-rc.1
Kube-Proxy Version:         v1.18.0-rc.1
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (1 in total)
Namespace                   Name             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------                   ----             ------------  ----------  ---------------  -------------  ---
kube-system                 kindnet-xd6sm    100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      0s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests   Limits
--------           --------   ------
cpu                100m (2%)  100m (2%)
memory             50Mi (0%)  50Mi (0%)
ephemeral-storage  0 (0%)     0 (0%)
Events:
Type    Reason                   Age                  From                                                  Message
----    ------                   ----                 ----                                                  -------
Normal  Starting                 3m3s                 kube-proxy, vupgrade-20200323t124430.430384955-31589  Starting kube-proxy.
Normal  Starting                 113s                 kubelet, vupgrade-20200323t124430.430384955-31589     Starting kubelet.
Normal  NodeAllocatableEnforced  113s                 kubelet, vupgrade-20200323t124430.430384955-31589     Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  112s (x3 over 113s)  kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    112s (x4 over 113s)  kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     112s (x3 over 113s)  kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientPID
Normal  Starting                 62s                  kubelet, vupgrade-20200323t124430.430384955-31589     Starting kubelet.
Normal  NodeHasSufficientMemory  61s (x8 over 62s)    kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    61s (x8 over 62s)    kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     61s (x7 over 62s)    kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  61s                  kubelet, vupgrade-20200323t124430.430384955-31589     Updated Node Allocatable limit across pods
Normal  Starting                 32s                  kubelet, vupgrade-20200323t124430.430384955-31589     Starting kubelet.
Normal  NodeAllocatableEnforced  31s                  kubelet, vupgrade-20200323t124430.430384955-31589     Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  30s (x8 over 32s)    kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    30s (x8 over 32s)    kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     30s (x7 over 32s)    kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientPID
Normal  Starting                 1s                   kubelet, vupgrade-20200323t124430.430384955-31589     Starting kubelet.
Normal  NodeHasSufficientMemory  0s                   kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    0s                   kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     0s                   kubelet, vupgrade-20200323t124430.430384955-31589     Node vupgrade-20200323t124430.430384955-31589 status is now: NodeHasSufficientPID
helpers.go:221: <<< TestVersionUpgrade FAILED: end of post-mortem logs <<<
helpers.go:160: (dbg) Run:  out/minikube-linux-amd64 delete -p vupgrade-20200323T124430.430384955-31589
helpers.go:160: (dbg) Done: out/minikube-linux-amd64 delete -p vupgrade-20200323T124430.430384955-31589: (2.693726649s)
medyagh commented 4 years ago

I merged the PR for this, but I believe we will still see this error until the next release, because the latest stable release has the broken stop code that was fixed at head.

mansurali901 commented 2 years ago

One of the reasons is that sometimes systemd holds the port a little longer, so when Docker tries to bind it, the bind fails. The best workaround is to stop the minikube cluster, stop Docker, wait at least 30 seconds, then start Docker first and start your minikube cluster afterwards; that should solve the problem (see the sketch below).
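
A minimal sketch of that workaround as shell commands, assuming the default "minikube" profile and a systemd-managed Docker service (adjust the profile name and service manager to your environment):

# stop the minikube cluster first
minikube stop
# stop Docker so it releases the ports it holds
sudo systemctl stop docker
# give the system at least 30 seconds to free the ports
sleep 30
# start Docker again before the cluster
sudo systemctl start docker
# finally start the minikube cluster
minikube start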