Closed Lavie526 closed 3 years ago
It seems minikube starts successfully, however `kubectl get nodes` shows the node status as "NotReady", and after applying a deployment the pod status is always Pending. Any suggestions?
This particular state error is interesting:
Ready False Wed, 22 Jul 2020 00:48:46 +0000 Wed, 22 Jul 2020 00:47:31 +0000 KubeletNotReady container runtime status check may not have completed yet
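That Ready condition can be pulled straight off the node. A minimal sketch (the live command in the comment assumes `kubectl` is pointed at the minikube cluster; here the parsing is demonstrated on a captured sample of the relevant output):

```shell
# Show why a node reports NotReady by extracting its Conditions table.
# Live command (assumes a reachable cluster):
#   kubectl describe node minikube | sed -n '/^Conditions:/,/^Addresses:/p'
# Demonstrated on a captured sample so the snippet is self-contained:
sample='Conditions:
  Type    Status  Reason           Message
  Ready   False   KubeletNotReady  container runtime status check may not have completed yet
Addresses:'
echo "$sample" | awk '$1 == "Ready" {print $2, $3}'
```

The Status/Reason pair ("False KubeletNotReady") is usually enough to decide whether the problem is the container runtime, networking, or the kubelet itself.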
As well as these (possible red herring):
[Jul21 23:37] systemd-fstab-generator[38578]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
this points us toward the root cause of why kubelet can't schedule:
Jul 22 00:48:53 minikube kubelet[5678]: E0722 00:48:53.923204 5678 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
and finally the biggest red flag:
Jul 22 00:48:54 minikube kubelet[5678]: F0722 00:48:54.225979 5678 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 22 00:48:54 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jul 22 00:48:54 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
I think there may be something that we need to do with the Docker configuration on your host to make it compatible with running Kubernetes.
Can you try minikube start --force-systemd? Based on https://github.com/kubernetes/kubernetes/issues/43856 I think it may help.
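For reference, what --force-systemd does is switch the container runtime to the systemd cgroup driver. The equivalent manual Docker-side change (a sketch of the kubeadm-documented approach; the `exec-opts` entry is the part that matters here, the other keys are common companions) is a daemon config like:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```

written to /etc/docker/daemon.json, followed by `sudo systemctl restart docker`. Mixing the cgroupfs and systemd drivers between Docker and the kubelet is a known source of cgroup errors like the one above.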
@tstromberg there is no --force-systemd option for the start command:
[jiekong@den03fyu ~]$ minikube start --force-systemd
Error: unknown flag: --force-systemd
See 'minikube start --help' for usage.
OK. Please upgrade to the latest version of minikube then. This problem was either fixed, or this flag should fix it:
% minikube start --force-systemd
minikube v1.12.1 on Darwin 10.15.5
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Updating the running docker "minikube" container ...
I got confused because your bug report has both output from minikube v1.12.0 and v1.9.1.
Yes, after upgrading I could run minikube start --force-systemd, however it still failed. I previously installed v1.9.1 and also v0.35.0 in my VM, and it worked before. After trying to install the new version, it no longer works. Could it be due to incomplete cleanup? I ran minikube delete --all before installing the new version and before each minikube start. I also deleted both the .minikube and .kube folders. Is there anything else I need to do? Are there any detailed steps, especially for the Docker setup and configuration, before running minikube start with the docker driver?
@tstromberg Any suggestions?
What sort of failure are you seeing with the --force-systemd flag specified?
Hey @Lavie526 are you still seeing this issue?
On your host, do you mind sharing the output of:
grep pids /proc/cgroups
I suspect it may be missing.
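The reason for asking: the fatal kubelet error above was a missing pids cgroup subsystem. A host-side sketch that checks all the controllers the kubelet typically needs (the controller list is an assumption based on the error message and common kubelet requirements):

```shell
# Flag any cgroup controller that the kernel does not expose in
# /proc/cgroups (column 1 of that file is the subsystem name).
for subsys in cpu cpuacct cpuset memory pids; do
  if awk -v s="$subsys" '$1 == s {found=1} END {exit !found}' /proc/cgroups; then
    echo "$subsys: present"
  else
    echo "$subsys: MISSING"
  fi
done
```

Any "MISSING" line here would match the "failed to find subsystem mount for required subsystem" failure, and would point at the host kernel/cgroup setup rather than at minikube itself.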
Hi @Lavie526 , I haven't heard back from you, I wonder if you still have this issue? Regrettably, there isn't enough information in this issue to make it actionable, and a long enough duration has passed, so this issue is likely difficult to replicate.
I will close this issue for now but please feel free to reopen whenever you feel ready to provide more information.
Steps to reproduce the issue:
1. minikube start --vm-driver=docker
3. kubectl get nodes

NAME       STATUS     ROLES    AGE     VERSION
minikube   NotReady   master   2m35s   v1.18.3
Full output of failed command:

[jiekong@den03fyu ~]$ minikube start --driver=docker --alsologtostderr
I0721 17:46:33.750189 37623 start.go:261] hostinfo: {"hostname":"den03fyu","uptime":1096853,"bootTime":1594281940,"procs":385,"os":"linux","platform":"oracle","platformFamily":"rhel","platformVersion":"7.4","kernelVersion":"4.1.12-124.39.5.1.el7uek.x86_64","virtualizationSystem":"xen","virtualizationRole":"guest","hostid":"502e3f0d-c118-48cd-ad65-83be8d0cb82f"}
I0721 17:46:33.751105 37623 start.go:271] virtualization: xen guest
minikube v1.9.1 on Oracle 7.4 (xen/amd64)
  KUBECONFIG=/scratch/jiekong/.kube/config
  MINIKUBE_HOME=/scratch/jiekong
I0721 17:46:33.753667 37623 driver.go:246] Setting default libvirt URI to qemu:///system
Using the docker driver based on user configuration
I0721 17:46:33.870056 37623 start.go:309] selected driver: docker
I0721 17:46:33.870105 37623 start.go:655] validating driver "docker" against
I0721 17:46:33.870126 37623 start.go:661] status for docker: {Installed:true Healthy:true Error: Fix: Doc:}
I0721 17:46:33.870159 37623 start.go:1098] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0721 17:46:33.978964 37623 start.go:1003] Using suggested 14600MB memory alloc based on sys=58702MB, container=58702MB
Starting control plane node m01 in cluster minikube
Pulling base image ...
I0721 17:46:33.980817 37623 cache.go:104] Beginning downloading kic artifacts
I0721 17:46:33.980857 37623 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0721 17:46:33.980898 37623 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0721 17:46:33.980927 37623 preload.go:97] Found local preload: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0721 17:46:33.981062 37623 cache.go:46] Caching tarball of preloaded images
I0721 17:46:33.981093 37623 preload.go:123] Found /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0721 17:46:33.981109 37623 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0721 17:46:33.981029 37623 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0721 17:46:33.981392 37623 profile.go:138] Saving config to /scratch/jiekong/.minikube/profiles/minikube/config.json ...
I0721 17:46:33.981530 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/config.json: {Name:mkeb6d736586eadd60342788b13e7e9947272373 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:34.075405 37623 image.go:90] Found gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 in local docker daemon, skipping pull
I0721 17:46:34.075649 37623 cache.go:117] Successfully downloaded all kic artifacts
I0721 17:46:34.075806 37623 start.go:260] acquiring machines lock for minikube: {Name:mkc0391c2630d5de37a791bd924e47ce04943c1a Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0721 17:46:34.076082 37623 start.go:264] acquired machines lock for "minikube" in 137.749µs
I0721 17:46:34.076221 37623 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:14600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/ HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/ NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0721 17:46:34.076465 37623 start.go:107] createHost starting for "m01" (driver="docker")
Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=14600MB (58702MB available) ...
I0721 17:46:34.193858 37623 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0721 17:46:34.193921 37623 client.go:169] LocalClient.Create starting
I0721 17:46:34.194024 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/ca.pem
I0721 17:46:34.194088 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:34.194121 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:34.194304 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/cert.pem
I0721 17:46:34.194356 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:34.194380 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:34.194900 37623 oci.go:245] executing with [docker ps -a --format {{.Names}}] timeout: 15s
I0721 17:46:34.252062 37623 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0721 17:46:34.307297 37623 oci.go:128] Successfully created a docker volume minikube
I0721 17:46:35.082249 37623 oci.go:245] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 15s
I0721 17:46:35.148292 37623 oci.go:160] the created container "minikube" has a running status.
I0721 17:46:35.148638 37623 kic.go:142] Creating ssh key for kic: /scratch/jiekong/.minikube/machines/minikube/id_rsa...
I0721 17:46:35.704764 37623 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0721 17:46:35.763348 37623 client.go:172] LocalClient.Create took 1.569388902s
I0721 17:46:37.763915 37623 start.go:110] createHost completed in 3.687316658s
I0721 17:46:37.764235 37623 start.go:77] releasing machines lock for "minikube", held for 3.688032819s
StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: apply authorized_keys file ownership, output
stderr:
Error response from daemon: Container 493c72f4fe54225f2fd2c660e11937cd756828923103f70312537e30d9035daf is not running
: chown docker:docker /home/docker/.ssh/authorized_keys: exit status 1
stdout:
stderr: Error response from daemon: Container 493c72f4fe54225f2fd2c660e11937cd756828923103f70312537e30d9035daf is not running
I0721 17:46:37.765970 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
Deleting "minikube" in docker ...
I0721 17:46:42.967743 37623 start.go:260] acquiring machines lock for minikube: {Name:mkc0391c2630d5de37a791bd924e47ce04943c1a Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0721 17:46:42.968268 37623 start.go:264] acquired machines lock for "minikube" in 192.263µs
I0721 17:46:42.968485 37623 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:14600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/ HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/ NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0721 17:46:42.968803 37623 start.go:107] createHost starting for "m01" (driver="docker")
Creating Kubernetes in docker container with (CPUs=2) (16 available), Memory=14600MB (58702MB available) ...
I0721 17:46:43.088972 37623 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0721 17:46:43.089076 37623 client.go:169] LocalClient.Create starting
I0721 17:46:43.089170 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/ca.pem
I0721 17:46:43.089240 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:43.089287 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:43.089488 37623 main.go:110] libmachine: Reading certificate data from /scratch/jiekong/.minikube/certs/cert.pem
I0721 17:46:43.089573 37623 main.go:110] libmachine: Decoding PEM data...
I0721 17:46:43.089606 37623 main.go:110] libmachine: Parsing certificate...
I0721 17:46:43.089901 37623 oci.go:245] executing with [docker ps -a --format {{.Names}}] timeout: 15s
I0721 17:46:43.143202 37623 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0721 17:46:43.195154 37623 oci.go:128] Successfully created a docker volume minikube
I0721 17:46:43.841737 37623 oci.go:245] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 15s
I0721 17:46:43.913242 37623 oci.go:160] the created container "minikube" has a running status.
I0721 17:46:43.913428 37623 kic.go:142] Creating ssh key for kic: /scratch/jiekong/.minikube/machines/minikube/id_rsa...
I0721 17:46:44.507946 37623 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0721 17:46:44.716634 37623 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0721 17:46:44.716990 37623 preload.go:97] Found local preload: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0721 17:46:44.717218 37623 kic.go:128] Starting extracting preloaded images to volume
I0721 17:46:44.717486 37623 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0721 17:46:50.046119 37623 kic.go:133] Took 5.328921 seconds to extract preloaded images to volume
I0721 17:46:50.046360 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:46:50.104844 37623 machine.go:86] provisioning docker machine ...
I0721 17:46:50.105023 37623 ubuntu.go:166] provisioning hostname "minikube"
I0721 17:46:50.161118 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:50.161526 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:50.161719 37623 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0721 17:46:50.297031 37623 main.go:110] libmachine: SSH cmd err, output: : minikube
I0721 17:46:50.354730 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:50.355279 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:50.355503 37623 main.go:110] libmachine: About to run SSH command:
I0721 17:46:50.466315 37623 main.go:110] libmachine: SSH cmd err, output::
I0721 17:46:50.466476 37623 ubuntu.go:172] set auth options {CertDir:/scratch/jiekong/.minikube CaCertPath:/scratch/jiekong/.minikube/certs/ca.pem CaPrivateKeyPath:/scratch/jiekong/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/scratch/jiekong/.minikube/machines/server.pem ServerKeyPath:/scratch/jiekong/.minikube/machines/server-key.pem ClientKeyPath:/scratch/jiekong/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/scratch/jiekong/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/scratch/jiekong/.minikube}
I0721 17:46:50.466662 37623 ubuntu.go:174] setting up certificates
I0721 17:46:50.466932 37623 provision.go:83] configureAuth start
I0721 17:46:50.543308 37623 provision.go:132] copyHostCerts
I0721 17:46:50.544049 37623 provision.go:106] generating server cert: /scratch/jiekong/.minikube/machines/server.pem ca-key=/scratch/jiekong/.minikube/certs/ca.pem private-key=/scratch/jiekong/.minikube/certs/ca-key.pem org=jiekong.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0721 17:46:50.988368 37623 provision.go:160] copyRemoteCerts
I0721 17:46:51.081778 37623 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0721 17:46:51.136987 37623 ssh_runner.go:155] Checked if /etc/docker/ca.pem exists, but got error: Process exited with status 1
I0721 17:46:51.137497 37623 ssh_runner.go:174] Transferring 1038 bytes to /etc/docker/ca.pem
I0721 17:46:51.138772 37623 ssh_runner.go:193] ca.pem: copied 1038 bytes
I0721 17:46:51.163148 37623 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: Process exited with status 1
I0721 17:46:51.163755 37623 ssh_runner.go:174] Transferring 1123 bytes to /etc/docker/server.pem
I0721 17:46:51.164840 37623 ssh_runner.go:193] server.pem: copied 1123 bytes
I0721 17:46:51.189267 37623 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: Process exited with status 1
I0721 17:46:51.189752 37623 ssh_runner.go:174] Transferring 1679 bytes to /etc/docker/server-key.pem
I0721 17:46:51.190490 37623 ssh_runner.go:193] server-key.pem: copied 1679 bytes
I0721 17:46:51.212222 37623 provision.go:86] configureAuth took 745.1519ms
I0721 17:46:51.212350 37623 ubuntu.go:190] setting minikube options for container-runtime
I0721 17:46:51.270024 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:51.270463 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:51.270624 37623 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0721 17:46:51.384286 37623 main.go:110] libmachine: SSH cmd err, output: : overlay
I0721 17:46:51.384328 37623 ubuntu.go:71] root file system type: overlay
I0721 17:46:51.384531 37623 provision.go:295] Updating docker unit: /lib/systemd/system/docker.service ...
I0721 17:46:51.443643 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:51.444018 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:51.444235 37623 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2"
Environment="HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/"
Environment="HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/"
Environment="NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0721 17:46:51.567443 37623 main.go:110] libmachine: SSH cmd err, output:: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
Environment=NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
Environment=HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
Environment=HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
Environment=NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0721 17:46:51.629055 37623 main.go:110] libmachine: Using SSH client type: native
I0721 17:46:51.629298 37623 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 9023 }
I0721 17:46:51.629380 37623 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
I0721 17:46:52.145149 37623 main.go:110] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2020-07-22 00:46:51.564843610 +0000
@@ -8,24 +8,26 @@
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+Environment=NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
+Environment=HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
+Environment=HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
+Environment=NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +35,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
I0721 17:46:52.145225 37623 machine.go:89] provisioned docker machine in 2.040237341s
I0721 17:46:52.145239 37623 client.go:172] LocalClient.Create took 9.056130529s
I0721 17:46:52.145253 37623 start.go:148] libmachine.API.Create for "minikube" took 9.056292084s
I0721 17:46:52.145263 37623 start.go:189] post-start starting for "minikube" (driver="docker")
I0721 17:46:52.145281 37623 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0721 17:46:52.145298 37623 start.go:234] Returning KICRunner for "docker" driver
I0721 17:46:52.145447 37623 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0721 17:46:52.339777 37623 filesync.go:118] Scanning /scratch/jiekong/.minikube/addons for local assets ...
I0721 17:46:52.340130 37623 filesync.go:118] Scanning /scratch/jiekong/.minikube/files for local assets ...
I0721 17:46:52.340312 37623 start.go:192] post-start completed in 195.03773ms
I0721 17:46:52.341092 37623 start.go:110] createHost completed in 9.37208076s
I0721 17:46:52.341204 37623 start.go:77] releasing machines lock for "minikube", held for 9.37274675s
Found network options:
  NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2
  http_proxy=http://www-proxy-brmdc.us.*.com:80/
  https_proxy=http://www-proxy-brmdc.us.*.com:80/
  no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3
I0721 17:46:52.409980 37623 profile.go:138] Saving config to /scratch/jiekong/.minikube/profiles/minikube/config.json ...
I0721 17:46:52.410241 37623 kic_runner.go:91] Run: curl -sS -m 2 https://k8s.gcr.io/ I0721 17:46:52.410683 37623 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd I0721 17:46:52.738320 37623 kic_runner.go:91] Run: sudo systemctl stop -f containerd I0721 17:46:53.038502 37623 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd I0721 17:46:53.291921 37623 kic_runner.go:91] Run: sudo systemctl is-active --quiet service crio I0721 17:46:53.533211 37623 kic_runner.go:91] Run: sudo systemctl start docker W0721 17:46:53.827858 37623 start.go:430] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: exit status 7 stdout:
stderr: curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out โ This container is having trouble accessing https://k8s.gcr.io ๐ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0721 17:46:54.194748 37623 kic_runner.go:91] Run: docker version --format {{.Server.Version}} ๐ณ Preparing Kubernetes v1.18.0 on Docker 19.03.2 ... โช env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 โช env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/ โช env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/ โช env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3 โช kubeadm.pod-network-cidr=10.244.0.0/16 I0721 17:46:54.519685 37623 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker I0721 17:46:54.519732 37623 preload.go:97] Found local preload: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 I0721 17:46:54.519860 37623 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}} I0721 17:46:54.832933 37623 docker.go:367] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 kubernetesui/dashboard:v2.0.0-rc6 k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7 kindest/kindnetd:0.5.3 k8s.gcr.io/etcd:3.4.3-0 kubernetesui/metrics-scraper:v1.0.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0721 17:46:54.833353 37623 docker.go:305] Images already preloaded, skipping extraction
I0721 17:46:54.833561 37623 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0721 17:46:55.157820 37623 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0721 17:46:55.159086 37623 cache_images.go:69] Images are preloaded, skipping loading
I0721 17:46:55.159243 37623 kubeadm.go:125] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:172.17.0.2}
I0721 17:46:55.159489 37623 kubeadm.go:129] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
    - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 172.17.0.2:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: 172.17.0.2:10249
I0721 17:46:55.160738 37623 kic_runner.go:91] Run: docker info --format {{.CgroupDriver}}
I0721 17:46:55.456338 37623 kubeadm.go:649] kubelet
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
config: {KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:}
I0721 17:46:55.457267 37623 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0721 17:46:55.710905 37623 binaries.go:42] Found k8s binaries, skipping transfer
I0721 17:46:55.711280 37623 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0721 17:46:56.893023 37623 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0721 17:46:57.099916 37623 kic_runner.go:91] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0721 17:46:57.452146 37623 certs.go:51] Setting up /scratch/jiekong/.minikube/profiles/minikube for IP: 172.17.0.2
I0721 17:46:57.452676 37623 certs.go:169] skipping minikubeCA CA generation: /scratch/jiekong/.minikube/ca.key
I0721 17:46:57.452995 37623 certs.go:169] skipping proxyClientCA CA generation: /scratch/jiekong/.minikube/proxy-client-ca.key
I0721 17:46:57.453238 37623 certs.go:267] generating minikube-user signed cert: /scratch/jiekong/.minikube/profiles/minikube/client.key
I0721 17:46:57.453386 37623 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/client.crt with IP's: []
I0721 17:46:57.613530 37623 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/client.crt ...
I0721 17:46:57.613626 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.crt: {Name:mk102f7d86706185740d9bc9a57fc1d55716aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
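Note the `--cgroup-driver=cgroupfs` flag in the kubelet ExecStart line above. The fix suggested later in this thread (`minikube start --force-systemd`) works by switching the node's Docker daemon to the systemd cgroup driver so that kubelet and Docker agree. As a hedged sketch only (the file path and the need for the change on any given host are assumptions, not something this log confirms), the equivalent manual Docker configuration is an `/etc/docker/daemon.json` entry like:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After restarting Docker, `docker info --format '{{.CgroupDriver}}'` should report `systemd`; whichever driver Docker reports is the one kubelet's `--cgroup-driver` must match.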
I0721 17:46:57.613862 37623 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/client.key ...
I0721 17:46:57.613895 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.key: {Name:mkef0a0f26fc07209d23f79940d16c45455b63f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.614058 37623 certs.go:267] generating minikube signed cert: /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411
I0721 17:46:57.614102 37623 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411 with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0721 17:46:57.850254 37623 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411 ...
I0721 17:46:57.850325 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411: {Name:mk723c191d10c2ebe7f83ef10c6921ca6c302446 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.850664 37623 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411 ...
I0721 17:46:57.850695 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411: {Name:mk68405f7f632b1f5980112bc4deb27222ae4de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:57.850813 37623 certs.go:278] copying /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.eaa33411 -> /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt
I0721 17:46:57.850931 37623 certs.go:282] copying /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.eaa33411 -> /scratch/jiekong/.minikube/profiles/minikube/apiserver.key
I0721 17:46:57.851071 37623 certs.go:267] generating aggregator signed cert: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key
I0721 17:46:57.851097 37623 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0721 17:46:58.087194 37623 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt ...
I0721 17:46:58.087273 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd86cf3f7172f909cc9174e9befa523ad3f3568 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:58.087644 37623 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key ...
I0721 17:46:58.087683 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key: {Name:mk86f427bfbc5f46a12e1a6ff48f5514472dcc9b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:46:58.088100 37623 certs.go:330] found cert: ca-key.pem (1679 bytes)
I0721 17:46:58.088214 37623 certs.go:330] found cert: ca.pem (1038 bytes)
I0721 17:46:58.088271 37623 certs.go:330] found cert: cert.pem (1078 bytes)
I0721 17:46:58.088344 37623 certs.go:330] found cert: key.pem (1679 bytes)
I0721 17:46:58.089648 37623 certs.go:120] copying: /var/lib/minikube/certs/apiserver.crt
I0721 17:46:58.421966 37623 certs.go:120] copying: /var/lib/minikube/certs/apiserver.key
I0721 17:46:58.691424 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.crt
I0721 17:46:59.005729 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.key
I0721 17:46:59.308777 37623 certs.go:120] copying: /var/lib/minikube/certs/ca.crt
I0721 17:46:59.607846 37623 certs.go:120] copying: /var/lib/minikube/certs/ca.key
I0721 17:46:59.871866 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.crt
I0721 17:47:00.150709 37623 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.key
I0721 17:47:00.446862 37623 certs.go:120] copying: /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:00.730471 37623 certs.go:120] copying: /var/lib/minikube/kubeconfig
I0721 17:47:01.036943 37623 kic_runner.go:91] Run: openssl version
I0721 17:47:01.235432 37623 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0721 17:47:01.506692 37623 kic_runner.go:91] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:01.750180 37623 certs.go:370] hashing: -rw-r--r-- 1 root root 1066 Jul 21 09:00 /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:01.750686 37623 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0721 17:47:02.050692 37623 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0721 17:47:02.330389 37623 kubeadm.go:278] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:14600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.2 HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/ HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/ NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]}
I0721 17:47:02.330974 37623 kic_runner.go:91] Run: docker ps --filter status=paused --filter=name=k8s.(kube-system) --format={{.ID}}
I0721 17:47:02.648189 37623 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0721 17:47:02.914215 37623 kic_runner.go:91] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0721 17:47:03.167207 37623 kubeadm.go:214] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0721 17:47:03.167722 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0721 17:47:03.439826 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0721 17:47:03.709049 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0721 17:47:03.968721 37623 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0721 17:47:04.226758 37623 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0721 17:47:25.204124 37623 kic_runner.go:118] Done: [docker exec --privileged minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: (20.977117049s)
I0721 17:47:25.204593 37623 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create --kubeconfig=/var/lib/minikube/kubeconfig -f -
I0721 17:47:25.806501 37623 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl label nodes minikube.k8s.io/version=v1.9.1 minikube.k8s.io/commit=d8747aec7ebf8332ddae276d5f8fb42d3152b5a1 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_07_21T17_47_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0721 17:47:26.187492 37623 kic_runner.go:91] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0721 17:47:26.452060 37623 ops.go:35] apiserver oom_adj: -16
I0721 17:47:26.452480 37623 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0721 17:47:26.807505 37623 kubeadm.go:772] duration metric: took 355.179157ms to wait for elevateKubeSystemPrivileges.
I0721 17:47:26.807798 37623 kubeadm.go:280] StartCluster complete in 24.477425583s
I0721 17:47:26.807972 37623 settings.go:123] acquiring lock: {Name:mk6f220c874ab31ad6cc0cf9a6c90f7ab17dd518 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:47:26.808240 37623 settings.go:131] Updating kubeconfig: /scratch/jiekong/.kube/config
I0721 17:47:26.809659 37623 lock.go:35] WriteFile acquiring /scratch/jiekong/.kube/config: {Name:mk262b9661e6e96133150ac3387d626503976a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0721 17:47:26.810084 37623 addons.go:280] enableAddons start: toEnable=map[], additional=[]
🌟 Enabling addons: default-storageclass, storage-provisioner
I0721 17:47:26.813103 37623 addons.go:45] Setting default-storageclass=true in profile "minikube"
I0721 17:47:26.813258 37623 addons.go:230] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0721 17:47:26.816596 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:47:26.892845 37623 addons.go:104] Setting addon default-storageclass=true in "minikube"
W0721 17:47:26.893250 37623 addons.go:119] addon default-storageclass should already be in state true
I0721 17:47:26.893406 37623 host.go:65] Checking if "minikube" exists ...
I0721 17:47:26.895118 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:47:26.957651 37623 addons.go:197] installing /etc/kubernetes/addons/storageclass.yaml
I0721 17:47:27.249982 37623 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0721 17:47:27.633954 37623 addons.go:70] Writing out "minikube" config to set default-storageclass=true...
I0721 17:47:27.634580 37623 addons.go:45] Setting storage-provisioner=true in profile "minikube"
I0721 17:47:27.634938 37623 addons.go:104] Setting addon storage-provisioner=true in "minikube"
W0721 17:47:27.635181 37623 addons.go:119] addon storage-provisioner should already be in state true
I0721 17:47:27.635285 37623 host.go:65] Checking if "minikube" exists ...
I0721 17:47:27.635866 37623 oci.go:245] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 15s
I0721 17:47:27.697770 37623 addons.go:197] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0721 17:47:28.018139 37623 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0721 17:47:28.445751 37623 addons.go:70] Writing out "minikube" config to set storage-provisioner=true...
I0721 17:47:28.446231 37623 addons.go:282] enableAddons completed in 1.636145178s
I0721 17:47:28.446437 37623 kverify.go:52] waiting for apiserver process to appear ...
I0721 17:47:28.446666 37623 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0721 17:47:28.692657 37623 kverify.go:72] duration metric: took 246.213612ms to wait for apiserver process to appear ...
I0721 17:47:28.695462 37623 kverify.go:187] waiting for apiserver healthz status ...
I0721 17:47:28.695686 37623 kverify.go:298] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0721 17:47:28.703982 37623 kverify.go:240] control plane version: v1.18.0
I0721 17:47:28.704048 37623 kverify.go:230] duration metric: took 8.369999ms to wait for apiserver health ...
I0721 17:47:28.704067 37623 kverify.go:150] waiting for kube-system pods to appear ...
I0721 17:47:28.714575 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:28.714636 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:29.217496 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:29.217621 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:29.717842 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:29.717929 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:30.218799 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:30.219228 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:30.717984 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:30.718085 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:31.217780 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:31.218100 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:31.720286 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:31.720347 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:32.219323 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:32.219404 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:32.718483 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:32.718754 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:33.217975 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:33.218198 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:33.725091 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:33.725322 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:34.218670 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:34.218978 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:34.718344 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:34.718396 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:35.218070 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:35.218270 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:35.717874 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:35.717912 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:36.217797 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:36.217836 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:36.718737 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:36.718970 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:37.217375 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:37.217605 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:37.719767 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:37.719814 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:38.217994 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:38.218255 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:38.717472 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:38.717714 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:39.218353 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:39.218646 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:39.718128 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:39.718383 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:40.217931 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:40.218158 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:40.718948 37623 kverify.go:168] 1 kube-system pods found
I0721 17:47:40.719036 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220322 37623 kverify.go:168] 5 kube-system pods found
I0721 17:47:41.220417 37623 kverify.go:170] "coredns-66bff467f8-pxvdv" [979340b0-81c5-4619-979a-d35ecc7076f8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220447 37623 kverify.go:170] "coredns-66bff467f8-xx8pk" [caed6688-147c-448c-9cce-847f585bfb9b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220474 37623 kverify.go:170] "kindnet-gw9vd" [6fdae753-79bf-4de7-b041-883386a80c8b] Pending
I0721 17:47:41.220497 37623 kverify.go:170] "kube-proxy-pvzpl" [5b3ada32-95db-4d44-b556-0ad3bb486004] Pending
I0721 17:47:41.220563 37623 kverify.go:170] "storage-provisioner" [5f8f885c-29c8-4168-812a-b32e6f8b4b4a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0721 17:47:41.220595 37623 kverify.go:181] duration metric: took 12.516496038s to wait for pod list to return data ...
🏄 Done! kubectl is now configured to use "minikube"
I0721 17:47:41.289129 37623 start.go:453] kubectl: 1.18.6, cluster: 1.18.0 (minor skew: 0)
Full output of minikube start command used, if not already included.

Optional: Full output of minikube logs command: