Closed: waldiirawan closed this issue 3 years ago.
@waldiirawan I am curious whether the latest binary fixes the problem; it seems like you have a different cgroup ...
https://storage.googleapis.com/minikube/latest/minikube-linux-amd64
https://storage.googleapis.com/minikube-builds/master/minikube-linux-amd64
Hi @waldiirawan, we haven't heard back from you. Do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.
I will close this issue for now, but feel free to reopen it when you are ready to provide more information.
Just in case somebody arrives here, it is worth checking whether AppArmor is enabled:
sudo aa-status
I'm still troubleshooting the issue, but the only Docker host showing a similar error message has AppArmor disabled.
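For anyone comparing hosts, the two signals mentioned in this thread (AppArmor state and the cgroup layout) can be gathered in one go. This is a hedged diagnostic sketch, not something from minikube itself; `aa-status` and a mounted `/sys/fs/cgroup` may not exist on every distribution:

```shell
# Diagnostic sketch: report AppArmor status and the cgroup hierarchy type.
# "cgroup2fs" means a cgroup v2 (unified) mount; "tmpfs" is the legacy v1 layout.
if command -v aa-status >/dev/null 2>&1; then
  # aa-status --enabled exits 0 only when AppArmor is enabled in the kernel
  sudo aa-status --enabled 2>/dev/null && echo "AppArmor: enabled" || echo "AppArmor: disabled or unavailable"
else
  echo "AppArmor: tools not installed"
fi
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
echo "cgroup filesystem: $fstype"
```

Running this on both the failing and the working host should show whether the AppArmor difference observed above correlates with the error.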
Steps to reproduce the issue:
Full output of failed command:

StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16: exit status 125
stdout:
85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a
stderr:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: resulting devices cgroup doesn't match target mode: unknown.
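The stderr above fails while runc is configuring the devices cgroup for the privileged container, and this log comes from kernel 3.10.0 inside an OpenVZ guest, where the cgroup setup can differ from what runc expects. A hedged check (an assumption on my part, not something the maintainers asked for) of whether the host kernel advertises a v1 `devices` controller at all:

```shell
# Sketch: check whether the kernel advertises the v1 "devices" cgroup
# controller, which runc configures for privileged containers.
# /proc/cgroups lists the compiled-in v1 controllers; its absence (or a
# restricted container host such as an OpenVZ guest) is one plausible cause of
# "resulting devices cgroup doesn't match target mode".
if grep -qw devices /proc/cgroups 2>/dev/null; then
  echo "devices cgroup controller: present"
else
  echo "devices cgroup controller: absent"
fi
```

If the controller is absent on the failing host but present on a working one, that would point at the host's kernel/cgroup stack rather than at minikube.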
Full output of Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-02-17 18:04:56.11967023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:17179869184 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:devops.santuybro.io Labels:[] ExperimentalBuild:false ServerVersion:20.10.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.0-docker]] Warnings:}}
I0217 18:04:56.228605 25759 docker.go:147] overlay module found
I0217 18:04:56.228632 25759 global.go:110] docker priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I0217 18:04:56.287179 25759 global.go:110] kvm2 priority: 7, state: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:/bin/virsh domcapabilities --virttype kvm failed:
setlocale: No such file or directory
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory Fix:Follow your Linux distribution instructions for configuring KVM Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I0217 18:04:56.315843 25759 global.go:110] none priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Fix: Doc:}
I0217 18:04:56.316010 25759 global.go:110] podman priority: 2, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0217 18:04:56.316061 25759 global.go:110] virtualbox priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0217 18:04:56.316101 25759 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0217 18:04:56.316133 25759 driver.go:240] not recommending "kvm2" due to health: /bin/virsh domcapabilities --virttype kvm failed:
setlocale: No such file or directory
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
I0217 18:04:56.316224 25759 driver.go:274] Picked: docker
I0217 18:04:56.316254 25759 driver.go:275] Alternatives: []
I0217 18:04:56.316274 25759 driver.go:276] Rejects: [kvm2 none podman virtualbox vmware]
I0217 18:04:56.317815 25759 out.go:119] ✨ Automatically selected the docker driver
✨ Automatically selected the docker driver
I0217 18:04:56.317857 25759 start.go:277] selected driver: docker
I0217 18:04:56.317870 25759 start.go:686] validating driver "docker" against
I0217 18:04:56.317890 25759 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I0217 18:04:56.317968 25759 cli_runner.go:111] Run: docker system info --format "{{json .}}"
I0217 18:04:56.491548 25759 info.go:253] docker info: {ID:KUSE:5EP6:OK54:EFW3:MM2S:UISD:OY52:74R3:ISU7:GNKG:UGRV:GKLY Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:34 SystemTime:2021-02-17 18:04:56.381305575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:17179869184 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:devops.santuybro.io Labels:[] ExperimentalBuild:false ServerVersion:20.10.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: 
Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.0-docker]] Warnings:}}
I0217 18:04:56.491773 25759 start_flags.go:235] no existing cluster config was found, will generate one from the flags
I0217 18:04:56.496131 25759 start_flags.go:253] Using suggested 4000MB memory alloc based on sys=16384MB, container=16384MB
I0217 18:04:56.496429 25759 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I0217 18:04:56.496516 25759 cni.go:74] Creating CNI manager for ""
I0217 18:04:56.496548 25759 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I0217 18:04:56.496572 25759 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] MultiNodeRequested:false}
I0217 18:04:56.501642 25759 out.go:119] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0217 18:04:56.567660 25759 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
I0217 18:04:56.567730 25759 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping pull
I0217 18:04:56.567777 25759 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0217 18:04:56.567841 25759 preload.go:105] Found local preload: /home/santuybro/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4
I0217 18:04:56.567864 25759 cache.go:54] Caching tarball of preloaded images
I0217 18:04:56.567894 25759 preload.go:131] Found /home/santuybro/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0217 18:04:56.567914 25759 cache.go:57] Finished verifying existence of preloaded tar for v1.20.0 on docker
I0217 18:04:56.568330 25759 profile.go:147] Saving config to /home/santuybro/.minikube/profiles/minikube/config.json ...
I0217 18:04:56.568386 25759 lock.go:36] WriteFile acquiring /home/santuybro/.minikube/profiles/minikube/config.json: {Name:mk5218f4c2c1d6107f82db01d1ab3112bcc50b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0217 18:04:56.568707 25759 cache.go:185] Successfully downloaded all kic artifacts
I0217 18:04:56.568766 25759 start.go:314] acquiring machines lock for minikube: {Name:mk1eff1968d43b4b1d0d11a951b6cd70792c90a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0217 18:04:56.568843 25759 start.go:318] acquired machines lock for "minikube" in 52.38µs
I0217 18:04:56.568879 25759 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0217 18:04:56.568964 25759 start.go:127] createHost starting for "" (driver="docker")
I0217 18:04:56.572674 25759 out.go:119] 🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
I0217 18:04:56.572954 25759 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I0217 18:04:56.573014 25759 client.go:165] LocalClient.Create starting
I0217 18:04:56.573064 25759 main.go:119] libmachine: Reading certificate data from /home/santuybro/.minikube/certs/ca.pem
I0217 18:04:56.573117 25759 main.go:119] libmachine: Decoding PEM data...
I0217 18:04:56.573161 25759 main.go:119] libmachine: Parsing certificate...
I0217 18:04:56.573354 25759 main.go:119] libmachine: Reading certificate data from /home/santuybro/.minikube/certs/cert.pem
I0217 18:04:56.573394 25759 main.go:119] libmachine: Decoding PEM data...
I0217 18:04:56.573480 25759 main.go:119] libmachine: Parsing certificate...
I0217 18:04:56.573840 25759 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
W0217 18:04:56.652658 25759 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 1
I0217 18:04:56.652975 25759 network_create.go:235] running [docker network inspect minikube] to gather additional debugging logs...
I0217 18:04:56.652999 25759 cli_runner.go:111] Run: docker network inspect minikube
W0217 18:04:56.721861 25759 cli_runner.go:149] docker network inspect minikube returned with exit code 1
I0217 18:04:56.721948 25759 network_create.go:238] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
minikube start

command used, if not already included:

I0217 18:04:55.940054 25759 out.go:221] Setting OutFile to fd 1 ...
I0217 18:04:55.940279 25759 out.go:273] isatty.IsTerminal(1) = true
I0217 18:04:55.940297 25759 out.go:234] Setting ErrFile to fd 2...
I0217 18:04:55.940350 25759 out.go:273] isatty.IsTerminal(2) = true
I0217 18:04:55.940557 25759 root.go:280] Updating PATH: /home/santuybro/.minikube/bin
I0217 18:04:55.940883 25759 out.go:228] Setting JSON to false
I0217 18:04:55.966642 25759 start.go:104] hostinfo: {"hostname":"devops.santuybro.io","uptime":5134,"bootTime":1613579961,"procs":87,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.9.2009","kernelVersion":"3.10.0","virtualizationSystem":"openvz","virtualizationRole":"guest","hostid":"aa620bc3-593d-4003-8027-7b3470b66d6b"}
I0217 18:04:55.967500 25759 start.go:114] virtualization: openvz guest
I0217 18:04:55.969492 25759 out.go:119] 😄 minikube v1.16.0 on Centos 7.9.2009 (openvz/amd64)
😄 minikube v1.16.0 on Centos 7.9.2009 (openvz/amd64)
I0217 18:04:55.969704 25759 driver.go:303] Setting default libvirt URI to qemu:///system
I0217 18:04:55.969756 25759 global.go:102] Querying for installed drivers using PATH=/home/santuybro/.minikube/bin:/sbin:/bin:/usr/sbin:/usr/bin
I0217 18:04:56.057843 25759 docker.go:117] docker version: linux-20.10.1
I0217 18:04:56.057974 25759 cli_runner.go:111] Run: docker system info --format "{{json .}}"
I0217 18:04:56.228420 25759 info.go:253] docker info: {ID:KUSE:5EP6:OK54:EFW3:MM2S:UISD:OY52:74R3:ISU7:GNKG:UGRV:GKLY Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:
stderr: Error: No such network: minikube
I0217 18:04:56.722045 25759 network_create.go:240] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
stderr
Error: No such network: minikube
/stderr
I0217 18:04:56.722135 25759 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I0217 18:04:56.790450 25759 network_create.go:100] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I0217 18:04:56.790594 25759 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0217 18:04:57.060528 25759 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I0217 18:04:57.060624 25759 cli_runner.go:111] Run: docker ps -a --format {{.Names}}
I0217 18:04:57.128483 25759 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0217 18:04:57.195104 25759 oci.go:102] Successfully created a docker volume minikube
I0217 18:04:57.195202 25759 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -d /var/lib
I0217 18:04:58.589311 25759 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -d /var/lib: (1.394039545s)
I0217 18:04:58.589355 25759 oci.go:106] Successfully prepared a docker volume minikube
I0217 18:04:58.589459 25759 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0217 18:04:58.589468 25759 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'"
I0217 18:04:58.589522 25759 preload.go:105] Found local preload: /home/santuybro/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4
I0217 18:04:58.589532 25759 kic.go:159] Starting extracting preloaded images to volume ...
I0217 18:04:58.589583 25759 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/santuybro/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -I lz4 -xf /preloaded.tar -C /extractDir
I0217 18:04:58.767163 25759 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16
W0217 18:05:00.740479 25759 cli_runner.go:149] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 returned with exit code 125
I0217 18:05:00.740539 25759 cli_runner.go:155] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16: (1.973220842s)
I0217 18:05:00.740669 25759 client.go:168] LocalClient.Create took 4.167633411s
I0217 18:05:02.741111 25759 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0217 18:05:02.741252 25759 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0217 18:05:02.869743 25759 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0217 18:05:02.870010 25759 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr: Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil I0217 18:05:03.146576 25759 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0217 18:05:03.274597 25759 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1 I0217 18:05:03.274808 25759 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil I0217 18:05:03.815368 25759 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0217 18:05:03.896027 25759 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1 I0217 18:05:03.896203 25759 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil I0217 18:05:04.551557 25759 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0217 18:05:04.630631 25759 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1 I0217 18:05:04.630811 25759 retry.go:31] will retry after 791.196345ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil I0217 18:05:05.422316 25759 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0217 18:05:05.532181 25759 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1 W0217 18:05:05.532322 25759 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
W0217 18:05:05.532381 25759 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil I0217 18:05:05.532472 25759 start.go:130] duration metric: createHost completed in 8.963489834s I0217 18:05:05.532490 25759 start.go:81] releasing machines lock for "minikube", held for 8.963621462s W0217 18:05:05.532527 25759 start.go:377] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16: exit status 125 stdout: 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a
stderr: docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: resulting devices cgroup doesn't match target mode: unknown. I0217 18:05:05.533045 25759 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} W0217 18:05:05.605992 25759 start.go:382] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. W0217 18:05:05.606275 25759 out.go:181] 🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16: exit status 125 stdout: 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a
stderr: docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: resulting devices cgroup doesn't match target mode: unknown.
🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16: exit status 125 stdout: 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a
stderr: docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: resulting devices cgroup doesn't match target mode: unknown.
I0217 18:05:05.606390 25759 start.go:392] Will try again in 5 seconds ...
I0217 18:05:10.606637 25759 start.go:314] acquiring machines lock for minikube: {Name:mk1eff1968d43b4b1d0d11a951b6cd70792c90a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0217 18:05:10.606872 25759 start.go:318] acquired machines lock for "minikube" in 114.176µs
I0217 18:05:10.606930 25759 start.go:94] Skipping create...Using existing machine configuration
I0217 18:05:10.606969 25759 fix.go:54] fixHost starting:
I0217 18:05:10.607322 25759 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0217 18:05:10.677801 25759 fix.go:107] recreateIfNeeded on minikube: state= err=
I0217 18:05:10.677902 25759 fix.go:112] machineExists: false. err=machine does not exist
I0217 18:05:10.680796 25759 out.go:119] 🤷 docker "minikube" container is missing, will recreate.
🤷 docker "minikube" container is missing, will recreate.
I0217 18:05:10.680873 25759 delete.go:124] DEMOLISHING minikube ...
I0217 18:05:10.680952 25759 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0217 18:05:10.762703 25759 stop.go:79] host is in state
I0217 18:05:10.762810 25759 main.go:119] libmachine: Stopping "minikube"...
I0217 18:05:10.762876 25759 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0217 18:05:10.829508 25759 kic_runner.go:93] Run: systemctl --version
I0217 18:05:10.829532 25759 kic_runner.go:114] Args: [docker exec --privileged minikube systemctl --version]
I0217 18:05:10.903085 25759 kic_runner.go:93] Run: sudo service kubelet stop
I0217 18:05:10.903149 25759 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
I0217 18:05:10.978665 25759 openrc.go:141] stop output:
stderr
Error response from daemon: Container 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a is not running
/stderr
W0217 18:05:10.978740 25759 kic.go:417] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1 stdout:
stderr: Error response from daemon: Container 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a is not running
I0217 18:05:10.978821 25759 kic_runner.go:93] Run: sudo service kubelet stop
I0217 18:05:10.978845 25759 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
I0217 18:05:11.051879 25759 openrc.go:141] stop output:
stderr
Error response from daemon: Container 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a is not running
/stderr
W0217 18:05:11.051910 25759 kic.go:419] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1 stdout:
stderr: Error response from daemon: Container 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a is not running
I0217 18:05:11.051990 25759 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
I0217 18:05:11.052002 25759 kic_runner.go:114] Args: [docker exec --privileged minikube docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
I0217 18:05:11.123691 25759 kic.go:430] unable list containers : docker: docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1 stdout:
stderr: Error response from daemon: Container 85adde0f1c5c69361a795b6c4a31c5fa476d06bcc1a4319f8cfbadb756b61e8a is not running
I0217 18:05:11.123777 25759 kic.go:440] successfully stopped kubernetes!
I0217 18:05:11.123835 25759 kic_runner.go:93] Run: pgrep kube-apiserver
I0217 18:05:11.123854 25759 kic_runner.go:114] Args: [docker exec --privileged minikube pgrep kube-apiserver]
I0217 18:05:11.154387 25759 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/santuybro/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -I lz4 -xf /preloaded.tar -C /extractDir: (12.56473864s)
I0217 18:05:11.154511 25759 kic.go:168] duration metric: took 12.564976 seconds to extract preloaded images to volume
My lscpu:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                0
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
Stepping:              7
CPU MHz:               2699.853
BogoMIPS:              4200.00
Virtualization:        VT-x
Hypervisor vendor:     Parallels
Virtualization type:   container
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu cpuid_faulting pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
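Since the maintainers above suspect a differing cgroup setup and suggest checking AppArmor with `sudo aa-status`, here is a minimal diagnostic sketch (my own, not part of minikube) that reads the same information from `/sys` without needing root, which may help compare the failing host against a working one:

```shell
#!/bin/sh
# Sketch: report cgroup hierarchy mode and AppArmor state on a Linux host.
# Assumption: /sys is mounted; commands here are plain POSIX sh + coreutils.

# cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1 hierarchy.
# "resulting devices cgroup doesn't match target mode" errors are often
# tied to a runc/docker version that mishandles the host's cgroup mode.
echo "cgroup fs: $(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)"

# AppArmor: Y/N if the kernel module parameter is exposed, else absent.
if [ -r /sys/module/apparmor/parameters/enabled ]; then
    echo "apparmor: $(cat /sys/module/apparmor/parameters/enabled)"
else
    echo "apparmor: not present"
fi
```

Running this on both the failing and a working host and diffing the two lines should show quickly whether the hosts differ in cgroup mode or AppArmor state, as the comments above suggest.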