kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube - Red Hat 7 - old Docker: Template parsing error: template: :1: unexpected "=" in operand #10333

Closed. AlexDag closed this issue 3 years ago.

AlexDag commented 3 years ago

Steps to reproduce the issue:

1. minikube start --cpus=4 --memory=8192 --alsologtostderr -v=1

Full output of failed command: I0201 18:00:10.214191 307281 out.go:229] Setting OutFile to fd 1 ... I0201 18:00:10.214691 307281 out.go:281] isatty.IsTerminal(1) = true I0201 18:00:10.214704 307281 out.go:242] Setting ErrFile to fd 2... I0201 18:00:10.214712 307281 out.go:281] isatty.IsTerminal(2) = true I0201 18:00:10.214824 307281 root.go:291] Updating PATH: /home/airflow/.minikube/bin I0201 18:00:10.215421 307281 out.go:236] Setting JSON to false I0201 18:00:10.219600 307281 start.go:106] hostinfo: {"hostname":"minikube","uptime":5202766,"bootTime":1607010444,"procs":336,"os":"linux","platform":"redhat","platformFamily":"rhel","platformVersion":"7.9","kernelVersion":"3.10.0-1160.2.1.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2c37bd6b-e6fd-429b-ae3a-0f39108e98d7"} I0201 18:00:10.219672 307281 start.go:116] virtualization:
I0201 18:00:10.222048 307281 out.go:119] 😄 minikube v1.17.1 on Redhat 7.9 😄 minikube v1.17.1 on Redhat 7.9 I0201 18:00:10.222198 307281 driver.go:315] Setting default libvirt URI to qemu:///system I0201 18:00:10.222341 307281 notify.go:126] Checking for updates... I0201 18:00:10.242171 307281 docker.go:115] docker version: linux-1.13.1 I0201 18:00:10.242264 307281 cli_runner.go:111] Run: docker system info --format "{{json .}}" I0201 18:00:10.280336 307281 info.go:253] docker info: {ID:NAM4:SLB2:DDEF:25CD:QDXT:PVSK:V3EA:7VDJ:OWII:7NKY:SYUE:MCBS Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[rhel-push-plugin] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:31 SystemTime:2021-02-01 18:00:10.268210329 -0300 -03 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-1160.2.1.el7.x86_64 OperatingSystem:Red Hat OSType:linux Architecture:x86_64 IndexServerAddress:https://registry.access.redhat.com/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33566658560 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http://cla20996:07CF04c8@amba-proxy3.ctimovil.net:8083 HTTPSProxy:http://cla20996:07CF04c8@amba-proxy3.ctimovil.net:8083 NoProxy: Name:minikube Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:docker-runc}} 
DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:66aedde759f33c190954815fb765eedc1d782dd9 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0201 18:00:10.280686 307281 docker.go:145] overlay module found I0201 18:00:10.282347 307281 out.go:119] ✨ Using the docker driver based on user configuration ✨ Using the docker driver based on user configuration I0201 18:00:10.282378 307281 start.go:279] selected driver: docker I0201 18:00:10.282389 307281 start.go:702] validating driver "docker" against I0201 18:00:10.282413 307281 start.go:713] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0201 18:00:10.283906 307281 cli_runner.go:111] Run: docker system info --format "{{json .}}" I0201 18:00:10.320586 307281 info.go:253] docker info: {ID:NAM4:SLB2:DDEF:25CD:QDXT:PVSK:V3EA:7VDJ:OWII:7NKY:SYUE:MCBS Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[rhel-push-plugin] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:31 SystemTime:2021-02-01 
18:00:10.309533359 -0300 -03 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-1160.2.1.el7.x86_64 OperatingSystem:Red Hat OSType:linux Architecture:x86_64 IndexServerAddress:https://registry.access.redhat.com/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33566658560 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http://cla20996:07CF04c8@amba-proxy3.ctimovil.net:8083 HTTPSProxy:http://cla20996:07CF04c8@amba-proxy3.ctimovil.net:8083 NoProxy: Name:minikube Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:docker-runc}} DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:66aedde759f33c190954815fb765eedc1d782dd9 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0201 18:00:10.320702 307281 start_flags.go:249] no existing cluster config was found, will generate one from the flags I0201 18:00:10.320866 307281 start_flags.go:671] Wait components to verify : map[apiserver:true system_pods:true] I0201 18:00:10.320901 307281 cni.go:74] Creating CNI manager for "" I0201 18:00:10.320916 307281 cni.go:139] CNI unnecessary in this configuration, recommending no CNI I0201 18:00:10.320925 307281 start_flags.go:390] config: {Name:minikube KeepContext:false EmbedCerts:false 
MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0201 18:00:10.323128 307281 out.go:119] 👍 Starting control plane node minikube in cluster minikube 👍 Starting control plane node minikube in cluster minikube I0201 18:00:10.350138 307281 cache.go:120] Beginning downloading kic base image for docker with docker I0201 18:00:10.351961 307281 out.go:119] 🚜 Pulling base image ... 🚜 Pulling base image ... 
I0201 18:00:10.352015 307281 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker I0201 18:00:10.352052 307281 preload.go:105] Found local preload: /home/airflow/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4 I0201 18:00:10.352059 307281 cache.go:54] Caching tarball of preloaded images I0201 18:00:10.352072 307281 preload.go:131] Found /home/airflow/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0201 18:00:10.352078 307281 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker I0201 18:00:10.352209 307281 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local daemon I0201 18:00:10.352236 307281 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local daemon I0201 18:00:10.352364 307281 profile.go:148] Saving config to /home/airflow/.minikube/profiles/minikube/config.json ... 
I0201 18:00:10.352401 307281 lock.go:36] WriteFile acquiring /home/airflow/.minikube/profiles/minikube/config.json: {Name:mka915f8e4b287f045762c6107795e5ce4984fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0201 18:00:47.705392 307281 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e I0201 18:00:47.705430 307281 cache.go:185] Successfully downloaded all kic artifacts I0201 18:00:47.705518 307281 start.go:313] acquiring machines lock for minikube: {Name:mk879ad47d06530d9f9207167e7720a9e983a9a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0201 18:00:47.705670 307281 start.go:317] acquired machines lock for "minikube" in 127.737µs I0201 18:00:47.705701 307281 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:8192 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 
NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0201 18:00:47.705784 307281 start.go:126] createHost starting for "" (driver="docker") I0201 18:00:47.708129 307281 out.go:140] 🔥 Creating docker container (CPUs=4, Memory=8192MB) ... 🔥 Creating docker container (CPUs=4, Memory=8192MB) ... I0201 18:00:47.708349 307281 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0201 18:00:47.708393 307281 client.go:168] LocalClient.Create starting I0201 18:00:47.708709 307281 main.go:119] libmachine: Reading certificate data from /home/airflow/.minikube/certs/ca.pem I0201 18:00:47.708744 307281 main.go:119] libmachine: Decoding PEM data... I0201 18:00:47.708764 307281 main.go:119] libmachine: Parsing certificate... I0201 18:00:47.708881 307281 main.go:119] libmachine: Reading certificate data from /home/airflow/.minikube/certs/cert.pem I0201 18:00:47.708905 307281 main.go:119] libmachine: Decoding PEM data... I0201 18:00:47.708922 307281 main.go:119] libmachine: Parsing certificate... 
I0201 18:00:47.709308 307281 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" W0201 18:00:47.729774 307281 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 64 I0201 18:00:47.730095 307281 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs... I0201 18:00:47.730122 307281 cli_runner.go:111] Run: docker network inspect minikube I0201 18:00:47.749503 307281 network_create.go:254] output of [docker network inspect minikube]: -- stdout -- [ { "Name": "minikube", "Id": "d251543ce5c0af63a2b9061c57fdedef58b813e7f26b08c24a5d090e9e5425ea", "Created": "2021-02-01T17:59:24.175763378-03:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "172.19.0.0/16", "Gateway": "172.19.0.1" } ] }, "Internal": false, "Attachable": false, "Containers": {}, "Options": { "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]

-- /stdout -- I0201 18:00:47.749653 307281 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" W0201 18:00:47.767236 307281 cli_runner.go:149] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 64 I0201 18:00:47.767303 307281 network_create.go:249] running [docker network inspect bridge] to gather additional debugging logs... 
I0201 18:00:47.767322 307281 cli_runner.go:111] Run: docker network inspect bridge I0201 18:00:47.784991 307281 network_create.go:254] output of [docker network inspect bridge]: -- stdout -- [ { "Name": "bridge", "Id": "2333d024af47edca04dd0663b30cc2f2146655ac9212be89af7f322e5912bc13", "Created": "2020-12-03T12:51:05.591694707-03:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" } ] }, "Internal": false, "Attachable": false, "Containers": {}, "Options": { "com.docker.network.bridge.default_bridge": "true", "com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]

-- /stdout -- W0201 18:00:47.785075 307281 network_create.go:71] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}": exit status 64 stdout:

stderr: Template parsing error: template: :1: unexpected "=" in operand I0201 18:00:47.785107 307281 network_create.go:104] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 0 ... I0201 18:00:47.785165 307281 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube W0201 18:00:47.803081 307281 cli_runner.go:149] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube returned with exit code 1 E0201 18:00:47.803176 307281 network_create.go:85] error while trying to create network create network minikube 192.168.49.0/24: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube: exit status 1 stdout:

stderr: Error response from daemon: network with name minikube already exists W0201 18:00:47.803961 307281 out.go:181] ❗ Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create network minikube 192.168.49.0/24: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube: exit status 1 stdout:

stderr: Error response from daemon: network with name minikube already exists

โ— Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create network minikube 192.168.49.0/24: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube: exit status 1 stdout:

stderr: Error response from daemon: network with name minikube already exists
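For context on the first failure in this log: the `--format` string minikube passes to `docker network inspect` uses the Go template *assignment* syntax (`{{$first = false}}`), which `text/template` only accepts since Go 1.11. Docker 1.13.1 appears to bundle an older template engine, hence `Template parsing error: template: :1: unexpected "=" in operand`. A minimal, self-contained sketch of the same first-element trick, with hypothetical sample data, runnable on any Go 1.11+ toolchain:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render applies minikube's first-element join trick: declare $first with
// ":=", then *assign* to it with "=" inside the range. The assignment form
// was only added to text/template in Go 1.11; older parsers (such as the
// one in Docker 1.13.1) reject it with `unexpected "=" in operand`.
func render(items []string) (string, error) {
	const format = `{{$first := true}}{{range .}}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{.}}"{{end}}`

	tmpl, err := template.New("ips").Parse(format) // old engines fail here, at parse time
	if err != nil {
		return "", err
	}

	var out strings.Builder
	if err := tmpl.Execute(&out, items); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Hypothetical container IPs standing in for the .Containers values
	// that the real command iterates over.
	s, err := render([]string{"172.19.0.2", "172.19.0.3"})
	if err != nil {
		fmt.Println("Template parsing error:", err)
		return
	}
	fmt.Println(s)
}
```

On a toolchain (or a Docker CLI built against one) older than Go 1.11, the `Parse` call is where the `unexpected "="` error surfaces, which matches the exit code 64 seen for both `docker network inspect` runs above.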

I0201 18:00:47.804053 307281 cli_runner.go:111] Run: docker ps -a --format {{.Names}} I0201 18:00:47.822485 307281 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0201 18:00:47.842260 307281 oci.go:102] Successfully created a docker volume minikube I0201 18:00:47.842337 307281 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib I0201 18:00:48.585335 307281 oci.go:106] Successfully prepared a docker volume minikube I0201 18:00:48.585421 307281 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker W0201 18:00:48.585684 307281 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted. I0201 18:00:48.585850 307281 preload.go:105] Found local preload: /home/airflow/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4 I0201 18:00:48.585859 307281 kic.go:163] Starting extracting preloaded images to volume ... 
I0201 18:00:48.585912 307281 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/airflow/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir I0201 18:00:48.586062 307281 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'" I0201 18:00:48.627638 307281 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --memory=8192mb --memory-swap=8192mb --cpus=4 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e I0201 18:00:49.012248 307281 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Running}} I0201 18:00:49.040609 307281 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} I0201 18:00:49.069804 307281 cli_runner.go:111] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables W0201 18:00:49.129157 307281 cli_runner.go:149] docker run --rm --entrypoint /usr/bin/tar -v /home/airflow/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 2 I0201 
18:00:49.129221 307281 kic.go:170] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /home/airflow/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 2 stdout:

stderr: tar (child): /preloaded.tar: Cannot open: Permission denied tar (child): Error is not recoverable: exiting now /usr/bin/tar: Child returned status 2 /usr/bin/tar: Error is not recoverable: exiting now I0201 18:00:49.143565 307281 oci.go:246] the created container "minikube" has a running status. I0201 18:00:49.143600 307281 kic.go:194] Creating ssh key for kic: /home/airflow/.minikube/machines/minikube/id_rsa... I0201 18:00:49.533999 307281 kic_runner.go:188] docker (temp): /home/airflow/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0201 18:00:49.686203 307281 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} I0201 18:00:49.709315 307281 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0201 18:00:49.709344 307281 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0201 18:00:49.781435 307281 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} I0201 18:00:49.803977 307281 machine.go:88] provisioning docker machine ... 
I0201 18:00:49.804053 307281 ubuntu.go:169] provisioning hostname "minikube" I0201 18:00:49.804127 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:00:49.828415 307281 main.go:119] libmachine: Using SSH client type: native I0201 18:00:49.828715 307281 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 32839 } I0201 18:00:49.828748 307281 main.go:119] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0201 18:00:49.830913 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58738->127.0.0.1:32839: read: connection reset by peer I0201 18:00:52.832220 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58742->127.0.0.1:32839: read: connection reset by peer I0201 18:00:55.833404 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58746->127.0.0.1:32839: read: connection reset by peer I0201 18:00:58.836276 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58750->127.0.0.1:32839: read: connection reset by peer I0201 18:01:01.839103 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58764->127.0.0.1:32839: read: connection reset by peer I0201 18:01:04.841966 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58768->127.0.0.1:32839: read: connection reset by peer I0201 18:01:07.842803 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58772->127.0.0.1:32839: read: connection reset by peer I0201 18:01:10.843636 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58776->127.0.0.1:32839: read: connection reset by peer I0201 18:01:13.844466 307281 main.go:119] 
libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58780->127.0.0.1:32839: read: connection reset by peer I0201 18:01:16.845326 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58794->127.0.0.1:32839: read: connection reset by peer I0201 18:01:19.846155 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58798->127.0.0.1:32839: read: connection reset by peer I0201 18:01:22.849453 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58802->127.0.0.1:32839: read: connection reset by peer I0201 18:01:25.850527 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58806->127.0.0.1:32839: read: connection reset by peer I0201 18:01:28.851400 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58810->127.0.0.1:32839: read: connection reset by peer I0201 18:01:31.853613 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58824->127.0.0.1:32839: read: connection reset by peer I0201 18:01:34.856472 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58828->127.0.0.1:32839: read: connection reset by peer I0201 18:01:37.857271 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58832->127.0.0.1:32839: read: connection reset by peer I0201 18:01:40.857944 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58836->127.0.0.1:32839: read: connection reset by peer I0201 18:01:43.858668 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58840->127.0.0.1:32839: read: connection reset by peer I0201 18:01:46.859689 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58854->127.0.0.1:32839: read: connection reset by peer 
I0201 18:01:49.860637 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58858->127.0.0.1:32839: read: connection reset by peer I0201 18:01:52.861387 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58862->127.0.0.1:32839: read: connection reset by peer I0201 18:01:55.862946 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58866->127.0.0.1:32839: read: connection reset by peer I0201 18:01:58.866009 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58870->127.0.0.1:32839: read: connection reset by peer I0201 18:02:01.868287 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58884->127.0.0.1:32839: read: connection reset by peer I0201 18:02:04.871207 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58888->127.0.0.1:32839: read: connection reset by peer I0201 18:02:07.873638 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58892->127.0.0.1:32839: read: connection reset by peer I0201 18:02:10.875203 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58896->127.0.0.1:32839: read: connection reset by peer I0201 18:02:13.877964 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58900->127.0.0.1:32839: read: connection reset by peer I0201 18:02:16.879038 307281 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58914->127.0.0.1:32839: read: connection reset by peer I0201 18:02:20.016019 307281 main.go:119] libmachine: SSH cmd err, output: : minikube

I0201 18:02:20.016779 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:20.044949 307281 main.go:119] libmachine: Using SSH client type: native I0201 18:02:20.045299 307281 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 32839 } I0201 18:02:20.045329 307281 main.go:119] libmachine: About to run SSH command:

    if ! grep -xq '.*\sminikube' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
        else 
            echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
        fi
    fi

I0201 18:02:20.164192 307281 main.go:119] libmachine: SSH cmd err, output: : I0201 18:02:20.164867 307281 ubuntu.go:175] set auth options {CertDir:/home/airflow/.minikube CaCertPath:/home/airflow/.minikube/certs/ca.pem CaPrivateKeyPath:/home/airflow/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/airflow/.minikube/machines/server.pem ServerKeyPath:/home/airflow/.minikube/machines/server-key.pem ClientKeyPath:/home/airflow/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/airflow/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/airflow/.minikube} I0201 18:02:20.164906 307281 ubuntu.go:177] setting up certificates I0201 18:02:20.164917 307281 provision.go:83] configureAuth start I0201 18:02:20.164975 307281 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0201 18:02:20.186736 307281 provision.go:137] copyHostCerts I0201 18:02:20.186807 307281 exec_runner.go:145] found /home/airflow/.minikube/ca.pem, removing ... I0201 18:02:20.186823 307281 exec_runner.go:190] rm: /home/airflow/.minikube/ca.pem I0201 18:02:20.186892 307281 exec_runner.go:152] cp: /home/airflow/.minikube/certs/ca.pem --> /home/airflow/.minikube/ca.pem (1082 bytes) I0201 18:02:20.187026 307281 exec_runner.go:145] found /home/airflow/.minikube/cert.pem, removing ... I0201 18:02:20.187039 307281 exec_runner.go:190] rm: /home/airflow/.minikube/cert.pem I0201 18:02:20.187087 307281 exec_runner.go:152] cp: /home/airflow/.minikube/certs/cert.pem --> /home/airflow/.minikube/cert.pem (1123 bytes) I0201 18:02:20.187178 307281 exec_runner.go:145] found /home/airflow/.minikube/key.pem, removing ... 
I0201 18:02:20.187189 307281 exec_runner.go:190] rm: /home/airflow/.minikube/key.pem I0201 18:02:20.187224 307281 exec_runner.go:152] cp: /home/airflow/.minikube/certs/key.pem --> /home/airflow/.minikube/key.pem (1675 bytes) I0201 18:02:20.187331 307281 provision.go:111] generating server cert: /home/airflow/.minikube/machines/server.pem ca-key=/home/airflow/.minikube/certs/ca.pem private-key=/home/airflow/.minikube/certs/ca-key.pem org=airflow.minikube san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0201 18:02:20.338270 307281 provision.go:165] copyRemoteCerts I0201 18:02:20.338691 307281 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0201 18:02:20.338754 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:20.366443 307281 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/airflow/.minikube/machines/minikube/id_rsa Username:docker} I0201 18:02:20.461759 307281 ssh_runner.go:310] scp /home/airflow/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes) I0201 18:02:20.490444 307281 ssh_runner.go:310] scp /home/airflow/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes) I0201 18:02:20.518550 307281 ssh_runner.go:310] scp /home/airflow/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0201 18:02:20.546768 307281 provision.go:86] duration metric: configureAuth took 381.822496ms I0201 18:02:20.546812 307281 ubuntu.go:193] setting minikube options for container-runtime I0201 18:02:20.547087 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:20.569456 307281 main.go:119] libmachine: Using SSH client type: native I0201 18:02:20.569713 307281 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 32839 } I0201 18:02:20.569743 307281 
main.go:119] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0201 18:02:20.695090 307281 main.go:119] libmachine: SSH cmd err, output: : overlay

I0201 18:02:20.695126 307281 ubuntu.go:71] root file system type: overlay I0201 18:02:20.695403 307281 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... I0201 18:02:20.695490 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:20.721051 307281 main.go:119] libmachine: Using SSH client type: native I0201 18:02:20.721244 307281 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 32839 } I0201 18:02:20.721335 307281 main.go:119] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket

[Service] Type=notify Restart=on-failure StartLimitBurst=3 StartLimitIntervalSec=60

Environment="HTTP_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083" Environment="HTTPS_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083" Environment="NO_PROXY=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0201 18:02:20.857552 307281 main.go:119] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket

[Service] Type=notify Restart=on-failure StartLimitBurst=3 StartLimitIntervalSec=60

Environment=HTTP_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 Environment=HTTPS_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 Environment=NO_PROXY=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install] WantedBy=multi-user.target
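The empty `ExecStart=` immediately followed by a full `ExecStart=...` in the unit above is the standard systemd override idiom: list-valued directives accumulate across drop-ins, so an empty assignment clears the inherited command before the replacement is set. A minimal sketch of the same pattern (illustrative path and command, not minikube's actual files):

```
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd://
```

Without the first empty `ExecStart=`, systemd would see two start commands and refuse to start the service, exactly as the comment in the generated unit warns.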

I0201 18:02:20.857749 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:20.883739 307281 main.go:119] libmachine: Using SSH client type: native I0201 18:02:20.883904 307281 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 32839 } I0201 18:02:20.883927 307281 main.go:119] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0201 18:02:27.646253 307281 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2020-12-28 16:15:19.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2021-02-01 21:02:20.854512400 +0000 @@ -1,30 +1,35 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket

[Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. +Restart=on-failure StartLimitBurst=3 +StartLimitIntervalSec=60

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s +Environment=HTTP_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 +Environment=HTTPS_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 +Environment=NO_PROXY=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24 + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +37,16 @@ LimitNPROC=infinity LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process -OOMScoreAdjust=-500

[Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker

I0201 18:02:27.646346 307281 machine.go:91] provisioned docker machine in 1m37.842329412s I0201 18:02:27.646357 307281 client.go:171] LocalClient.Create took 1m39.937954336s I0201 18:02:27.646367 307281 start.go:168] duration metric: libmachine.API.Create for "minikube" took 1m39.938018908s I0201 18:02:27.646381 307281 start.go:267] post-start starting for "minikube" (driver="docker") I0201 18:02:27.646390 307281 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0201 18:02:27.646682 307281 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0201 18:02:27.646732 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:27.672636 307281 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/airflow/.minikube/machines/minikube/id_rsa Username:docker} I0201 18:02:27.767136 307281 ssh_runner.go:149] Run: cat /etc/os-release I0201 18:02:27.770983 307281 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0201 18:02:27.771018 307281 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0201 18:02:27.771036 307281 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0201 18:02:27.771054 307281 info.go:137] Remote host: Ubuntu 20.04.1 LTS I0201 18:02:27.771076 307281 filesync.go:118] Scanning /home/airflow/.minikube/addons for local assets ... 
I0201 18:02:27.771129 307281 filesync.go:118] Scanning /home/airflow/.minikube/files for local assets ... I0201 18:02:27.771158 307281 start.go:270] post-start completed in 124.767333ms I0201 18:02:27.771607 307281 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0201 18:02:27.798559 307281 profile.go:148] Saving config to /home/airflow/.minikube/profiles/minikube/config.json ... I0201 18:02:27.798887 307281 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0201 18:02:27.798942 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:27.822686 307281 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/airflow/.minikube/machines/minikube/id_rsa Username:docker} I0201 18:02:27.912281 307281 start.go:129] duration metric: createHost completed in 1m40.206478926s I0201 18:02:27.912310 307281 start.go:80] releasing machines lock for "minikube", held for 1m40.20662578s I0201 18:02:27.912406 307281 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0201 18:02:27.937623 307281 out.go:119] ๐ŸŒ Found network options: ๐ŸŒ Found network options: I0201 18:02:27.940617 307281 out.go:119] โ–ช http_proxy=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 โ–ช http_proxy=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 W0201 18:02:27.940727 307281 out.go:181] โ— You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.17.0.3). โ— You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.17.0.3). 
I0201 18:02:27.951328 307281 out.go:119] ๐Ÿ“˜ Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details ๐Ÿ“˜ Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details I0201 18:02:27.954166 307281 out.go:119] โ–ช https_proxy=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 โ–ช https_proxy=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 I0201 18:02:27.959274 307281 out.go:119] โ–ช no_proxy=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24 โ–ช no_proxy=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24 I0201 18:02:27.959513 307281 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0201 18:02:27.959605 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:27.959649 307281 ssh_runner.go:149] Run: systemctl --version I0201 18:02:27.959700 307281 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0201 18:02:27.986171 307281 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/airflow/.minikube/machines/minikube/id_rsa Username:docker} I0201 18:02:27.986710 307281 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/airflow/.minikube/machines/minikube/id_rsa Username:docker} I0201 18:02:33.121143 307281 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (5.16159271s) W0201 18:02:33.121180 307281 start.go:509] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 28 stdout:

stderr: curl: (28) Resolving timed out after 2000 milliseconds I0201 18:02:33.121213 307281 ssh_runner.go:189] Completed: systemctl --version: (5.16154351s) I0201 18:02:33.121286 307281 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd W0201 18:02:33.121315 307281 out.go:181] โ— This container is having trouble accessing https://k8s.gcr.io โ— This container is having trouble accessing https://k8s.gcr.io W0201 18:02:33.121353 307281 out.go:181] ๐Ÿ’ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ ๐Ÿ’ก To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0201 18:02:33.134595 307281 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0201 18:02:33.146730 307281 cruntime.go:200] skipping containerd shutdown because we are bound to it I0201 18:02:33.146795 307281 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0201 18:02:33.158966 307281 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0201 18:02:33.177122 307281 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0201 18:02:33.189177 307281 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0201 18:02:33.264427 307281 ssh_runner.go:149] Run: sudo systemctl start docker I0201 18:02:33.278283 307281 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} I0201 18:02:33.356212 307281 out.go:140] ๐Ÿณ Preparing Kubernetes v1.20.2 on Docker 20.10.2 ... ๐Ÿณ Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...| I0201 18:02:33.363587 307281 out.go:119] โ–ช env HTTP_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083


I0201 18:02:33.369108 307281 out.go:119] โ–ช env HTTPS_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 โ–ช env HTTPS_PROXY=http://dockerproxy:2Bprk1mcYpb7mPr@amba-proxy3.ctimovil.net:8083 I0201 18:02:33.373517 307281 out.go:119] โ–ช env NO_PROXY=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24 โ–ช env NO_PROXY=localhost,127.0.0.1,localaddress,.claro.amx,10.92.194.0/24 I0201 18:02:33.373615 307281 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" W0201 18:02:33.393070 307281 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 64 I0201 18:02:33.393145 307281 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs... 
I0201 18:02:33.393168 307281 cli_runner.go:111] Run: docker network inspect minikube I0201 18:02:33.411686 307281 network_create.go:254] output of [docker network inspect minikube]: -- stdout -- [ { "Name": "minikube", "Id": "d251543ce5c0af63a2b9061c57fdedef58b813e7f26b08c24a5d090e9e5425ea", "Created": "2021-02-01T17:59:24.175763378-03:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "172.19.0.0/16", "Gateway": "172.19.0.1" } ] }, "Internal": false, "Attachable": false, "Containers": {}, "Options": { "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]

-- /stdout -- E0201 18:02:33.411776 307281 start.go:99] Unable to get host IP: network inspect: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}": exit status 64 stdout:

stderr: Template parsing error: template: :1: unexpected "=" in operand E0201 18:02:33.412443 307281 out.go:330] unable to execute Failed to setup kubeconfig: network inspect: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}": exit status 64 stdout:

stderr: Template parsing error: template: :1: unexpected "=" in operand : template: Failed to setup kubeconfig: network inspect: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}": exit status 64 stdout:

stderr: Template parsing error: template: :1: unexpected "=" in operand :1:253: executing "Failed to setup kubeconfig: network inspect: docker network inspect minikube --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}},{{$first := true}} \"ContainerIPs\": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}\"{{$v.IPv4Address}}\"{{end}}]}\": exit status 64\nstdout:\n\nstderr:\nTemplate parsing error: template: :1: unexpected \"=\" in operand\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string. I0201 18:02:33.429322 307281 out.go:119]

W0201 18:02:33.429480 307281 out.go:181] ❌ Exiting due to GUEST_START: Failed to setup kubeconfig: network inspect: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}": exit status 64 stdout:

stderr: Template parsing error: template: :1: unexpected "=" in operand

โŒ Exiting due to GUEST_START: Failed to setup kubeconfig: network inspect: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}": exit status 64 stdout:

stderr: Template parsing error: template: :1: unexpected "=" in operand

W0201 18:02:33.429551 307281 out.go:181]

W0201 18:02:33.429580 307281 out.go:181] 😿 If the above advice does not help, please let us know: W0201 18:02:33.429614 307281 out.go:181] 👉 https://github.com/kubernetes/minikube/issues/new/choose I0201 18:02:33.445673 307281 out.go:119]
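The failure above comes down to one construct in minikube's `--format` string: Go template variable *assignment*, `{{$first = false}}`. Assignment (as opposed to declaration with `:=`) was only added to Go's `text/template` in Go 1.11; the Docker 1.13.1 client that ships with RHEL 7 bundles an older template engine, so it cannot parse the `=` and exits with status 64, which is consistent with the "old docker" in the issue title. A standalone sketch of the construct (plain Go, not minikube code):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render executes the same assignment-style template that minikube passes to
// `docker network inspect --format`: declare $first with :=, then reassign it
// with = inside the loop to suppress the leading comma. Parsing the "=" form
// requires a Go >= 1.11 template engine; an older one (such as the engine in
// Docker 1.13.1) rejects it with: unexpected "=" in operand.
func render(items []string) (string, error) {
	const tmpl = `{{$first := true}}{{range .}}{{if $first}}{{$first = false}}{{else}}, {{end}}{{.}}{{end}}`
	t, err := template.New("ips").Parse(tmpl)
	if err != nil {
		return "", err // an old template engine would fail here, at parse time
	}
	var b strings.Builder
	if err := t.Execute(&b, items); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	out, err := render([]string{"172.19.0.2", "172.19.0.3"})
	fmt.Println(out, err) // prints: 172.19.0.2, 172.19.0.3 <nil>
}
```

Running a Docker client new enough to understand template assignment (for example docker-ce rather than RHEL 7's `docker` 1.13.1 package) avoids the parse error.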

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Mon 2021-02-01 21:01:36 UTC, end at Mon 2021-02-01 21:04:53 UTC. -- Feb 01 21:02:20 minikube systemd[1]: Starting Docker Application Container Engine... Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.872595098Z" level=info msg="Starting up" Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.875160030Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.875186078Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.875207276Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.875224916Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.877193738Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.877217054Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.877235989Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.877245305Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 01 21:02:20 minikube dockerd[247]: time="2021-02-01T21:02:20.925256320Z" level=info msg="Loading containers: start." Feb 01 21:02:21 minikube dockerd[247]: time="2021-02-01T21:02:21.012432307Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 01 21:02:21 minikube dockerd[247]: time="2021-02-01T21:02:21.063716282Z" level=info msg="Loading containers: done." Feb 01 21:02:21 minikube dockerd[247]: time="2021-02-01T21:02:21.086722832Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2 Feb 01 21:02:21 minikube dockerd[247]: time="2021-02-01T21:02:21.086866002Z" level=info msg="Daemon has completed initialization" Feb 01 21:02:21 minikube dockerd[247]: time="2021-02-01T21:02:21.131728109Z" level=info msg="API listen on /run/docker.sock" Feb 01 21:02:21 minikube systemd[1]: Started Docker Application Container Engine. Feb 01 21:02:25 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Feb 01 21:02:25 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Feb 01 21:02:27 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Feb 01 21:02:27 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Feb 01 21:02:27 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Feb 01 21:02:27 minikube systemd[1]: Stopping Docker Application Container Engine... 
Feb 01 21:02:27 minikube dockerd[247]: time="2021-02-01T21:02:27.341476895Z" level=info msg="Processing signal 'terminated'" Feb 01 21:02:27 minikube dockerd[247]: time="2021-02-01T21:02:27.342856985Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Feb 01 21:02:27 minikube dockerd[247]: time="2021-02-01T21:02:27.343698463Z" level=info msg="Daemon shutdown complete" Feb 01 21:02:27 minikube systemd[1]: docker.service: Succeeded. Feb 01 21:02:27 minikube systemd[1]: Stopped Docker Application Container Engine. Feb 01 21:02:27 minikube systemd[1]: Starting Docker Application Container Engine... Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.432553157Z" level=info msg="Starting up" Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.435562979Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.435596346Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.435629555Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.435655054Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.437046386Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.437078741Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.437104772Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.437122103Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.447319942Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.452180436Z" level=info msg="Loading containers: start." Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.558217720Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.605840297Z" level=info msg="Loading containers: done." Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.631458415Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2 Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.631565610Z" level=info msg="Daemon has completed initialization" Feb 01 21:02:27 minikube systemd[1]: Started Docker Application Container Engine. Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.653069541Z" level=info msg="API listen on [::]:2376" Feb 01 21:02:27 minikube dockerd[431]: time="2021-02-01T21:02:27.657813532Z" level=info msg="API listen on /var/run/docker.sock" Feb 01 21:02:33 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. 
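The repeated `Unknown key name 'StartLimitIntervalSec' in section 'Service'` warnings in the Docker log above match the note in the diff minikube applied: the StartLimit* settings moved from `[Service]` to `[Unit]` in systemd 229, and on recent systemd the new-style name `StartLimitIntervalSec` is only recognized in `[Unit]` (only the legacy `StartLimitInterval` spelling is still accepted in `[Service]` for compatibility). A sketch of the placement current systemd expects (assumed layout, not minikube's generated unit):

```
[Unit]
Description=Docker Application Container Engine
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Restart=on-failure
```

The warnings are benign here (systemd simply ignores the key), but they explain why the rate-limit setting from the generated unit has no effect in the guest.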
==> container status <==
time="2021-02-01T21:04:55Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

==> describe nodes <==
E0201 18:04:56.048480 308972 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
sudo: /var/lib/minikube/binaries/v1.20.2/kubectl: command not found
output: "\n** stderr ** \nsudo: /var/lib/minikube/binaries/v1.20.2/kubectl: command not found\n\n** /stderr **"

==> dmesg <==
[ +0.112209] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.059285] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.000325] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Dec28 20:23] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan12 15:40] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan25 14:55] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.103953] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.124355] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.029343] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.001086] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan25 15:22] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.090549] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.148278] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.000022] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.000447] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan25 15:24] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.066174] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.200829] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.004923] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.000453] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 16:18] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 16:19] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 18:13] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.531227] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 18:20] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.456936] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 18:25] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.553958] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 18:28] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.503255] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 19:45] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 19:59] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Jan29 20:39] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +45.483883] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:11] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.535902] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:14] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.563247] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:18] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.682996] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:25] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.525818] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:37] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.531490] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:48] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.505651] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 19:51] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.542844] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 20:27] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.514634] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 20:31] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.540183] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 20:43] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.478241] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 20:49] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.505268] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 20:57] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.555675] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[Feb 1 21:00] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[ +0.499809] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)

==> kernel <==
21:04:56 up 60 days, 5:17, 0 users, load average: 0.03, 0.04, 0.05
Linux minikube 3.10.0-1160.2.1.el7.x86_64 #1 SMP Mon Sep 21 21:00:09 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kubelet <==
-- Logs begin at Mon 2021-02-01 21:01:36 UTC, end at Mon 2021-02-01 21:04:56 UTC. --
-- No entries --

❗ unable to fetch logs for: describe nodes
afbjorklund commented 3 years ago

Docker version 1.13.1 is too old; you will need to upgrade. Alternatively, you could use a VM driver to run minikube.
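The failure here boils down to a version gate: the RHEL 7 vendor Docker (1.13.1) predates what the minikube docker driver expects. A minimal sketch of such a check, assuming a simple tuple comparison (`MIN_SUPPORTED` and the function names are hypothetical, not minikube's actual code):

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '1.13.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical minimum; minikube's real threshold lives in its driver code.
MIN_SUPPORTED = (18, 9, 0)

def is_too_old(server_version: str) -> bool:
    return parse_version(server_version) < MIN_SUPPORTED

print(is_too_old("1.13.1"))   # True  -> upgrade Docker, or run minikube in a VM
print(is_too_old("20.10.2"))  # False
```

Tuple comparison is why this works without special-casing: `(1, 13, 1) < (18, 9, 0)` compares element by element, so "1.13.1" sorts far below any 18.x or 20.x release.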

Related to #10282 (and #10089, for supporting old system versions rather than the vendor versions)

medyagh commented 3 years ago

@AlexDag In the latest version of minikube we added support for ancient versions of Docker. I still recommend you upgrade Docker, but if you have to stay on an ancient Docker version, the latest minikube release (1.18.1) works with very old Docker versions too.
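The template error in the issue title comes from passing a modern Go-template `--format` query to a daemon too old to parse it. One way a client can tolerate that (a hedged illustration of the general approach, not minikube's actual implementation) is to fall back to parsing the plain `docker version` output, which has kept the same `Server:` / `Version:` shape since very early releases:

```python
def parse_server_version(output: str):
    """Extract the server version from plain `docker version` output,
    as a fallback when a Go-template query fails on an ancient daemon."""
    in_server = False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("Server"):
            in_server = True
        elif in_server and stripped.startswith("Version:"):
            return stripped.split(":", 1)[1].strip()
    return None

# Shape of the output an old daemon such as 1.13.1 produces (abbreviated).
old_output = """Client:
 Version:         1.13.1
Server:
 Version:         1.13.1
"""
print(parse_server_version(old_output))  # 1.13.1
```

The key detail is tracking the `Server:` section before accepting a `Version:` line, so the client version (which may differ) is never reported by mistake.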