kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

kic-base image gets downloaded every single time (when the main image is not available) #11102

Closed: medyagh closed this issue 3 years ago

medyagh commented 3 years ago

To reproduce:

Add this line to /etc/hosts to block access to gcr.io:

127.0.0.1   gcr.io

Then run:

$ minikube start --container-runtime=containerd --cni=cilium
😄  minikube v1.19.0 on Darwin 11.2.3
✨  Automatically selected the docker driver. Other choices: hyperkit, parallels, virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > index.docker.io/kicbase/sta...: 357.67 MiB / 357.67 MiB  100.00% 24.42 Mi
❗  minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.20, but successfully downloaded kicbase/stable:v0.0.20 as a fallback image
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
📦  Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Cilium (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Then delete the cluster:

minikube delete

and run start again:

$ minikube start --container-runtime=containerd --cni=cilium --wait=all
😄  minikube v1.19.0 on Darwin 11.2.3
✨  Automatically selected the docker driver. Other choices: hyperkit, parallels, virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > index.docker.io/kicbase/sta...: 357.67 MiB / 357.67 MiB  100.00% 26.23 Mi
❗  minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.20, but successfully downloaded kicbase/stable:v0.0.20 as a fallback image
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...

afbjorklund commented 3 years ago

This would get fixed by loading the image from the cache, but we only check whether the image already exists in the daemon for the original image name, not for the fallback:

        // Only the original image name (cc.KicBaseImage) is checked here; if
        // the fallback image (e.g. kicbase/stable) is what actually got pulled,
        // this check misses it and the base image is downloaded again.
        if image.ExistsImageInDaemon(cc.KicBaseImage) {
                klog.Infof("%s exists in daemon, skipping pull", cc.KicBaseImage)
                return
        }
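
A minimal sketch of the fix suggested here, checking the daemon for the fallback image as well as the original. The function name, the way the fallback is passed in, and the package/import paths are assumptions for illustration, not minikube's actual code:

        // Sketch only: package and import paths assume minikube's tree layout.
        package node

        import (
                "k8s.io/klog/v2"

                "k8s.io/minikube/pkg/minikube/config"
                "k8s.io/minikube/pkg/minikube/image"
        )

        // checkDaemonForBaseImage reports whether the configured base image
        // or its fallback already exists in the local daemon, so the pull
        // can be skipped in either case.
        func checkDaemonForBaseImage(cc config.ClusterConfig, fallback string) bool {
                for _, img := range []string{cc.KicBaseImage, fallback} {
                        if img == "" {
                                continue
                        }
                        if image.ExistsImageInDaemon(img) {
                                klog.Infof("%s exists in daemon, skipping pull", img)
                                return true
                        }
                }
                return false
        }
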
daehyeok commented 3 years ago

/assign

medyagh commented 3 years ago

I am trying this on an M1. This seems to be happening even for non-fallback images; my guess is that on M1 the SHA is different:

medya-macbookpro:Downloads medya$ docker images
REPOSITORY                    TAG       IMAGE ID       CREATED       SIZE
gcr.io/k8s-minikube/kicbase   <none>    c081d4d2d545   12 days ago   995MB
gcr.io/k8s-minikube/kicbase   v0.0.20   c6f4fc187bc1   12 days ago   1.09GB
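
If the pinned digest is what gets compared, that would explain the re-pull: the configured reference carries the multi-arch manifest-list digest, while the daemon stores the platform-specific arm64 image under a different digest. Below is a small illustrative check with go-containerregistry (which minikube uses for image handling); the program is a sketch under that assumption, not minikube's actual code:

        package main

        import (
                "fmt"
                "log"

                "github.com/google/go-containerregistry/pkg/name"
                "github.com/google/go-containerregistry/pkg/v1/daemon"
        )

        func main() {
                // The manifest-list digest pinned in minikube's cluster config.
                want := "sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6"

                tag, err := name.NewTag("gcr.io/k8s-minikube/kicbase:v0.0.20")
                if err != nil {
                        log.Fatal(err)
                }
                // Load whatever image the local docker daemon has under that tag.
                img, err := daemon.Image(tag)
                if err != nil {
                        log.Fatal(err) // the tag is not present in the daemon at all
                }
                got, err := img.Digest()
                if err != nil {
                        log.Fatal(err)
                }
                // On an arm64 host these two digests differ, so a naive
                // comparison concludes the image is missing and re-pulls it.
                fmt.Printf("want %s\ngot  %s\n", want, got)
        }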

The full output of the start, which shows the image being re-pulled even though the step above shows it is already in the daemon:


medya-macbookpro:Downloads medya$ time minikube start --alsologtostderr
I0421 14:31:23.204031   81512 out.go:278] Setting OutFile to fd 1 ...
I0421 14:31:23.204171   81512 out.go:330] isatty.IsTerminal(1) = true
I0421 14:31:23.204174   81512 out.go:291] Setting ErrFile to fd 2...
I0421 14:31:23.204177   81512 out.go:330] isatty.IsTerminal(2) = true
I0421 14:31:23.204242   81512 root.go:317] Updating PATH: /Users/medya/.minikube/bin
I0421 14:31:23.204503   81512 out.go:285] Setting JSON to false
I0421 14:31:23.226389   81512 start.go:108] hostinfo: {"hostname":"medya-macbookpro.roam.corp.google.com","uptime":70122,"bootTime":1618970561,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"9fe8c0da-8ed0-381c-9cec-2a779f3e1503"}
W0421 14:31:23.226505   81512 start.go:116] gopshost.Virtualization returned error: not implemented yet
I0421 14:31:23.244820   81512 out.go:157] 😄  minikube v1.19.0 on Darwin 11.2.3 (arm64)
😄  minikube v1.19.0 on Darwin 11.2.3 (arm64)
I0421 14:31:23.244955   81512 notify.go:126] Checking for updates...
I0421 14:31:23.245113   81512 driver.go:322] Setting default libvirt URI to qemu:///system
I0421 14:31:23.245124   81512 global.go:103] Querying for installed drivers using PATH=/Users/medya/.minikube/bin:/Users/medya/Downloads/google-cloud-sdk/bin:/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/go/bin
I0421 14:31:23.425796   81512 docker.go:119] docker version: linux-20.10.5
I0421 14:31:23.425894   81512 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0421 14:31:23.765758   81512 info.go:261] docker info: {ID:SNVM:5N7I:XTXY:DAKJ:PVJW:J4DA:27BY:3K6G:ICBH:FKAM:VHNZ:CTR3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2021-04-21 21:31:23.609650096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:2085613568 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0421 14:31:23.765847   81512 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0421 14:31:23.765904   81512 global.go:111] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/}
I0421 14:31:23.765940   81512 global.go:111] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/}
I0421 14:31:23.765972   81512 global.go:111] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0421 14:31:23.765978   81512 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0421 14:31:23.766018   81512 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0421 14:31:23.766047   81512 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0421 14:31:23.766053   81512 global.go:111] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/}
I0421 14:31:23.766062   81512 driver.go:258] not recommending "ssh" due to default: false
I0421 14:31:23.766070   81512 driver.go:292] Picked: docker
I0421 14:31:23.766073   81512 driver.go:293] Alternatives: [ssh]
I0421 14:31:23.766075   81512 driver.go:294] Rejects: [hyperkit parallels podman virtualbox vmware vmwarefusion]
I0421 14:31:23.803871   81512 out.go:157] ✨  Automatically selected the docker driver
✨  Automatically selected the docker driver
I0421 14:31:23.803906   81512 start.go:276] selected driver: docker
I0421 14:31:23.803914   81512 start.go:718] validating driver "docker" against <nil>
I0421 14:31:23.803937   81512 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0421 14:31:23.804202   81512 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0421 14:31:24.134203   81512 info.go:261] docker info: {ID:SNVM:5N7I:XTXY:DAKJ:PVJW:J4DA:27BY:3K6G:ICBH:FKAM:VHNZ:CTR3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2021-04-21 21:31:24.003487721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:2085613568 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0421 14:31:24.134354   81512 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
W0421 14:31:24.134392   81512 info.go:50] Unable to get CPU info: no such file or directory
W0421 14:31:24.136844   81512 start.go:881] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
W0421 14:31:24.136871   81512 info.go:50] Unable to get CPU info: no such file or directory
W0421 14:31:24.136873   81512 start.go:881] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
I0421 14:31:24.136886   81512 start_flags.go:311] Using suggested 1988MB memory alloc based on sys=16384MB, container=1988MB
I0421 14:31:24.136976   81512 start_flags.go:730] Wait components to verify : map[apiserver:true system_pods:true]
I0421 14:31:24.136989   81512 cni.go:81] Creating CNI manager for ""
I0421 14:31:24.136993   81512 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0421 14:31:24.137003   81512 start_flags.go:270] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0421 14:31:24.155264   81512 out.go:157] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0421 14:31:24.155312   81512 image.go:107] Checking for gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon
I0421 14:31:24.368470   81512 cache.go:120] Beginning downloading kic base image for docker with docker
I0421 14:31:24.386091   81512 out.go:157] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0421 14:31:24.386111   81512 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0421 14:31:24.386152   81512 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-arm64.tar.lz4
I0421 14:31:24.386156   81512 cache.go:54] Caching tarball of preloaded images
I0421 14:31:24.386167   81512 preload.go:131] Found /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0421 14:31:24.386170   81512 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
I0421 14:31:24.386176   81512 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 to local daemon
I0421 14:31:24.386192   81512 image.go:162] Writing gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 to local daemon
I0421 14:31:24.386437   81512 profile.go:148] Saving config to /Users/medya/.minikube/profiles/minikube/config.json ...
I0421 14:31:24.386457   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/config.json: {Name:mkcfdcaaa21816d14cd9720660d7b2e91b28d741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
    > gcr.io/k8s-minikube/kicbase...: 357.67 MiB / 357.67 MiB  100.00% 21.40 Mi
I0421 14:31:41.947272   81512 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6
I0421 14:31:41.947287   81512 cache.go:185] Successfully downloaded all kic artifacts
I0421 14:31:41.947323   81512 start.go:313] acquiring machines lock for minikube: {Name:mk056ef9e1e774511ad280f3f358ff4888f064af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 14:31:41.947520   81512 start.go:317] acquired machines lock for "minikube" in 180.083µs
I0421 14:31:41.947547   81512 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0421 14:31:41.947674   81512 start.go:126] createHost starting for "" (driver="docker")
I0421 14:31:41.984357   81512 out.go:184] 🔥  Creating docker container (CPUs=2, Memory=1988MB) ...
🔥  Creating docker container (CPUs=2, Memory=1988MB) ...
| I0421 14:31:41.984651   81512 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0421 14:31:41.984682   81512 client.go:168] LocalClient.Create starting
I0421 14:31:41.984893   81512 main.go:126] libmachine: Reading certificate data from /Users/medya/.minikube/certs/ca.pem
I0421 14:31:41.985087   81512 main.go:126] libmachine: Decoding PEM data...
I0421 14:31:41.985140   81512 main.go:126] libmachine: Parsing certificate...
I0421 14:31:41.985564   81512 main.go:126] libmachine: Reading certificate data from /Users/medya/.minikube/certs/cert.pem
I0421 14:31:41.985705   81512 main.go:126] libmachine: Decoding PEM data...
I0421 14:31:41.985745   81512 main.go:126] libmachine: Parsing certificate...
I0421 14:31:41.987432   81512 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
- W0421 14:31:42.201978   81512 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0421 14:31:42.202088   81512 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0421 14:31:42.202108   81512 cli_runner.go:115] Run: docker network inspect minikube
| W0421 14:31:42.394879   81512 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0421 14:31:42.394903   81512 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0421 14:31:42.394914   81512 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0421 14:31:42.394999   81512 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
/ I0421 14:31:42.585744   81512 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x1400000f2e0] misses:0}
I0421 14:31:42.585777   81512 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0421 14:31:42.585792   81512 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0421 14:31:42.585872   81512 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
| I0421 14:31:48.173266   81512 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube: (5.587445542s)
I0421 14:31:48.191441   81512 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0421 14:31:48.191514   81512 kic.go:102] calculated static IP "192.168.49.2" for the "minikube" container
I0421 14:31:48.191942   81512 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
- I0421 14:31:48.425381   81512 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
| I0421 14:31:48.621438   81512 oci.go:102] Successfully created a docker volume minikube
I0421 14:31:48.621562   81512 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -d /var/lib
- I0421 14:31:49.266926   81512 oci.go:106] Successfully prepared a docker volume minikube
I0421 14:31:49.267070   81512 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0421 14:31:49.267137   81512 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-arm64.tar.lz4
I0421 14:31:49.267162   81512 kic.go:175] Starting extracting preloaded images to volume ...
I0421 14:31:49.267176   81512 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0421 14:31:49.267273   81512 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir
/ I0421 14:31:49.582736   81512 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1988mb --memory-swap=1988mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6
- I0421 14:31:59.159365   81512 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir: (9.892293334s)
I0421 14:31:59.159407   81512 kic.go:184] duration metric: took 9.892511 seconds to extract preloaded images to volume
\ I0421 14:31:59.692940   81512 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1988mb --memory-swap=1988mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6: (10.110417291s)
I0421 14:31:59.693049   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
/ I0421 14:31:59.884835   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
\ I0421 14:32:00.075761   81512 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
/ I0421 14:32:00.318663   81512 oci.go:278] the created container "minikube" has a running status.
I0421 14:32:00.318688   81512 kic.go:206] Creating ssh key for kic: /Users/medya/.minikube/machines/minikube/id_rsa...
\ I0421 14:32:00.449954   81512 kic_runner.go:188] docker (temp): /Users/medya/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
/ I0421 14:32:00.687182   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
\ I0421 14:32:00.877840   81512 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0421 14:32:00.877859   81512 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
/ I0421 14:32:01.124514   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
\ I0421 14:32:01.320153   81512 machine.go:88] provisioning docker machine ...
I0421 14:32:01.320182   81512 ubuntu.go:169] provisioning hostname "minikube"
I0421 14:32:01.320281   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
/ I0421 14:32:01.524784   81512 main.go:126] libmachine: Using SSH client type: native
I0421 14:32:01.524983   81512 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ca0c80] 0x100ca0c50 <nil>  [] 0s} 127.0.0.1 56422 <nil> <nil>}
I0421 14:32:01.524995   81512 main.go:126] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
- I0421 14:32:01.644544   81512 main.go:126] libmachine: SSH cmd err, output: <nil>: minikube

I0421 14:32:01.644662   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
| I0421 14:32:01.843505   81512 main.go:126] libmachine: Using SSH client type: native
I0421 14:32:01.843670   81512 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ca0c80] 0x100ca0c50 <nil>  [] 0s} 127.0.0.1 56422 <nil> <nil>}
I0421 14:32:01.843684   81512 main.go:126] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
/ I0421 14:32:01.954559   81512 main.go:126] libmachine: SSH cmd err, output: <nil>: 
I0421 14:32:01.954585   81512 ubuntu.go:175] set auth options {CertDir:/Users/medya/.minikube CaCertPath:/Users/medya/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/medya/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/medya/.minikube/machines/server.pem ServerKeyPath:/Users/medya/.minikube/machines/server-key.pem ClientKeyPath:/Users/medya/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/medya/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/medya/.minikube}
I0421 14:32:01.954603   81512 ubuntu.go:177] setting up certificates
I0421 14:32:01.954608   81512 provision.go:83] configureAuth start
I0421 14:32:01.954734   81512 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
\ I0421 14:32:02.153067   81512 provision.go:137] copyHostCerts
I0421 14:32:02.153155   81512 exec_runner.go:145] found /Users/medya/.minikube/ca.pem, removing ...
I0421 14:32:02.153165   81512 exec_runner.go:190] rm: /Users/medya/.minikube/ca.pem
I0421 14:32:02.153841   81512 exec_runner.go:152] cp: /Users/medya/.minikube/certs/ca.pem --> /Users/medya/.minikube/ca.pem (1074 bytes)
I0421 14:32:02.154052   81512 exec_runner.go:145] found /Users/medya/.minikube/cert.pem, removing ...
I0421 14:32:02.154058   81512 exec_runner.go:190] rm: /Users/medya/.minikube/cert.pem
I0421 14:32:02.154360   81512 exec_runner.go:152] cp: /Users/medya/.minikube/certs/cert.pem --> /Users/medya/.minikube/cert.pem (1119 bytes)
I0421 14:32:02.154478   81512 exec_runner.go:145] found /Users/medya/.minikube/key.pem, removing ...
I0421 14:32:02.154483   81512 exec_runner.go:190] rm: /Users/medya/.minikube/key.pem
I0421 14:32:02.154739   81512 exec_runner.go:152] cp: /Users/medya/.minikube/certs/key.pem --> /Users/medya/.minikube/key.pem (1679 bytes)
I0421 14:32:02.154816   81512 provision.go:111] generating server cert: /Users/medya/.minikube/machines/server.pem ca-key=/Users/medya/.minikube/certs/ca.pem private-key=/Users/medya/.minikube/certs/ca-key.pem org=medya.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
/ I0421 14:32:02.349385   81512 provision.go:165] copyRemoteCerts
I0421 14:32:02.350531   81512 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0421 14:32:02.350585   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
\ I0421 14:32:02.542158   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
| I0421 14:32:02.624711   81512 ssh_runner.go:316] scp /Users/medya/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0421 14:32:02.639295   81512 ssh_runner.go:316] scp /Users/medya/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0421 14:32:02.654026   81512 ssh_runner.go:316] scp /Users/medya/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
/ I0421 14:32:02.668234   81512 provision.go:86] duration metric: configureAuth took 713.635459ms
I0421 14:32:02.668244   81512 ubuntu.go:193] setting minikube options for container-runtime
I0421 14:32:02.668463   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
- I0421 14:32:02.864519   81512 main.go:126] libmachine: Using SSH client type: native
I0421 14:32:02.864691   81512 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ca0c80] 0x100ca0c50 <nil>  [] 0s} 127.0.0.1 56422 <nil> <nil>}
I0421 14:32:02.864702   81512 main.go:126] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
| I0421 14:32:02.977419   81512 main.go:126] libmachine: SSH cmd err, output: <nil>: overlay

I0421 14:32:02.977430   81512 ubuntu.go:71] root file system type: overlay
I0421 14:32:02.977866   81512 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0421 14:32:02.977980   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
- I0421 14:32:03.191368   81512 main.go:126] libmachine: Using SSH client type: native
I0421 14:32:03.191520   81512 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ca0c80] 0x100ca0c50 <nil>  [] 0s} 127.0.0.1 56422 <nil> <nil>}
I0421 14:32:03.191579   81512 main.go:126] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
\ I0421 14:32:03.305573   81512 main.go:126] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0421 14:32:03.305715   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
/ I0421 14:32:03.496548   81512 main.go:126] libmachine: Using SSH client type: native
I0421 14:32:03.496709   81512 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ca0c80] 0x100ca0c50 <nil>  [] 0s} 127.0.0.1 56422 <nil> <nil>}
I0421 14:32:03.496720   81512 main.go:126] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
\ I0421 14:32:28.896033   81512 main.go:126] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-03-02 20:17:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2021-04-21 21:32:03.302378003 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0421 14:32:28.896135   81512 machine.go:91] provisioned docker machine in 27.576709458s
I0421 14:32:28.896147   81512 client.go:171] LocalClient.Create took 46.912734s
I0421 14:32:28.896169   81512 start.go:168] duration metric: libmachine.API.Create for "minikube" took 46.912796083s
I0421 14:32:28.896179   81512 start.go:267] post-start starting for "minikube" (driver="docker")
I0421 14:32:28.896185   81512 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0421 14:32:28.896748   81512 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0421 14:32:28.896844   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
- I0421 14:32:29.199316   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
\ I0421 14:32:29.285320   81512 ssh_runner.go:149] Run: cat /etc/os-release
I0421 14:32:29.288319   81512 main.go:126] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0421 14:32:29.288335   81512 main.go:126] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0421 14:32:29.288341   81512 main.go:126] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0421 14:32:29.288344   81512 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0421 14:32:29.288348   81512 filesync.go:118] Scanning /Users/medya/.minikube/addons for local assets ...
I0421 14:32:29.288463   81512 filesync.go:118] Scanning /Users/medya/.minikube/files for local assets ...
I0421 14:32:29.288501   81512 start.go:270] post-start completed in 392.327625ms
I0421 14:32:29.289555   81512 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
/ I0421 14:32:29.480206   81512 profile.go:148] Saving config to /Users/medya/.minikube/profiles/minikube/config.json ...
I0421 14:32:29.481139   81512 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0421 14:32:29.481182   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
\ I0421 14:32:29.671508   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
I0421 14:32:29.753231   81512 start.go:129] duration metric: createHost completed in 47.806842208s
I0421 14:32:29.753250   81512 start.go:80] releasing machines lock for "minikube", held for 47.807022709s
I0421 14:32:29.753401   81512 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
/ I0421 14:32:29.943417   81512 ssh_runner.go:149] Run: systemctl --version
I0421 14:32:29.943480   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0421 14:32:29.943803   81512 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0421 14:32:29.944832   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
\ I0421 14:32:30.139123   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
I0421 14:32:30.139144   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
- I0421 14:32:30.409454   81512 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0421 14:32:30.419573   81512 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0421 14:32:30.428259   81512 cruntime.go:219] skipping containerd shutdown because we are bound to it
I0421 14:32:30.428413   81512 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0421 14:32:30.436559   81512 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0421 14:32:30.445732   81512 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0421 14:32:30.453288   81512 ssh_runner.go:149] Run: sudo systemctl daemon-reload
\ I0421 14:32:30.500543   81512 ssh_runner.go:149] Run: sudo systemctl start docker
I0421 14:32:30.508843   81512 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
| I0421 14:32:30.644795   81512 out.go:184] 🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...

🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
I0421 14:32:30.645024   81512 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
\ I0421 14:32:30.949019   81512 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
I0421 14:32:30.949182   81512 ssh_runner.go:149] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0421 14:32:30.952983   81512 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2    host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0421 14:32:30.960282   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
/ I0421 14:32:31.150295   81512 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0421 14:32:31.150330   81512 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-arm64.tar.lz4
I0421 14:32:31.150396   81512 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0421 14:32:31.181844   81512 docker.go:455] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0421 14:32:31.181871   81512 docker.go:392] Images already preloaded, skipping extraction
I0421 14:32:31.181964   81512 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
- I0421 14:32:31.210037   81512 docker.go:455] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0421 14:32:31.210053   81512 cache_images.go:74] Images are preloaded, skipping loading
I0421 14:32:31.210137   81512 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
\ I0421 14:32:31.324531   81512 cni.go:81] Creating CNI manager for ""
I0421 14:32:31.324546   81512 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0421 14:32:31.324551   81512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0421 14:32:31.324560   81512 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0421 14:32:31.324717   81512 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0421 14:32:31.324812   81512 kubeadm.go:897] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0421 14:32:31.324900   81512 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0421 14:32:31.330966   81512 binaries.go:44] Found k8s binaries, skipping transfer
I0421 14:32:31.331047   81512 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0421 14:32:31.337497   81512 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0421 14:32:31.346502   81512 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0421 14:32:31.355817   81512 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0421 14:32:31.365740   81512 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0421 14:32:31.368955   81512 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2   control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0421 14:32:31.376651   81512 certs.go:52] Setting up /Users/medya/.minikube/profiles/minikube for IP: 192.168.49.2
I0421 14:32:31.376782   81512 certs.go:171] skipping minikubeCA CA generation: /Users/medya/.minikube/ca.key
I0421 14:32:31.376827   81512 certs.go:171] skipping proxyClientCA CA generation: /Users/medya/.minikube/proxy-client-ca.key
I0421 14:32:31.376860   81512 certs.go:286] generating minikube-user signed cert: /Users/medya/.minikube/profiles/minikube/client.key
I0421 14:32:31.376874   81512 crypto.go:69] Generating cert /Users/medya/.minikube/profiles/minikube/client.crt with IP's: []
I0421 14:32:31.460652   81512 crypto.go:157] Writing cert to /Users/medya/.minikube/profiles/minikube/client.crt ...
I0421 14:32:31.460671   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/client.crt: {Name:mk9f0f049cf5411b9e6488e0409ca2ec9ef19f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:31.461660   81512 crypto.go:165] Writing key to /Users/medya/.minikube/profiles/minikube/client.key ...
I0421 14:32:31.461669   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/client.key: {Name:mk7386675aa4b24f4a1112e06a53a33750e44951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:31.461818   81512 certs.go:286] generating minikube signed cert: /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0421 14:32:31.461822   81512 crypto.go:69] Generating cert /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0421 14:32:31.528831   81512 crypto.go:157] Writing cert to /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0421 14:32:31.528841   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk1b420d73491b5f7ccd0bb5ceb42e91cf0ff2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:31.529062   81512 crypto.go:165] Writing key to /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0421 14:32:31.529066   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkcdbaf1f7076ca1c8364219eeb3f099d0d3135e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:31.529156   81512 certs.go:297] copying /Users/medya/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/medya/.minikube/profiles/minikube/apiserver.crt
I0421 14:32:31.529388   81512 certs.go:301] copying /Users/medya/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/medya/.minikube/profiles/minikube/apiserver.key
I0421 14:32:31.529495   81512 certs.go:286] generating aggregator signed cert: /Users/medya/.minikube/profiles/minikube/proxy-client.key
I0421 14:32:31.529499   81512 crypto.go:69] Generating cert /Users/medya/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0421 14:32:31.612641   81512 crypto.go:157] Writing cert to /Users/medya/.minikube/profiles/minikube/proxy-client.crt ...
I0421 14:32:31.612653   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd78d7d6ff919c02bc728dfedf8734089fc1c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:31.612854   81512 crypto.go:165] Writing key to /Users/medya/.minikube/profiles/minikube/proxy-client.key ...
I0421 14:32:31.612857   81512 lock.go:36] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/proxy-client.key: {Name:mk8d833a893ef6367116b3c6f13fc90a3a29812e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:31.613121   81512 certs.go:361] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/ca-key.pem (1675 bytes)
I0421 14:32:31.613152   81512 certs.go:361] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/ca.pem (1074 bytes)
I0421 14:32:31.613180   81512 certs.go:361] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/cert.pem (1119 bytes)
I0421 14:32:31.613205   81512 certs.go:361] found cert: /Users/medya/.minikube/certs/Users/medya/.minikube/certs/key.pem (1679 bytes)
I0421 14:32:31.614569   81512 ssh_runner.go:316] scp /Users/medya/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0421 14:32:31.643989   81512 ssh_runner.go:316] scp /Users/medya/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0421 14:32:31.658388   81512 ssh_runner.go:316] scp /Users/medya/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0421 14:32:31.673158   81512 ssh_runner.go:316] scp /Users/medya/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0421 14:32:31.688366   81512 ssh_runner.go:316] scp /Users/medya/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0421 14:32:31.702929   81512 ssh_runner.go:316] scp /Users/medya/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0421 14:32:31.715336   81512 ssh_runner.go:316] scp /Users/medya/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0421 14:32:31.729062   81512 ssh_runner.go:316] scp /Users/medya/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0421 14:32:31.742081   81512 ssh_runner.go:316] scp /Users/medya/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0421 14:32:31.755671   81512 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0421 14:32:31.765187   81512 ssh_runner.go:149] Run: openssl version
I0421 14:32:31.770621   81512 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0421 14:32:31.777141   81512 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0421 14:32:31.780266   81512 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Apr 21 21:24 /usr/share/ca-certificates/minikubeCA.pem
I0421 14:32:31.780337   81512 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0421 14:32:31.784736   81512 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0421 14:32:31.791145   81512 kubeadm.go:386] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0421 14:32:31.791258   81512 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0421 14:32:31.816364   81512 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0421 14:32:31.824767   81512 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0421 14:32:31.830340   81512 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0421 14:32:31.830402   81512 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0421 14:32:31.835945   81512 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0421 14:32:31.835968   81512 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0421 14:32:32.503962   81512 out.go:184]     β–ͺ Generating certificates and keys ...
    β–ͺ Generating certificates and keys ...
I0421 14:32:34.133733   81512 out.go:184]     β–ͺ Booting up control plane ...
    β–ͺ Booting up control plane ...
I0421 14:32:53.699350   81512 out.go:184]     β–ͺ Configuring RBAC rules ...
    β–ͺ Configuring RBAC rules ...
I0421 14:32:54.111099   81512 cni.go:81] Creating CNI manager for ""
I0421 14:32:54.111124   81512 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0421 14:32:54.111149   81512 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0421 14:32:54.111314   81512 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0421 14:32:54.111313   81512 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.19.0 minikube.k8s.io/commit=15cede53bdc5fe242228853e737333b09d4336b5 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_04_21T14_32_54_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0421 14:32:54.327800   81512 ops.go:34] apiserver oom_adj: -16
I0421 14:32:54.327903   81512 kubeadm.go:973] duration metric: took 216.741625ms to wait for elevateKubeSystemPrivileges.
I0421 14:32:54.327921   81512 kubeadm.go:388] StartCluster complete in 22.537391417s
I0421 14:32:54.327938   81512 settings.go:142] acquiring lock: {Name:mkd0284ca69bdf79a9a1575487bea0e283dfb439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:54.328134   81512 settings.go:150] Updating kubeconfig:  /Users/medya/.kube/config
I0421 14:32:54.329343   81512 lock.go:36] WriteFile acquiring /Users/medya/.kube/config: {Name:mk9fd218cbc52506c8b67871ae522c88260d21af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0421 14:32:54.855692   81512 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0421 14:32:54.855741   81512 start.go:200] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0421 14:32:54.855856   81512 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0421 14:32:54.855915   81512 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0421 14:32:54.855931   81512 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0421 14:32:54.855938   81512 addons.go:140] addon storage-provisioner should already be in state true
I0421 14:32:54.855956   81512 host.go:66] Checking if "minikube" exists ...
I0421 14:32:54.892031   81512 out.go:157] πŸ”Ž  Verifying Kubernetes components...

πŸ”Ž  Verifying Kubernetes components...
I0421 14:32:54.892687   81512 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0421 14:32:54.856132   81512 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0421 14:32:54.892940   81512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0421 14:32:54.856775   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0421 14:32:54.893900   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0421 14:32:54.912578   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0421 14:32:55.195290   81512 out.go:157]     β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0421 14:32:55.195430   81512 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0421 14:32:55.195438   81512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0421 14:32:55.195515   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0421 14:32:55.198784   81512 api_server.go:48] waiting for apiserver process to appear ...
I0421 14:32:55.198836   81512 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0421 14:32:55.201933   81512 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0421 14:32:55.201944   81512 addons.go:140] addon default-storageclass should already be in state true
I0421 14:32:55.201951   81512 host.go:66] Checking if "minikube" exists ...
I0421 14:32:55.202239   81512 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0421 14:32:55.210908   81512 api_server.go:68] duration metric: took 355.148417ms to wait for apiserver process to appear ...
I0421 14:32:55.210931   81512 api_server.go:84] waiting for apiserver healthz status ...
I0421 14:32:55.210936   81512 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:56421/healthz ...
I0421 14:32:55.225770   81512 api_server.go:241] https://127.0.0.1:56421/healthz returned 200:
ok
I0421 14:32:55.227717   81512 api_server.go:137] control plane version: v1.20.2
I0421 14:32:55.227732   81512 api_server.go:127] duration metric: took 16.798541ms to wait for apiserver health ...
I0421 14:32:55.227737   81512 system_pods.go:42] waiting for kube-system pods to appear ...
I0421 14:32:55.244336   81512 system_pods.go:58] 0 kube-system pods found
I0421 14:32:55.244355   81512 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0421 14:32:55.432428   81512 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0421 14:32:55.432444   81512 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0421 14:32:55.432520   81512 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0421 14:32:55.434259   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
I0421 14:32:55.513852   81512 system_pods.go:58] 0 kube-system pods found
I0421 14:32:55.513875   81512 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0421 14:32:55.542402   81512 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0421 14:32:55.665397   81512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56422 SSHKeyPath:/Users/medya/.minikube/machines/minikube/id_rsa Username:docker}
I0421 14:32:55.778404   81512 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0421 14:32:55.901632   81512 system_pods.go:58] 1 kube-system pods found
I0421 14:32:55.901680   81512 system_pods.go:60] "storage-provisioner" [45da68fb-60c6-4c4a-b177-bb5005c417e5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0421 14:32:55.901693   81512 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
I0421 14:32:55.944601   81512 out.go:157] 🌟  Enabled addons: storage-provisioner, default-storageclass
🌟  Enabled addons: storage-provisioner, default-storageclass
I0421 14:32:55.944653   81512 addons.go:330] enableAddons completed in 1.088836583s
I0421 14:32:56.327753   81512 system_pods.go:58] 1 kube-system pods found
I0421 14:32:56.327788   81512 system_pods.go:60] "storage-provisioner" [45da68fb-60c6-4c4a-b177-bb5005c417e5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0421 14:32:56.327799   81512 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0421 14:32:56.804948   81512 system_pods.go:58] 1 kube-system pods found
I0421 14:32:56.804987   81512 system_pods.go:60] "storage-provisioner" [45da68fb-60c6-4c4a-b177-bb5005c417e5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0421 14:32:56.805000   81512 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0421 14:32:57.398563   81512 system_pods.go:58] 5 kube-system pods found
I0421 14:32:57.398599   81512 system_pods.go:60] "etcd-minikube" [d633a797-9914-474d-a692-4a80dddbd6a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0421 14:32:57.398611   81512 system_pods.go:60] "kube-apiserver-minikube" [4c4d0f21-99b6-4cfb-bc1a-7889f1ea184b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0421 14:32:57.398627   81512 system_pods.go:60] "kube-controller-manager-minikube" [d681ff4c-6a25-461c-9b35-f346383bb9d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0421 14:32:57.398645   81512 system_pods.go:60] "kube-scheduler-minikube" [5edaef8b-b9a3-4b23-91e9-34f6125784a9] Pending
I0421 14:32:57.398653   81512 system_pods.go:60] "storage-provisioner" [45da68fb-60c6-4c4a-b177-bb5005c417e5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0421 14:32:57.398661   81512 system_pods.go:73] duration metric: took 2.170978083s to wait for pod list to return data ...
I0421 14:32:57.398671   81512 kubeadm.go:543] duration metric: took 2.542974709s to wait for : map[apiserver:true system_pods:true] ...
I0421 14:32:57.398702   81512 node_conditions.go:102] verifying NodePressure condition ...
I0421 14:32:57.402665   81512 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0421 14:32:57.402687   81512 node_conditions.go:123] node cpu capacity is 4
I0421 14:32:57.402704   81512 node_conditions.go:105] duration metric: took 3.995208ms to run NodePressure ...
I0421 14:32:57.402716   81512 start.go:205] waiting for startup goroutines ...
I0421 14:32:57.500488   81512 start.go:460] kubectl: 1.19.7, cluster: 1.20.2 (minor skew: 1)
I0421 14:32:57.519094   81512 out.go:157] πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

real    1m34.457s
user    0m8.217s
sys 0m3.853s
medya-macbookpro:Downloads medya$ 
medyagh commented 3 years ago

Here is the output of docker image inspect on the Apple M1:

$ docker image inspect gcr.io/k8s-minikube/kicbase:v0.0.20
[
    {
        "Id": "sha256:c6f4fc187bc15575face2a1d7ac05431f861a21ead31664e890c346c57bc1997",
        "RepoTags": [
            "gcr.io/k8s-minikube/kicbase:v0.0.20"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "buildkit.dockerfile.v0",
        "Created": "2021-04-09T19:15:51.394149587Z",
        "Container": "",
        "ContainerConfig": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": null,
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "DockerVersion": "",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "22/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=docker"
            ],
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": null,
            "StopSignal": "SIGRTMIN+3"
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 1085338534,
        "VirtualSize": 1085338534,
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/7fba5576cc1205b8f433eb7e0c3e1d040f43c4d7751d2057e91d54b33f272739/diff:/var/lib/docker/overlay2/a8c96edf7e901cf344c02fa592b679da9190e776a240a3b7c85872fa3e68dba2/diff:/var/lib/docker/overlay2/a61016779559b40593742faaa60e0bab04c885da9fd9f65b1ffb11248d44dfae/diff:/var/lib/docker/overlay2/ada664e6df157538887b53c49b2554a83d8e4ac6e4ec3e939e939ecfdab71c7e/diff:/var/lib/docker/overlay2/bc44bf64d6590af8fc4f535922d503f4fe07ed410cb0e42f662b681d81953b27/diff:/var/lib/docker/overlay2/d53aa76b373c987268009d4b4f2bb34f162d45b0e2494853f4be4da745814cbd/diff:/var/lib/docker/overlay2/6060d42efd99d65a9345f56f2039d8a82555030841f7b1dfafd49002dae62957/diff:/var/lib/docker/overlay2/556bf52fabab3006f60d8d224026074ce03a04060ee285f6c60ad0d7a55da22e/diff:/var/lib/docker/overlay2/88c55dc1f1db4483a8722743ff96e0fe139f60eec0ccbaa73a015c743225c251/diff:/var/lib/docker/overlay2/7ae39d2809cd08d5b5367f3498206e4791e976588efd3aa66e7df83fd888314a/diff:/var/lib/docker/overlay2/626640f0a5a343cb1d74b53ffebb6fa76a0ba257f99cb6a3f224a3c7c6228696/diff:/var/lib/docker/overlay2/c0c2f10ee4e3cf7d209d104540bedbffc662aa804dfae6e9cb8549ff9508694f/diff:/var/lib/docker/overlay2/c33e26232ab794665651866c5dd496ba996d75a6b04b5dcfd59b1ea20c87e56b/diff:/var/lib/docker/overlay2/2d039e9c638fb89d3670e06c812fb3f2ed21ab7efcd2c074572d14115b55a04c/diff:/var/lib/docker/overlay2/31a903fd738ae7b8f49cf5b86c303a117e9bb43c4a6f07439c37aec042051d30/diff:/var/lib/docker/overlay2/1ae9479dc47e7d8d5bc173752a7e8fbb3206463dbadd3ab197194542d617146a/diff:/var/lib/docker/overlay2/2e70d08538c0e07818d9cf5ee7d1b62988c27f442a600177c3b955b21a9f1ec2/diff:/var/lib/docker/overlay2/35a6d75483a536730820fb495055e5715f6dfe9ce82bbbf2f5c96358a886c89d/diff:/var/lib/docker/overlay2/7196b1cacdd1cbaf9b606aec269eec05527eca6635bbc39147abc65361acd9d4/diff:/var/lib/docker/overlay2/a5cf97992669a0a7ab11c926f4f2cec6929d0df12ab204d78fad81a340b4ffa8/diff:/var/lib/docker/overlay2/149c4cc0694df96c7db16d2561586855babfb8aecf64bded76f0927832088a14/diff:/var/lib/docker/overlay2/af03e84ab6d91147e854a18e9a3b9a88e5cedf1379874a8bdd1fe5b0c49859b7/diff:/var/lib/docker/overlay2/b90ed9a56337388a5059642831c2b837a4e6a1ddc8d85734a457ca4663dd44f4/diff:/var/lib/docker/overlay2/165818c70fd8be3d382ed39346619d17ab30e2b41f69c76d9a1d056b1ce83fd4/diff:/var/lib/docker/overlay2/429ca5c08c28428872c5890585b74fbfd3adc40268c8d8c86709892e317bfffd/diff:/var/lib/docker/overlay2/628db4aa7fa5b3de5990ba1344cce8216f836f6875d67812e4f7304e943d27a5/diff:/var/lib/docker/overlay2/d4eabcf2155e7007f3851b9e65ed17dfa174130891501e528f7c478457550c8c/diff:/var/lib/docker/overlay2/cd1cb30fd2a275a0b6f4a728baebfbbe4a54cddbd26de66457edef9fcd10c4cf/diff:/var/lib/docker/overlay2/ac92a1ae206b0e071cc1f41b7e3e0d693bc59118eef8f77c9718606390d94bfd/diff:/var/lib/docker/overlay2/341881bb418ad7a953a47f22fe488c981b02f3940e3d03bfa3b31d7661e10042/diff:/var/lib/docker/overlay2/13e334357d143c489f48b1e120d64a5369e0b7f6b14883f72322a7a5952e619c/diff:/var/lib/docker/overlay2/cc68a87e8796838d763c48d5f7e03d8f44f6baa857cca38025ebd593c576e42c/diff:/var/lib/docker/overlay2/343fa835becc2bbcdfd5b3187c3ff8c413534d2a8e299cfb3a404268da869ef1/diff:/var/lib/docker/overlay2/c252451b0fb430ff5e8e579e4ceb318a9d9c27ff44e820e84097e0fa63fe40ea/diff:/var/lib/docker/overlay2/0e689b8e677e0f672b30e40b563e1b9c5e22f0688af94bb6bd96569e68512ca3/diff",
                "MergedDir": "/var/lib/docker/overlay2/46ba984ad04516c2ab4713953ed8971f2dafa9deea06860385d82927f6d3b27a/merged",
                "UpperDir": "/var/lib/docker/overlay2/46ba984ad04516c2ab4713953ed8971f2dafa9deea06860385d82927f6d3b27a/diff",
                "WorkDir": "/var/lib/docker/overlay2/46ba984ad04516c2ab4713953ed8971f2dafa9deea06860385d82927f6d3b27a/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:9f32931c9d28f10104a8eb1330954ba90e76d92b02c5256521ba864feec14009",
                "sha256:dbf2c0f42a39b60301f6d3936f7f8adb59bb97d31ec11cc4a049ce81155fef89",
                "sha256:02473afd360bd5391fa51b6e7849ce88732ae29f50f3630c3551f528eba66d1e",
                "sha256:821a02b36140a76665ab12d388b4f51fb4bb445b25da16b5e0e13bf20ca738b2",
                "sha256:59253cfd8184f8dc73b447568faf37d0c8e6a22f1e14de85291ff11e8c610d05",
                "sha256:91d6c14923bf0b01bf98d928a59b0a85a995038b3a70ff610444ab673baaaa8f",
                "sha256:ac1e953b3c974540963b13c88c9d3fbb5d741885f1bdac43988e592fcbbe26af",
                "sha256:2aaf3bcdf662d07ce2f549c870d541e05984f523e72580b94bba24ae204e2325",
                "sha256:a0bdd760c1f271815ddf80943649e2585903b31901041f2a87bdf966eabe7d90",
                "sha256:4ae40e4d8dd18df73a132a1322e87b51dc701f87a701d025ed7e9f1dcd65eeb3",
                "sha256:49d0bbc4901845f15d67f5ff375aceeda77f9819dd7d23ac72d67c13a3272261",
                "sha256:06f71822f920913612fb68c13b6a7f5643f5a97c51d3aa9e1b50754c3e68bf1a",
                "sha256:f9c9a395abc0338da44ace41c1c584c1d1b926a4fb9c2bb4536fdc46e8ad9053",
                "sha256:ce5a449366a27170784d9ba7aa8392c1c0a2795dfbe35f9df41fd8ebbe10bdcb",
                "sha256:a166bb0e79c4f7fcc0289ed701e773177f1ad43a24597d8c795e5076ca967c66",
                "sha256:8384712ce9ce08c5cc159e11558bea9c781cdcb6edb688c685f9df682040edca",
                "sha256:d484232add1b43d4660917f2d717226977d50d7238467699b9d84fc1f366466f",
                "sha256:100e3a569b3f3bece3dc1d9b99b9a6658185e5de593e428ec0593472b5403dc9",
                "sha256:38d142688e1fc52f7a2d6f22690ac1525853457c8be37e32a4471b8784336fbb",
                "sha256:a5770b3399e8288b295b4c79d55329bd5069f7a0fa897ba33cb650085f1c3b4b",
                "sha256:a57923eedd30a6ce87942392f32d9b5ae6f97857c034cab4bca4c497599130b9",
                "sha256:c85452e055ebbf78fb44e02e177a45dcb4dde64ccce7a37903ecfe2b79d3624f",
                "sha256:38b21795fadc82fae60e5feecf84984b0a5083378a860805cf845e5008381da9",
                "sha256:304c31969426ef9108661862af223bcb8d087171485bf0426a6012d3c990b4a6",
                "sha256:53b60e261554a69513809a9be7afbda4347885cbfefbf559139ede1f24a85728",
                "sha256:48e040a9c8ca1671d6efa7973162be5aede6263c85b7d49f6c7bc2c69eb048d0",
                "sha256:c12ff145678cb76a934360d57c223958b67542398af11814e0f5d67d0b68e1e0",
                "sha256:1924a4efdc0f807d461ccc23666ce18b4bf36b73994324960d7a9bf8e4975f52",
                "sha256:09b5c6ed3c258289729068387e5e5cb4728ed3d35b2fbe4515dbc196c29160a3",
                "sha256:e3dbfdda44ed1089a01abcbd6872ea7a1a3918a640c16fa9d5367432f9e05397",
                "sha256:facb0d81d5262daaef8845edccbf5d5dc4acf89778a02a8fa3ce329b05373541",
                "sha256:87a50fb2f87c54f0592c737179963f498bcbe883ba00301dce558f9d97592717",
                "sha256:5f2d53ea5d14601171e77ac403f263e377ae6e9064d7f41e43ecf410035fa1f6",
                "sha256:741983d56ded233734ebd4720dbd446ed8b5ba5b8fac8053db5ee1c34e56adc1",
                "sha256:5c43519ef0ce6c1337a2cf1adb70f3075593d4bd60efef42d715b0ac2fe927a4",
                "sha256:8b6e8ab54d37730eac41127b264fdac5f0ebb43997367c4a1674332f5319b271"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
medyagh commented 3 years ago

Here is the SHA on a Mac x86 for comparison:

$ docker image inspect gcr.io/k8s-minikube/kicbase:v0.0.20
[
    {
        "Id": "sha256:c6f4fc187bc15575face2a1d7ac05431f861a21ead31664e890c346c57bc1997",
        "RepoTags": [
            "kicbase/stable:v0.0.20",
            "gcr.io/k8s-minikube/kicbase:v0.0.20"
        ],
        "RepoDigests": [
            "kicbase/stable@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6",
            "gcr.io/k8s-minikube/kicbase@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6"
        ],
        "Parent": "",
        "Comment": "buildkit.dockerfile.v0",
        "Created": "2021-04-09T19:15:51.394149587Z",
        "Container": "",
        "ContainerConfig": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": null,
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "DockerVersion": "",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "22/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=docker"
            ],
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": null,
            "StopSignal": "SIGRTMIN+3"
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 1085338534,
        "VirtualSize": 1085338534,
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/fd97d35c695deb51f10f36213f560932b289c3f4ad0d5270e1e89cf5e279a10b/diff:/var/lib/docker/overlay2/057029d18485071a8e5471dc2a8262986c2df88ab76d807eb1dbbfb5fa342e9d/diff:/var/lib/docker/overlay2/ccaecbebeef7da08f01b1d90c0a9cfd8928c379525e9f3d1156768817d3b816d/diff:/var/lib/docker/overlay2/0f928094c1bd8ee23d2a66d215fe3af5aade4598db69b6d0f97983ea4e9a2e5d/diff:/var/lib/docker/overlay2/9c4dd779a8e136b60875f57a15fd3ce06cb1e10aa3408225e4130ff677da099f/diff:/var/lib/docker/overlay2/4574d89fba47c48c9bad1188f68a04237f6fb26e30af222071d77d835a7b4cc7/diff:/var/lib/docker/overlay2/c03cb752967fbdc12e29a8d933be3361b3b9469fe0be8977e885554bd30faa14/diff:/var/lib/docker/overlay2/62fab9392ba3c00fed21a11d0ec546092f808377f478209d96ea8fe4d797554f/diff:/var/lib/docker/overlay2/bb90a7cbe6f41d2bba088dd595afe05bc464fb0994032c5a532389457739c5cf/diff:/var/lib/docker/overlay2/f44c9be4cdc2ad8d58ab79b94886a523fa67a9f104e27732ce59083f59851b67/diff:/var/lib/docker/overlay2/e70d18197fdd1d2a286d60ec29c1f3f751e3ced138ed0f42e9d7f49443e40565/diff:/var/lib/docker/overlay2/eb86a19ac6abbdbe3529fa7d3ec1e42d64216b0f520a3bffefe97ef125bdbc04/diff:/var/lib/docker/overlay2/1f80e7d7259f4bbb1f0b8e4c80eaa1487d0a494a997ad6be40d880d03c45cbdc/diff:/var/lib/docker/overlay2/88e7023f7c2ce9ef6024dc933966e61380986b7c7fc0eb2ec175199bf3676d30/diff:/var/lib/docker/overlay2/cc3cea749cb7d7b624d66f3234e1fd51998f545f72c6851c1c3603894a5c07b3/diff:/var/lib/docker/overlay2/51f339120c217db8d8a04da0136429171e7ced9b1c39151ea2b069be2692b778/diff:/var/lib/docker/overlay2/2cf5cacdc19d50ffd0708acffe9e74722448b10003575df8008117b50294a3aa/diff:/var/lib/docker/overlay2/2779ec9116f6c2f0a88865ea0383f4933ef76661dfc41e381ab15a6f74c03767/diff:/var/lib/docker/overlay2/96bdf8545284dee53e439bc7501ba382680e14a4a2b334ab65e4326d7756ed9b/diff:/var/lib/docker/overlay2/71cb9694f629d14328c1953c668e08853ab8c483f63bcecd38910601390c23f2/diff:/var/lib/docker/overlay2/8661d1824657b6bb2e4c3acb10007fe53c240b8101593049dc28868dd515654c/diff:/var/lib/docker/overlay2/2d12cdac7027f6a5b3523a5042ced9dc6ec9152cbacb05eefc3f3ba21fb2188c/diff:/var/lib/docker/overlay2/115972c93aa0c78812f1287a24b64bd8a3d7688640656cbbb80a00fd2d89682a/diff:/var/lib/docker/overlay2/e11197017c740331daf89cdee90bcaa12fcb2629ffe97f7f153664641107d04c/diff:/var/lib/docker/overlay2/a3f0bfb0ee18b63d2f0fc2e77cf68a2546622ac9b470ea9b8f02c818f0573b4c/diff:/var/lib/docker/overlay2/8e6365535eb55680d057450b5ef455df60b6143645c48a1fff75b5772df5acd1/diff:/var/lib/docker/overlay2/3d0ddf371897157fcdb8bb8c675c88dc829ea0f6b878745c6582875157480bda/diff:/var/lib/docker/overlay2/b7e251b37013abae2264688ba252449401d454afc4dd2abb0cfc70239e4c9b2e/diff:/var/lib/docker/overlay2/b2d24eb9f11cf2d083511ae04a4893957d288fc2b7d1bba6cb0cbaf64d879bfd/diff:/var/lib/docker/overlay2/03bac52bfe7bf47ee63c98de6c8cba005a598c87dbe1e83ed9e7c1e25d332b90/diff:/var/lib/docker/overlay2/f682b65c58d46cd6a265477caf894a8c8e0c4240dd0106fbcdec0904189c2500/diff:/var/lib/docker/overlay2/241f105a0b0a05990106b709d57c9ef54bd83091be405cce8c37a16a0da5ed20/diff:/var/lib/docker/overlay2/071b18805f81f5760e7f24e089c70cb7ed5eec97b36c72c1ada77fc96a303590/diff:/var/lib/docker/overlay2/574f19b27a94f6c0c560064921cb1f553582ee4966335171c9364ebe21eb93dc/diff:/var/lib/docker/overlay2/3742fb6ef103f86b0a40c865a06e92c333ed00b561cb194222eabc8b765fbfec/diff",
                "MergedDir": "/var/lib/docker/overlay2/550a7292d03dd8f2c3bb19c7c139547840a1fa126cc3664fca08cead742fedf1/merged",
                "UpperDir": "/var/lib/docker/overlay2/550a7292d03dd8f2c3bb19c7c139547840a1fa126cc3664fca08cead742fedf1/diff",
                "WorkDir": "/var/lib/docker/overlay2/550a7292d03dd8f2c3bb19c7c139547840a1fa126cc3664fca08cead742fedf1/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:9f32931c9d28f10104a8eb1330954ba90e76d92b02c5256521ba864feec14009",
                "sha256:dbf2c0f42a39b60301f6d3936f7f8adb59bb97d31ec11cc4a049ce81155fef89",
                "sha256:02473afd360bd5391fa51b6e7849ce88732ae29f50f3630c3551f528eba66d1e",
                "sha256:821a02b36140a76665ab12d388b4f51fb4bb445b25da16b5e0e13bf20ca738b2",
                "sha256:59253cfd8184f8dc73b447568faf37d0c8e6a22f1e14de85291ff11e8c610d05",
                "sha256:91d6c14923bf0b01bf98d928a59b0a85a995038b3a70ff610444ab673baaaa8f",
                "sha256:ac1e953b3c974540963b13c88c9d3fbb5d741885f1bdac43988e592fcbbe26af",
                "sha256:2aaf3bcdf662d07ce2f549c870d541e05984f523e72580b94bba24ae204e2325",
                "sha256:a0bdd760c1f271815ddf80943649e2585903b31901041f2a87bdf966eabe7d90",
                "sha256:4ae40e4d8dd18df73a132a1322e87b51dc701f87a701d025ed7e9f1dcd65eeb3",
                "sha256:49d0bbc4901845f15d67f5ff375aceeda77f9819dd7d23ac72d67c13a3272261",
                "sha256:06f71822f920913612fb68c13b6a7f5643f5a97c51d3aa9e1b50754c3e68bf1a",
                "sha256:f9c9a395abc0338da44ace41c1c584c1d1b926a4fb9c2bb4536fdc46e8ad9053",
                "sha256:ce5a449366a27170784d9ba7aa8392c1c0a2795dfbe35f9df41fd8ebbe10bdcb",
                "sha256:a166bb0e79c4f7fcc0289ed701e773177f1ad43a24597d8c795e5076ca967c66",
                "sha256:8384712ce9ce08c5cc159e11558bea9c781cdcb6edb688c685f9df682040edca",
                "sha256:d484232add1b43d4660917f2d717226977d50d7238467699b9d84fc1f366466f",
                "sha256:100e3a569b3f3bece3dc1d9b99b9a6658185e5de593e428ec0593472b5403dc9",
                "sha256:38d142688e1fc52f7a2d6f22690ac1525853457c8be37e32a4471b8784336fbb",
                "sha256:a5770b3399e8288b295b4c79d55329bd5069f7a0fa897ba33cb650085f1c3b4b",
                "sha256:a57923eedd30a6ce87942392f32d9b5ae6f97857c034cab4bca4c497599130b9",
                "sha256:c85452e055ebbf78fb44e02e177a45dcb4dde64ccce7a37903ecfe2b79d3624f",
                "sha256:38b21795fadc82fae60e5feecf84984b0a5083378a860805cf845e5008381da9",
                "sha256:304c31969426ef9108661862af223bcb8d087171485bf0426a6012d3c990b4a6",
                "sha256:53b60e261554a69513809a9be7afbda4347885cbfefbf559139ede1f24a85728",
                "sha256:48e040a9c8ca1671d6efa7973162be5aede6263c85b7d49f6c7bc2c69eb048d0",
                "sha256:c12ff145678cb76a934360d57c223958b67542398af11814e0f5d67d0b68e1e0",
                "sha256:1924a4efdc0f807d461ccc23666ce18b4bf36b73994324960d7a9bf8e4975f52",
                "sha256:09b5c6ed3c258289729068387e5e5cb4728ed3d35b2fbe4515dbc196c29160a3",
                "sha256:e3dbfdda44ed1089a01abcbd6872ea7a1a3918a640c16fa9d5367432f9e05397",
                "sha256:facb0d81d5262daaef8845edccbf5d5dc4acf89778a02a8fa3ce329b05373541",
                "sha256:87a50fb2f87c54f0592c737179963f498bcbe883ba00301dce558f9d97592717",
                "sha256:5f2d53ea5d14601171e77ac403f263e377ae6e9064d7f41e43ecf410035fa1f6",
                "sha256:741983d56ded233734ebd4720dbd446ed8b5ba5b8fac8053db5ee1c34e56adc1",
                "sha256:5c43519ef0ce6c1337a2cf1adb70f3075593d4bd60efef42d715b0ac2fe927a4",
                "sha256:8b6e8ab54d37730eac41127b264fdac5f0ebb43997367c4a1674332f5319b271"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
ilya-zuyev commented 3 years ago

It looks like the issue is related to how we import kicbase inside minikube:

ilyaz@mac --- g/minikube β€Ήmaster* ?β€Ί docker image ls --digests | grep kic                                                                                                                                             1 ↡
ilyaz@mac --- g/minikube β€Ήmaster* ?β€Ί m start                                                                                                                                                                          1 ↡
* minikube v1.19.0 on Darwin 11.2 (arm64)
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
    > gcr.io/k8s-minikube/kicbase...: 357.67 MiB / 357.67 MiB  100.00% 3.87 MiB
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

ilyaz@mac --- g/minikube β€Ήmaster* ?β€Ί Β» docker image ls --digests | grep kic
gcr.io/k8s-minikube/kicbase   <none>    sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6   c081d4d2d545   12 days ago   995MB
gcr.io/k8s-minikube/kicbase   v0.0.20   <none>                                                                    c6f4fc187bc1   12 days ago   1.09GB

Somehow we have two images, one with the tag and one with the digest, and in each case the other attribute is empty. In image.ExistsImageInDaemon we just run docker images --format "{{.Repository}}:{{.Tag}}@{{.Digest}}" and grep for both the tag and the digest together, so we get nothing.
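For reference, a minimal sketch of a check that tolerates this split by matching the tag and the digest independently instead of the combined repo:tag@digest string (the helper name and layout here are hypothetical, not the actual minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageInDaemon reports whether the daemon lists an image matching either
// the repo:tag or the repo@digest, since docker can print them on
// separate rows with the other attribute set to <none>.
func imageInDaemon(repoTag, repoDigest string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format",
		"{{.Repository}}:{{.Tag}}@{{.Digest}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(out), "\n") {
		// Rows can look like "gcr.io/k8s-minikube/kicbase:v0.0.20@<none>"
		// or "gcr.io/k8s-minikube/kicbase:<none>@sha256:0250dab3...".
		if strings.Contains(line, repoTag) ||
			(repoDigest != "" && strings.Contains(line, repoDigest)) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	found, err := imageInDaemon(
		"gcr.io/k8s-minikube/kicbase:v0.0.20",
		"gcr.io/k8s-minikube/kicbase@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6")
	fmt.Println(found, err)
}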

On the other hand, if we just run docker pull, the result is different:

ilyaz@mac --- g/minikube β€Ήmaster* ?β€Ί docker pull gcr.io/k8s-minikube/kicbase:v0.0.20                                                                                                                                  1 ↡
v0.0.20: Pulling from k8s-minikube/kicbase
...
Digest: sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6
Status: Downloaded newer image for gcr.io/k8s-minikube/kicbase:v0.0.20
gcr.io/k8s-minikube/kicbase:v0.0.20

ilyaz@mac --- g/minikube β€Ήmaster* ?β€Ί Β» docker image ls --digests | grep kic           
gcr.io/k8s-minikube/kicbase   v0.0.20   sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6   c081d4d2d545   12 days ago   995MB
ilya-zuyev commented 3 years ago

not reproducible on aarch64 linux:

jenkins@dfw2-c1:~/src/g/minikube$ out/minikube start
* minikube v1.19.0 on Ubuntu 20.04 (arm64)
* Automatically selected the docker driver. Other choices: ssh, none
* Starting control plane node minikube in cluster minikube
* Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v10-v1...: 514.95 MiB / 514.95 MiB  100.00% 95.28 Mi
* Creating docker container (CPUs=2, Memory=32100MB) ...
* Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

jenkins@dfw2-c1:~/src/g/minikube$ docker image ls --digests | grep kic 
gcr.io/k8s-minikube/kicbase        v0.0.20   sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6   c081d4d2d545   12 days ago   995MB

jenkins@dfw2-c1:~/src/g/minikube$ uname -a
Linux dfw2-c1.large.arm.xda-01 5.4.0-40-generic #44-Ubuntu SMP Mon Jun 22 23:59:48 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
jenkins@dfw2-c1:~/src/g/minikube$ 
afbjorklund commented 3 years ago

Hmm, in the current go-containerregistry hacks for daemon.Write, we first do a load and then do a pull.

I wonder if we somehow ended up using the wrong platform, because the tag ended up on the wrong image.

REPOSITORY                    TAG       IMAGE ID       CREATED       SIZE
gcr.io/k8s-minikube/kicbase   <none>    c081d4d2d545   12 days ago   995MB
gcr.io/k8s-minikube/kicbase   v0.0.20   c6f4fc187bc1   12 days ago   1.09GB

Apparently it ended up tagging the amd64 image... We might have to specify arm64 explicitly on the Mac?

       pull, err := cli.ImagePull(context.Background(), ref.Name(), types.ImagePullOptions{})
// ImagePullOptions holds information to pull images.
type ImagePullOptions struct {
        All           bool
        RegistryAuth  string // RegistryAuth is the base64 encoded credentials for the registry
        PrivilegeFunc RequestPrivilegeFunc
        Platform      string
}
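For example, a minimal sketch of setting that field from the host architecture, using the Docker client SDK (the surrounding wiring is an assumption, not minikube's actual PullImage):

package main

import (
	"context"
	"fmt"
	"io"
	"os"
	"runtime"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// e.g. "linux/arm64" on an M1 Mac; without this the daemon may
	// resolve the manifest list to linux/amd64.
	platform := fmt.Sprintf("linux/%s", runtime.GOARCH)
	rc, err := cli.ImagePull(context.Background(),
		"gcr.io/k8s-minikube/kicbase:v0.0.20",
		types.ImagePullOptions{Platform: platform})
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // pull progress streams as JSON messages
}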
afbjorklund commented 3 years ago

not reproducible on aarch64 linux

Are you sure? I thought I saw something similar:

βœ…  Download complete!
ubuntu@ubuntu:~$ docker images | grep kicbase
gcr.io/k8s-minikube/kicbase                    <none>    65b6b30608a8   7 weeks ago   984MB
gcr.io/k8s-minikube/kicbase                    v0.0.18   a776c544501a   7 weeks ago   1.08GB

Reproduced after upgrading to master:

ubuntu@ubuntu:~$ minikube version
minikube version: v1.19.0
commit: 24a759fb51f273fd9f8691adac33ad87ebca621d
ubuntu@ubuntu:~$ minikube start --driver=docker --download-only --preload=false
ubuntu@ubuntu:~$ docker images gcr.io/k8s-minikube/kicbase
REPOSITORY                    TAG       IMAGE ID       CREATED       SIZE
gcr.io/k8s-minikube/kicbase   <none>    c081d4d2d545   12 days ago   995MB
gcr.io/k8s-minikube/kicbase   v0.0.20   c6f4fc187bc1   12 days ago   1.09GB
gcr.io/k8s-minikube/kicbase   <none>    65b6b30608a8   7 weeks ago   984MB
gcr.io/k8s-minikube/kicbase   v0.0.18   a776c544501a   7 weeks ago   1.08GB
ubuntu@ubuntu:~$ arch
aarch64
ilya-zuyev commented 3 years ago

Hmm, you're right. I just realized that minikube delete --prune doesn't remove the kicbase image, and I had it in my local Docker before testing.

Yeah, it's the same on Linux after minikube start --driver=docker:

jenkins@dfw2-c1:~/src/g/minikube$ docker images gcr.io/k8s-minikube/kicbase  --digests
REPOSITORY                    TAG       DIGEST                                                                    IMAGE ID       CREATED       SIZE
gcr.io/k8s-minikube/kicbase   <none>    sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6   c081d4d2d545   13 days ago   995MB
gcr.io/k8s-minikube/kicbase   v0.0.20   <none>                                                                    c6f4fc187bc1   13 days ago   1.09GB
afbjorklund commented 3 years ago

Yup. I tested with crane, and if you don't add the --platform argument, it always pulls amd64.

ubuntu@ubuntu:~$ ./crane pull busybox busybox.tar
ubuntu@ubuntu:~$ docker load -i busybox.tar
67f770da229b: Loading layer [==================================================>]  764.7kB/764.7kB
Loaded image: busybox:latest
ubuntu@ubuntu:~$ docker images | grep busybox
busybox                                        latest    388056c9a683   2 weeks ago   1.23MB
ubuntu@ubuntu:~$ docker run -it busybox
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
standard_init_linux.go:219: exec user process caused: exec format error

So we need to add the Platform to ImagePullOptions in the go-containerregistry PullImage.

ubuntu@ubuntu:~$ ./crane pull --platform linux/arm64 busybox busybox.tar
ubuntu@ubuntu:~$ docker load -i busybox.tar 
6ab3b9191875: Loading layer [==================================================>]  817.2kB/817.2kB
The image busybox:latest already exists, renaming the old one with ID sha256:388056c9a6838deea3792e8f00705b35b439cf57b3c9c2634fb4e95cfc896de6 to empty string
Loaded image: busybox:latest
ubuntu@ubuntu:~$ docker run -it busybox
/ #
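The load path needs the same treatment: when fetching the image with go-containerregistry before loading it into the daemon, the platform has to be requested explicitly, or the manifest list resolves to amd64. A minimal sketch using the upstream go-containerregistry API (the exact integration point in minikube is an assumption):

package main

import (
	"fmt"
	"runtime"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	ref, err := name.ParseReference("gcr.io/k8s-minikube/kicbase:v0.0.20")
	if err != nil {
		panic(err)
	}
	// Resolve the manifest list to the entry for the host architecture,
	// mirroring what `crane pull --platform linux/arm64` does.
	img, err := remote.Image(ref, remote.WithPlatform(v1.Platform{
		OS:           "linux",
		Architecture: runtime.GOARCH,
	}))
	if err != nil {
		panic(err)
	}
	tag, err := name.NewTag("gcr.io/k8s-minikube/kicbase:v0.0.20")
	if err != nil {
		panic(err)
	}
	// Write the arch-matching image into the local Docker daemon.
	resp, err := daemon.Write(tag, img)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}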