Closed gupf0719 closed 4 months ago
Are you in China by any chance? If so, can you provide the output of `minikube start --alsologtostderr`? I believe we are supposed to fall back to fetching this image from GitHub.

@medyagh - is that description of the fallback behavior accurate?
> Are you in China by any chance? If so, can you provide the output of `minikube start --alsologtostderr`? I believe we are supposed to fall back to fetching this image from GitHub. @medyagh - is that description of the fallback behavior accurate?
How do I fall back to GitHub?
Our code should automatically fall back to GitHub once the call to GCR fails, which is why we wanted to see more detailed output from `minikube start`.

Try running `minikube start --alsologtostderr` and paste the output here so we can better debug your issue.
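The intended behavior is roughly "try each registry in order, and only accept a payload whose digest matches the pinned sha256". A minimal sketch of that logic, assuming the source order GCR → Docker Hub (`kicbase/stable`) → GitHub Packages; the helper names and the `fetch` callback here are illustrative, not minikube's actual code:

```python
import hashlib

# Image refs in the order minikube tries them; the fallback list and
# ordering here are an assumption based on observed log output.
FALLBACK_REFS = [
    "gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3",
    "kicbase/stable:v0.0.12-snapshot3",
    "docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3",
]


def verify_sha256(data: bytes, want: str) -> None:
    """Raise if the payload's digest does not match the pinned digest."""
    got = hashlib.sha256(data).hexdigest()
    if got != want:
        raise ValueError(f'error verifying sha256 checksum; got "{got}", want "{want}"')


def pull_with_fallback(fetch, want_digest: str) -> str:
    """Try each ref in order; return the first ref whose payload verifies."""
    last_err = None
    for ref in FALLBACK_REFS:
        try:
            verify_sha256(fetch(ref), want_digest)
            return ref
        except Exception as err:  # checksum mismatch, auth failure, timeout...
            last_err = err  # a real client would log "will try fallback image"
    raise RuntimeError(
        f"failed to download kic base image or any fallback image: {last_err}"
    )
```

The key point: a proxy or middlebox that corrupts the GCR download makes the first source fail its digest check, and the next source is tried automatically; only when every source fails (or returns corrupted bytes) do you see the "failed to download kic base image or any fallback image" error.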
Hi, same problem:
```
I0929 14:27:58.941785 6495 start.go:112] virtualization: vbox host
I0929 14:27:58.953340 6495 out.go:109] 😄 minikube v1.13.1 on Ubuntu 20.04
😄 minikube v1.13.1 on Ubuntu 20.04
I0929 14:27:58.955436 6495 notify.go:126] Checking for updates...
I0929 14:27:58.961040 6495 driver.go:287] Setting default libvirt URI to qemu:///system
I0929 14:27:59.129774 6495 docker.go:98] docker version: linux-19.03.12
I0929 14:27:59.131274 6495 docker.go:130] overlay module found
I0929 14:27:59.141495 6495 out.go:109] ✨ Using the docker driver based on user configuration
✨ Using the docker driver based on user configuration
I0929 14:27:59.141614 6495 start.go:246] selected driver: docker
I0929 14:27:59.141622 6495 start.go:653] validating driver "docker" against
preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 65.80 MiB
I0929 14:28:06.171994 6495 cache.go:156] failed to download gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b, will try fallback image if available: writing daemon image: error loading image: error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/load?quiet=0": error verifying sha256 checksum; got "79226535bd6445d1af476a2814a9b5e173a2356f3e18618e6572fbc3c4f03fed", want "d51af753c3d3a984351448ec0f85ddafc580680fd6dfce9f4b09fdb367ee1e3e"
I0929 14:28:06.185821 6495 cache.go:142] Downloading kicbase/stable:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b to local daemon
I0929 14:28:06.185859 6495 image.go:140] Writing kicbase/stable:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b to local daemon
preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 206.23 MiB
I0929 14:28:15.342240 6495 cache.go:156] failed to download kicbase/stable:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b, will try fallback image if available: writing daemon image: error loading image: error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/load?quiet=0": error verifying sha256 checksum; got "d10763921e5d5f9b4f605e4edea70be5ca2c2b3c5b93f6a26bd74773bf88c0b4", want "d51af753c3d3a984351448ec0f85ddafc580680fd6dfce9f4b09fdb367ee1e3e"
I0929 14:28:15.342902 6495 cache.go:142] Downloading docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3 to local daemon
I0929 14:28:15.343010 6495 image.go:140] Writing docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3 to local daemon
preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 211.39 MiB
I0929 14:28:15.740807 6495 cache.go:156] failed to download docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.12-snapshot3, will try fallback image if available: GET https://docker.pkg.github.com/v2/kubernetes/minikube/kicbase/manifests/v0.0.12-snapshot3: UNAUTHORIZED: GitHub Docker Registry needs login
preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
I0929 14:28:32.945472 6495 preload.go:160] saving checksum for preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 ...
I0929 14:28:33.101047 6495 preload.go:177] verifying checksumm of /home/ualter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 ...
W0929 14:28:34.053690 6495 cache.go:59] Error downloading preloaded artifacts will continue without preload: verify: checksum of /home/ualter/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 does not match remote checksum (k�S�M��FA�"Ѭ� != �`l����yڑ�P)
I0929 14:28:34.055000 6495 cache.go:92] acquiring lock: {Name:mk142d6e6766d5773e72ebe4fa783981952620f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:
} I0929 14:28:34.055359 6495 image.go:168] retrieving image: kubernetesui/metrics-scraper:v1.0.4 I0929 14:28:34.055497 6495 cache.go:92] acquiring lock: {Name:mkb4247b6deb4d1856754559ae1afec63570c224 Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.055605 6495 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.19.2 I0929 14:28:34.055703 6495 cache.go:92] acquiring lock: {Name:mk6419ef7ea4849fa8d951745dce5ad75e5e7312 Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.055791 6495 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.19.2 I0929 14:28:34.055875 6495 cache.go:92] acquiring lock: {Name:mk56697f8901446b111c41d93edee9511dcd07c2 Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.055938 6495 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.19.2 I0929 14:28:34.056010 6495 cache.go:92] acquiring lock: {Name:mkd237761f10adb18e835ac49f01443c819cbedf Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.056072 6495 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.19.2 I0929 14:28:34.056139 6495 cache.go:92] acquiring lock: {Name:mke174c42dcc2efe3e2d7e8140212b2f3c3a01dd Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.056198 6495 image.go:168] retrieving image: k8s.gcr.io/coredns:1.7.0 I0929 14:28:34.056295 6495 cache.go:92] acquiring lock: {Name:mk905b0f1eb01dba8a6eb562f1336d713c200942 Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.056356 6495 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.13-0 I0929 14:28:34.056426 6495 cache.go:92] acquiring lock: {Name:mk7cb385eb8eb68dce10e2912658fa0218d7cd6c Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.056482 6495 image.go:168] retrieving image: k8s.gcr.io/pause:3.2 I0929 14:28:34.056558 6495 cache.go:92] acquiring lock: {Name:mk0e3a0d8d72f2e09ac4fee7c55e9f67f6633281 Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.056660 6495 image.go:168] retrieving image: 
gcr.io/k8s-minikube/storage-provisioner:v3 I0929 14:28:34.057279 6495 cache.go:92] acquiring lock: {Name:mke90538c5b5015184ab2393d886306ce17856fb Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.057348 6495 image.go:168] retrieving image: kubernetesui/dashboard:v2.0.3 I0929 14:28:34.058354 6495 profile.go:150] Saving config to /home/ualter/.minikube/profiles/minikube/config.json ... I0929 14:28:34.058463 6495 lock.go:35] WriteFile acquiring /home/ualter/.minikube/profiles/minikube/config.json: {Name:mk20d4f963d71913b43e5bbdb6c1b7c9475f4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel: } E0929 14:28:34.058585 6495 cache.go:177] Error downloading kic artifacts: failed to download kic base image or any fallback image I0929 14:28:34.059663 6495 cache.go:182] Successfully downloaded all kic artifacts I0929 14:28:34.059773 6495 start.go:314] acquiring machines lock for minikube: {Name:mka214439089e40cd899813bbf642c19b1d410f6 Clock:{} Delay:500ms Timeout:10m0s Cancel: } I0929 14:28:34.059821 6495 start.go:318] acquired machines lock for "minikube" in 36.034µs I0929 14:28:34.059840 6495 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:2400 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] 
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true} I0929 14:28:34.059884 6495 start.go:127] createHost starting for "" (driver="docker") I0929 14:28:34.188366 6495 out.go:109] 🔥 Creating docker container (CPUs=2, Memory=2400MB) ... 🔥 Creating docker container (CPUs=2, Memory=2400MB) ... I0929 14:28:34.188687 6495 start.go:164] libmachine.API.Create for "minikube" (driver="docker") I0929 14:28:34.188871 6495 client.go:165] LocalClient.Create starting I0929 14:28:34.188941 6495 main.go:115] libmachine: Creating CA: /home/ualter/.minikube/certs/ca.pem I0929 14:28:34.060461 6495 image.go:176] daemon lookup for kubernetesui/dashboard:v2.0.3: Error response from daemon: reference does not exist I0929 14:28:34.111179 6495 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v3: Error response from daemon: reference does not exist I0929 14:28:34.111352 6495 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist I0929 14:28:34.111468 6495 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist I0929 14:28:34.111565 6495 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist I0929 14:28:34.111655 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.19.2: Error response from daemon: reference does not exist I0929 14:28:34.111751 6495 image.go:176] daemon lookup for 
k8s.gcr.io/kube-controller-manager:v1.19.2: Error response from daemon: reference does not exist I0929 14:28:34.111838 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.19.2: Error response from daemon: reference does not exist I0929 14:28:34.111922 6495 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.19.2: Error response from daemon: reference does not exist I0929 14:28:34.112006 6495 image.go:176] daemon lookup for kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist I0929 14:28:34.636861 6495 main.go:115] libmachine: Creating client certificate: /home/ualter/.minikube/certs/cert.pem I0929 14:28:34.648799 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v3 I0929 14:28:34.812905 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 I0929 14:28:34.839944 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.19.2 I0929 14:28:34.840416 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2 I0929 14:28:34.840600 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.19.2 I0929 14:28:34.847028 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 I0929 14:28:34.870246 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.19.2 I0929 14:28:34.871577 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.19.2 I0929 14:28:35.173888 6495 cli_runner.go:110] Run: docker ps -a --format {{.Names}} I0929 14:28:35.602818 6495 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0929 14:28:35.649772 6495 cache.go:129] /home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists I0929 14:28:35.650410 6495 cache.go:81] 
cache image "k8s.gcr.io/pause:3.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 1.59398104s I0929 14:28:35.650433 6495 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /home/ualter/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded I0929 14:28:36.215582 6495 oci.go:101] Successfully created a docker volume minikube I0929 14:28:36.216584 6495 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib I0929 14:28:36.893987 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/kubernetesui/dashboard_v2.0.3 I0929 14:28:37.273833 6495 cache.go:81] cache image "k8s.gcr.io/kube-scheduler:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.19.2" took 3.218129806s E0929 14:28:37.274363 6495 cache.go:63] save image to file "k8s.gcr.io/kube-scheduler:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.19.2" failed: write: error verifying sha256 checksum; got "5fcd5f2b9686bb50953f382d1cfc9affd72eb6fc4f47dad7f33b31c3407002e8", want "a84ff2cd01b7f36e94f385564d1f35b2e160c197fa58cfd20373accf17b34b5e" I0929 14:28:38.139280 6495 cache.go:134] opening: /home/ualter/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4 I0929 14:28:38.296904 6495 cache.go:81] cache image "gcr.io/k8s-minikube/storage-provisioner:v3" -> "/home/ualter/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v3" took 4.240347041s E0929 14:28:38.297411 6495 cache.go:63] save image to file "gcr.io/k8s-minikube/storage-provisioner:v3" -> "/home/ualter/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v3" failed: write: unexpected EOF I0929 14:28:38.297534 6495 cache.go:81] cache image "k8s.gcr.io/kube-controller-manager:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.19.2" took 4.241665384s E0929 
14:28:38.297547 6495 cache.go:63] save image to file "k8s.gcr.io/kube-controller-manager:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.19.2" failed: write: Get "https://storage.googleapis.com/eu.artifacts.k8s-artifacts-prod.appspot.com/containers/images/sha256:6611976957bfc0e5b65d5d47e4f32015f2991ce8ed5ed5401ae37b019881fa2c": unexpected EOF I0929 14:28:38.297607 6495 cache.go:81] cache image "k8s.gcr.io/kube-proxy:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.19.2" took 4.242121142s E0929 14:28:38.297617 6495 cache.go:63] save image to file "k8s.gcr.io/kube-proxy:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.19.2" failed: write: unexpected EOF I0929 14:28:38.297653 6495 cache.go:81] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 4.241518812s E0929 14:28:38.297661 6495 cache.go:63] save image to file "k8s.gcr.io/coredns:1.7.0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" failed: write: unexpected EOF I0929 14:28:38.297958 6495 cache.go:81] cache image "k8s.gcr.io/kube-apiserver:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.19.2" took 4.241954662s E0929 14:28:38.300156 6495 cache.go:63] save image to file "k8s.gcr.io/kube-apiserver:v1.19.2" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.19.2" failed: write: unexpected EOF I0929 14:28:38.324046 6495 cache.go:81] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 4.267738272s E0929 14:28:38.338377 6495 cache.go:63] save image to file "k8s.gcr.io/etcd:3.4.13-0" -> "/home/ualter/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" failed: write: unexpected EOF I0929 14:28:46.348295 6495 cache.go:81] cache image "kubernetesui/metrics-scraper:v1.0.4" -> "/home/ualter/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4" took 
12.293304673s E0929 14:28:46.351540 6495 cache.go:63] save image to file "kubernetesui/metrics-scraper:v1.0.4" -> "/home/ualter/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.4" failed: write: error verifying sha256 checksum; got "8ca3615233491836e042e243eeb62bbf93c8fe6c5876b57aadb639cfe77d3adb", want "1f8ea7f93b39dd928d9ce4eb9683058b8aac4434735003fe332c4dde92e3dbd3" I0929 14:28:48.782260 6495 cache.go:81] cache image "kubernetesui/dashboard:v2.0.3" -> "/home/ualter/.minikube/cache/images/kubernetesui/dashboard_v2.0.3" took 14.724984758s E0929 14:28:48.787589 6495 cache.go:63] save image to file "kubernetesui/dashboard:v2.0.3" -> "/home/ualter/.minikube/cache/images/kubernetesui/dashboard_v2.0.3" failed: write: error verifying sha256 checksum; got "3ad3d7dd634b05c78d5fa2543ae68ff6e75fd71339c3264a69b0a0788725557e", want "d5ba0740de2a1168051342cb28dadfd73e356f41134ad7656f4fb4c7995325eb" I0929 14:28:49.491893 6495 cli_runner.go:152] Completed: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: (13.274716988s) I0929 14:28:49.491951 6495 client.go:168] LocalClient.Create took 15.303067591s I0929 14:28:51.498897 6495 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0929 14:28:51.498969 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:28:51.580099 6495 retry.go:30] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:51.871968 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:28:52.061500 6495 retry.go:30] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:52.604832 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:28:52.717989 6495 retry.go:30] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:53.386394 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0929 14:28:53.572423 6495 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube
W0929 14:28:53.572546 6495 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:53.572574 6495 start.go:130] duration metric: createHost completed in 19.51268252s I0929 14:28:53.572582 6495 start.go:81] releasing machines lock for "minikube", held for 19.512753378s W0929 14:28:53.572606 6495 start.go:377] error starting host: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase d51af753c3d3: Pulling fs layer fc878cd0a91c: Pulling fs layer 6154df8ff988: Pulling fs layer fee5db0ff82f: Pulling fs layer 5af1cb370982: Pulling fs layer 6f3edf07f47c: Pulling fs layer fe50ecc0dda0: Pulling fs layer bc07fdd7ade1: Pulling fs layer 9335d5e85dc9: Pulling fs layer 79a32115d2cd: Pulling fs layer ad77e393caaf: Pulling fs layer da3861d3792f: Pulling fs layer 3b8d4e4f5c3c: Pulling fs layer 450ef1e1251c: Pulling fs layer 20ba60eac76f: Pulling fs layer 79ddc9b35b83: Pulling fs layer b46dc25c7350: Pulling fs layer 3d82425d9581: Pulling fs layer 282c83787e4c: Pulling fs layer 6db34ffebc70: Pulling fs layer 4e220af36774: Pulling fs layer a34b4acb4482: Pulling fs layer fd68ba8cf361: Pulling fs layer 2ac166461221: Pulling fs layer 668caf51a011: Pulling fs layer 2b434031e1fa: Pulling fs layer 9c5e658b1181: Pulling fs layer dfcb7e7f8f59: Pulling fs layer 20ba60eac76f: Waiting 79ddc9b35b83: Waiting b46dc25c7350: Waiting 3d82425d9581: Waiting 282c83787e4c: Waiting 6db34ffebc70: Waiting 4e220af36774: Waiting a34b4acb4482: Waiting fd68ba8cf361: Waiting 2ac166461221: Waiting 668caf51a011: Waiting 2b434031e1fa: Waiting 9c5e658b1181: Waiting dfcb7e7f8f59: Waiting fee5db0ff82f: Waiting 5af1cb370982: Waiting 6f3edf07f47c: Waiting fe50ecc0dda0: Waiting bc07fdd7ade1: Waiting 9335d5e85dc9: Waiting ad77e393caaf: Waiting da3861d3792f: Waiting 3b8d4e4f5c3c: Waiting 450ef1e1251c: Waiting 79a32115d2cd: Waiting 6154df8ff988: Verifying Checksum 6154df8ff988: Download complete fc878cd0a91c: Verifying Checksum fc878cd0a91c: Download complete fee5db0ff82f: Verifying Checksum fee5db0ff82f: Download complete 5af1cb370982: Verifying Checksum 5af1cb370982: Download complete d51af753c3d3: Verifying Checksum bc07fdd7ade1: Verifying 
Checksum docker: filesystem layer verification failed for digest sha256:bc07fdd7ade1f4873668f3d205fbb3ea9fc3689cd5497ba23e35a1e4bce697bf. See 'docker run --help'. I0929 14:28:53.573238 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:28:53.683018 6495 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube W0929 14:28:53.683217 6495 out.go:145] 🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase d51af753c3d3: Pulling fs layer fc878cd0a91c: Pulling fs layer 6154df8ff988: Pulling fs layer fee5db0ff82f: Pulling fs layer 5af1cb370982: Pulling fs layer 6f3edf07f47c: Pulling fs layer fe50ecc0dda0: Pulling fs layer bc07fdd7ade1: Pulling fs layer 9335d5e85dc9: Pulling fs layer 79a32115d2cd: Pulling fs layer ad77e393caaf: Pulling fs layer da3861d3792f: Pulling fs layer 3b8d4e4f5c3c: Pulling fs layer 450ef1e1251c: Pulling fs layer 20ba60eac76f: Pulling fs layer 79ddc9b35b83: Pulling fs layer b46dc25c7350: Pulling fs layer 3d82425d9581: Pulling fs layer 282c83787e4c: Pulling fs layer 6db34ffebc70: Pulling fs layer 4e220af36774: Pulling fs layer a34b4acb4482: Pulling fs layer fd68ba8cf361: Pulling fs layer 2ac166461221: Pulling fs layer 668caf51a011: Pulling fs layer 2b434031e1fa: Pulling fs layer 9c5e658b1181: Pulling fs layer dfcb7e7f8f59: Pulling fs layer 20ba60eac76f: Waiting 79ddc9b35b83: Waiting b46dc25c7350: Waiting 3d82425d9581: Waiting 282c83787e4c: Waiting 6db34ffebc70: Waiting 4e220af36774: Waiting a34b4acb4482: Waiting fd68ba8cf361: Waiting 2ac166461221: Waiting 668caf51a011: Waiting 2b434031e1fa: Waiting 9c5e658b1181: Waiting dfcb7e7f8f59: Waiting fee5db0ff82f: Waiting 5af1cb370982: Waiting 6f3edf07f47c: Waiting fe50ecc0dda0: Waiting bc07fdd7ade1: Waiting 9335d5e85dc9: Waiting ad77e393caaf: Waiting da3861d3792f: Waiting 3b8d4e4f5c3c: Waiting 450ef1e1251c: Waiting 79a32115d2cd: Waiting 6154df8ff988: Verifying Checksum 6154df8ff988: Download complete fc878cd0a91c: Verifying Checksum fc878cd0a91c: Download complete fee5db0ff82f: Verifying Checksum fee5db0ff82f: Download complete 5af1cb370982: Verifying Checksum 5af1cb370982: Download complete d51af753c3d3: Verifying Checksum bc07fdd7ade1: Verifying 
Checksum docker: filesystem layer verification failed for digest sha256:bc07fdd7ade1f4873668f3d205fbb3ea9fc3689cd5497ba23e35a1e4bce697bf. See 'docker run --help'.
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase d51af753c3d3: Pulling fs layer fc878cd0a91c: Pulling fs layer 6154df8ff988: Pulling fs layer fee5db0ff82f: Pulling fs layer 5af1cb370982: Pulling fs layer 6f3edf07f47c: Pulling fs layer fe50ecc0dda0: Pulling fs layer bc07fdd7ade1: Pulling fs layer 9335d5e85dc9: Pulling fs layer 79a32115d2cd: Pulling fs layer ad77e393caaf: Pulling fs layer da3861d3792f: Pulling fs layer 3b8d4e4f5c3c: Pulling fs layer 450ef1e1251c: Pulling fs layer 20ba60eac76f: Pulling fs layer 79ddc9b35b83: Pulling fs layer b46dc25c7350: Pulling fs layer 3d82425d9581: Pulling fs layer 282c83787e4c: Pulling fs layer 6db34ffebc70: Pulling fs layer 4e220af36774: Pulling fs layer a34b4acb4482: Pulling fs layer fd68ba8cf361: Pulling fs layer 2ac166461221: Pulling fs layer 668caf51a011: Pulling fs layer 2b434031e1fa: Pulling fs layer 9c5e658b1181: Pulling fs layer dfcb7e7f8f59: Pulling fs layer 20ba60eac76f: Waiting 79ddc9b35b83: Waiting b46dc25c7350: Waiting 3d82425d9581: Waiting 282c83787e4c: Waiting 6db34ffebc70: Waiting 4e220af36774: Waiting a34b4acb4482: Waiting fd68ba8cf361: Waiting 2ac166461221: Waiting 668caf51a011: Waiting 2b434031e1fa: Waiting 9c5e658b1181: Waiting dfcb7e7f8f59: Waiting fee5db0ff82f: Waiting 5af1cb370982: Waiting 6f3edf07f47c: Waiting fe50ecc0dda0: Waiting bc07fdd7ade1: Waiting 9335d5e85dc9: Waiting ad77e393caaf: Waiting da3861d3792f: Waiting 3b8d4e4f5c3c: Waiting 450ef1e1251c: Waiting 79a32115d2cd: Waiting 6154df8ff988: Verifying Checksum 6154df8ff988: Download complete fc878cd0a91c: Verifying Checksum fc878cd0a91c: Download complete fee5db0ff82f: Verifying Checksum fee5db0ff82f: Download complete 5af1cb370982: Verifying Checksum 5af1cb370982: Download complete d51af753c3d3: Verifying Checksum bc07fdd7ade1: Verifying 
Checksum docker: filesystem layer verification failed for digest sha256:bc07fdd7ade1f4873668f3d205fbb3ea9fc3689cd5497ba23e35a1e4bce697bf. See 'docker run --help'.
I0929 14:28:53.683266 6495 start.go:392] Will try again in 5 seconds ...
I0929 14:28:58.684581 6495 start.go:314] acquiring machines lock for minikube: {Name:mka214439089e40cd899813bbf642c19b1d410f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:
stderr: Error: No such container: minikube I0929 14:28:58.795634 6495 fix.go:112] machineExists: false. err=machine does not exist I0929 14:28:58.822458 6495 out.go:109] 🤷 docker "minikube" container is missing, will recreate. 🤷 docker "minikube" container is missing, will recreate. I0929 14:28:58.822490 6495 delete.go:124] DEMOLISHING minikube ... I0929 14:28:58.822598 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} W0929 14:28:58.900872 6495 stop.go:75] unable to get state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:58.902201 6495 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:58.902695 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:28:59.000495 6495 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:59.000649 6495 cli_runner.go:110] Run: docker container inspect -f {{.Id}} minikube I0929 14:28:59.150118 6495 kic.go:275] could not find the container minikube to remove it. will try anyways I0929 14:28:59.150435 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} W0929 14:28:59.287721 6495 oci.go:82] error getting container status, will try to delete anyways: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:28:59.288604 6495 cli_runner.go:110] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0" I0929 14:28:59.408129 6495 oci.go:585] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:00.409769 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:00.504159 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:00.504183 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:00.504199 6495 retry.go:30] will retry after 468.857094ms: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:00.973753 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:01.050559 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:01.051730 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:01.051751 6495 retry.go:30] will retry after 693.478123ms: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:01.779802 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:01.869558 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:01.869772 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:01.869801 6495 retry.go:30] will retry after 1.335175957s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:03.207098 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:03.282909 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:03.283005 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:03.283023 6495 retry.go:30] will retry after 954.512469ms: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:04.245969 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:04.339093 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:04.339274 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:04.339295 6495 retry.go:30] will retry after 1.661814363s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:06.003572 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:06.091355 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:06.092090 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:06.092696 6495 retry.go:30] will retry after 2.266618642s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:08.367039 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:08.443372 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:08.443526 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:08.443566 6495 retry.go:30] will retry after 4.561443331s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:13.011490 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:13.087924 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:13.088035 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:13.088055 6495 retry.go:30] will retry after 8.67292976s: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:21.763573 6495 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I0929 14:29:21.889097 6495 oci.go:597] temporary error verifying shutdown: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:21.889208 6495 oci.go:599] temporary error: container minikube status is but expect it to be exited I0929 14:29:21.889236 6495 oci.go:86] couldn't shut down minikube (might be okay): verify shutdown: couldn't verify cointainer is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube
I0929 14:29:21.889309 6495 cli_runner.go:110] Run: docker rm -f -v minikube
W0929 14:29:21.968769 6495 delete.go:139] delete failed (probably ok)
stderr: Error: No such container: minikube I0929 14:29:43.788380 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:43.868931 6495 retry.go:30] will retry after 267.848952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:44.138748 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:44.290257 6495 retry.go:30] will retry after 495.369669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:44.787366 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:44.887414 6495 retry.go:30] will retry after 690.236584ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:45.578535 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0929 14:29:45.658394 6495 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube
W0929 14:29:45.658544 6495 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:45.658564 6495 start.go:130] duration metric: createHost completed in 22.68923508s I0929 14:29:45.658637 6495 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0929 14:29:45.659238 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:45.755825 6495 retry.go:30] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:46.002610 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:46.127517 6495 retry.go:30] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:46.421940 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:46.525414 6495 retry.go:30] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:46.972838 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0929 14:29:47.119769 6495 retry.go:30] will retry after 994.852695ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:48.115810 6495 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube W0929 14:29:48.204084 6495 start.go:258] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube
W0929 14:29:48.204191 6495 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0929 14:29:48.204217 6495 fix.go:56] fixHost completed within 49.519128251s I0929 14:29:48.204230 6495 start.go:81] releasing machines lock for "minikube", held for 49.51932234s W0929 14:29:48.204396 6495 out.go:145] 😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase d51af753c3d3: Pulling fs layer fc878cd0a91c: Pulling fs layer 6154df8ff988: Pulling fs layer fee5db0ff82f: Pulling fs layer 5af1cb370982: Pulling fs layer 6f3edf07f47c: Pulling fs layer fe50ecc0dda0: Pulling fs layer bc07fdd7ade1: Pulling fs layer 9335d5e85dc9: Pulling fs layer 79a32115d2cd: Pulling fs layer ad77e393caaf: Pulling fs layer da3861d3792f: Pulling fs layer 3b8d4e4f5c3c: Pulling fs layer 450ef1e1251c: Pulling fs layer 20ba60eac76f: Pulling fs layer 79ddc9b35b83: Pulling fs layer b46dc25c7350: Pulling fs layer 3d82425d9581: Pulling fs layer 282c83787e4c: Pulling fs layer 6db34ffebc70: Pulling fs layer 4e220af36774: Pulling fs layer a34b4acb4482: Pulling fs layer fd68ba8cf361: Pulling fs layer 2ac166461221: Pulling fs layer 668caf51a011: Pulling fs layer 2b434031e1fa: Pulling fs layer 9c5e658b1181: Pulling fs layer dfcb7e7f8f59: Pulling fs layer 20ba60eac76f: Waiting 79ddc9b35b83: Waiting b46dc25c7350: Waiting 3d82425d9581: Waiting 282c83787e4c: Waiting 6db34ffebc70: Waiting 4e220af36774: Waiting a34b4acb4482: Waiting fd68ba8cf361: Waiting 2ac166461221: Waiting 668caf51a011: Waiting 2b434031e1fa: Waiting 9c5e658b1181: Waiting dfcb7e7f8f59: Waiting bc07fdd7ade1: Waiting 9335d5e85dc9: Waiting fee5db0ff82f: Waiting 5af1cb370982: Waiting ad77e393caaf: Waiting da3861d3792f: Waiting 3b8d4e4f5c3c: Waiting 450ef1e1251c: Waiting 79a32115d2cd: Waiting 6f3edf07f47c: Waiting fe50ecc0dda0: Waiting 6154df8ff988: Verifying Checksum 6154df8ff988: Download complete fc878cd0a91c: Verifying Checksum fc878cd0a91c: Download complete fee5db0ff82f: Verifying Checksum fee5db0ff82f: Download complete 5af1cb370982: Verifying Checksum 5af1cb370982: Download complete d51af753c3d3: Verifying Checksum 6f3edf07f47c: Verifying 
Checksum bc07fdd7ade1: Verifying Checksum 9335d5e85dc9: Verifying Checksum ad77e393caaf: Verifying Checksum docker: filesystem layer verification failed for digest sha256:ad77e393caaf4f63b6323ab36f96d36f446e67e7c0cfc8358c9fa2b6e1c0def2. See 'docker run --help'.
I0929 14:29:48.225333 6495 out.go:109]
W0929 14:29:48.227573 6495 out.go:145] ❌ Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b' locally sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b: Pulling from k8s-minikube/kicbase d51af753c3d3: Pulling fs layer fc878cd0a91c: Pulling fs layer 6154df8ff988: Pulling fs layer fee5db0ff82f: Pulling fs layer 5af1cb370982: Pulling fs layer 6f3edf07f47c: Pulling fs layer fe50ecc0dda0: Pulling fs layer bc07fdd7ade1: Pulling fs layer 9335d5e85dc9: Pulling fs layer 79a32115d2cd: Pulling fs layer ad77e393caaf: Pulling fs layer da3861d3792f: Pulling fs layer 3b8d4e4f5c3c: Pulling fs layer 450ef1e1251c: Pulling fs layer 20ba60eac76f: Pulling fs layer 79ddc9b35b83: Pulling fs layer b46dc25c7350: Pulling fs layer 3d82425d9581: Pulling fs layer 282c83787e4c: Pulling fs layer 6db34ffebc70: Pulling fs layer 4e220af36774: Pulling fs layer a34b4acb4482: Pulling fs layer fd68ba8cf361: Pulling fs layer 2ac166461221: Pulling fs layer 668caf51a011: Pulling fs layer 2b434031e1fa: Pulling fs layer 9c5e658b1181: Pulling fs layer dfcb7e7f8f59: Pulling fs layer 20ba60eac76f: Waiting 79ddc9b35b83: Waiting b46dc25c7350: Waiting 3d82425d9581: Waiting 282c83787e4c: Waiting 6db34ffebc70: Waiting 4e220af36774: Waiting a34b4acb4482: Waiting fd68ba8cf361: Waiting 2ac166461221: Waiting 668caf51a011: Waiting 2b434031e1fa: Waiting 9c5e658b1181: Waiting dfcb7e7f8f59: Waiting bc07fdd7ade1: Waiting 9335d5e85dc9: Waiting fee5db0ff82f: Waiting 5af1cb370982: Waiting ad77e393caaf: Waiting da3861d3792f: Waiting 3b8d4e4f5c3c: Waiting 450ef1e1251c: Waiting 79a32115d2cd: Waiting 6f3edf07f47c: Waiting fe50ecc0dda0: Waiting 6154df8ff988: Verifying Checksum 6154df8ff988: Download complete fc878cd0a91c: Verifying Checksum fc878cd0a91c: Download complete fee5db0ff82f: Verifying Checksum fee5db0ff82f: Download complete 5af1cb370982: Verifying Checksum 5af1cb370982: Download complete d51af753c3d3: Verifying Checksum 6f3edf07f47c: Verifying 
Checksum bc07fdd7ade1: Verifying Checksum 9335d5e85dc9: Verifying Checksum ad77e393caaf: Verifying Checksum docker: filesystem layer verification failed for digest sha256:ad77e393caaf4f63b6323ab36f96d36f446e67e7c0cfc8358c9fa2b6e1c0def2. See 'docker run --help'.
W0929 14:29:48.233977 6495 out.go:145]
W0929 14:29:48.234037 6495 out.go:145] 😿 If the above advice does not help, please let us know: W0929 14:29:48.234070 6495 out.go:145] 👉 https://github.com/kubernetes/minikube/issues/new/choose I0929 14:29:48.253091 6495 out.go:109]``
same problem
same issue
If you are in China, you can use
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
I have the same problem
If you are in China, you can use
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
Doesn't work
Work!!!
(base) ➜ ~ minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' 😄 minikube v1.17.1 on Darwin 11.2.1 ✨ Using the docker driver based on existing profile 👍 Starting control plane node minikube in cluster minikube 🚜 Pulling base image ... 🤷 docker "minikube" container is missing, will recreate. 🔥 Creating docker container (CPUs=2, Memory=3890MB) ... 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 ... ▪ Generating certificates and keys ... ▪ Booting up control plane ... ▪ Configuring RBAC rules ... 🔎 Verifying Kubernetes components... 🌟 Enabled addons: storage-provisioner, default-storageclass 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
@baymax55 Can you expand on what's not working, are you still getting the same error as before or are you getting a new error now?
I'm closing this issue as this issue should be fixed in recent releases of minikube ( v1.17+). If this issue does continue to exist in the most recent release of minikube, please feel free to re-open it by replying /reopen
If someone sees a similar issue to this one, please re-open it as replies to closed issues are unlikely to be noticed.
Thank you for opening the issue!
/reopen Same on Fedora 33, x86_64, minikube-1.18.1
@vonbrand: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@vonbrand what error are you getting exactly?
❯ minikube start 😄 minikube v1.20.0 on Darwin 11.4 ▪ KUBECONFIG=/Users/chrisryu/.kube/config 🆕 Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2 ✨ Using the docker driver based on existing profile 👍 Starting control plane node minikube in cluster minikube 🚜 Pulling base image ...
gcr.io/k8s-minikube/kicbase...: 312.94 MiB / 312.94 MiB 100.00% 8.64 MiB gcr.io/k8s-minikube/kicbase...: 312.94 MiB / 312.94 MiB 100.00% 7.40 MiB index.docker.io/kicbase/sta...: 358.10 MiB / 358.10 MiB 100.00% 20.76 Mi index.docker.io/kicbase/sta...: 358.10 MiB / 358.10 MiB 100.00% 5.08 MiB ❗ minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3, but successfully downloaded kicbase/stable:v0.0.22 as a fallback image E0606 21:58:39.655738 21709 cache.go:189] Error downloading kic artifacts: failed to download kic base image or any fallback image 🤷 docker "minikube" container is missing, will recreate. 🔥 Creating docker container (CPUs=2, Memory=1990MB) ... 🤦 StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers). See 'docker run --help'.
🤷 docker "minikube" container is missing, will recreate. 🔥 Creating docker container (CPUs=2, Memory=1990MB) ... 😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers). See 'docker run --help'.
❌ Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125 stdout:
stderr: Unable to find image 'kicbase/stable:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers). See 'docker run --help'.
╭────────────────────────────────────────────────────────────────────╮ │ │ │ 😿 If the above advice does not help, please let us know: │ │ 👉 https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ Please attach the following file to the GitHub issue: │ │ - /Users/chrisryu/.minikube/logs/lastStart.txt │ │ │ ╰────────────────────────────────────────────────────────────────────╯
I get the same thing with podman+minikube:
@baymax55 Can you expand on what's not working, are you still getting the same error as before or are you getting a new error now?
when I use the latest version for minikube, there is no such problem, thanks
Based on the error message posted here, it looks like there's a strange caching bug for kicbase when restarting an existing cluster. Can @bhundven or @chris-ryu confirm that this is still happening on minikube 1.22.0? For fresh starts, it looks like the fallback mechanism is working as intended.
If you are in China, you can use
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
Not working anymore. Now the image repository returns 404 page not found.
If you are in China, you can use
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
Not working anymore. Now the image repository returns 404 page not found.
No no no, it still works for me right now (2021.11.28).
The Docker repository URL "registry.cn-hangzhou.aliyuncs.com/google_containers" does not respond to plain HTTP requests, but pulling from it with "docker pull" still works.
If you are in China , you can use
minikube start --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
In my case, setting HTTPS_PROXY
in the shell doesn't work,
but when I turn on the set as system proxy
option in clash
, it works for me. :(
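For anyone trying the shell-proxy route, here is a minimal sketch. The 127.0.0.1:7890 endpoint is an assumption (a common clash default); substitute your own proxy address. The key points are that the variables must be exported in the same shell that later runs minikube, and that NO_PROXY should exclude local traffic and the minikube docker-driver subnet:

```shell
# Assumed proxy endpoint (clash commonly listens on 127.0.0.1:7890); adjust to yours.
export HTTP_PROXY=http://127.0.0.1:7890
export HTTPS_PROXY=http://127.0.0.1:7890
# Keep local and in-cluster traffic off the proxy; 192.168.49.0/24 is the
# default subnet used by the minikube docker driver.
export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24
# then, from this same shell:
# minikube start
```

If you instead flip the "system proxy" switch in the proxy client, the GUI is effectively setting these for you process-wide, which is why that worked where the plain shell export did not (the export may have been done in a different shell than the one that launched minikube).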
(base) minikube start --driver=podman 😄 minikube v1.25.1 on Ubuntu 20.04 ✨ Using the podman driver based on user configuration 👍 Starting control plane node minikube in cluster minikube 🚜 Pulling base image ... 💾 Downloading Kubernetes v1.23.1 preload ...
preloaded-images-k8s-v16-v1...: 281.97 MiB / 504.42 MiB 55.90% 86.20 KiB preloaded-images-k8s-v16-v1...: 504.42 MiB / 504.42 MiB 100.00% 585.62 K E0122 11:53:53.241564 1042335 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426 🔥 Creating podman container (CPUs=2, Memory=4000MB) ... 🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.29 -d /var/lib: exit status 125 stdout:
stderr: Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.29... Error: initializing source docker://gcr.io/k8s-minikube/kicbase:v0.0.29: pinging container registry gcr.io: Get "https://gcr.io/v2/": dial tcp 64.233.189.82:443: i/o timeout
🔄 Restarting existing podman container for "minikube" ... 😿 Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125 stdout:
stderr: Error: error inspecting object: no such container minikube
❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: sudo -n podman container inspect -f minikube: exit status 125 stdout:
stderr: Error: error inspecting object: no such container minikube
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│    Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
It works for me!!!!
(base) minikube start --driver=podman --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
😄 minikube v1.25.1 on Ubuntu 20.04
✨ Using the podman driver based on user configuration
✅ Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
registry.cn-hangzhou.aliyun...: 378.98 MiB / 378.98 MiB 100.00% 5.64 MiB
E0122 12:17:32.855630 1050848 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=4000MB) ...
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
I'm not in China and I get the same issue...
% minikube start --driver=podman --container-runtime=cri-o --alsologtostderr
I0124 08:05:53.532101 20382 out.go:297] Setting OutFile to fd 1 ...
I0124 08:05:53.532209 20382 out.go:349] isatty.IsTerminal(1) = true
I0124 08:05:53.532212 20382 out.go:310] Setting ErrFile to fd 2...
I0124 08:05:53.532216 20382 out.go:349] isatty.IsTerminal(2) = true
I0124 08:05:53.532278 20382 root.go:315] Updating PATH: /Users/danield/.minikube/bin
I0124 08:05:53.532480 20382 out.go:304] Setting JSON to false
I0124 08:05:53.563745 20382 start.go:112] hostinfo: {"hostname":"****","uptime":604198,"bootTime":1642403755,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.1","kernelVersion":"21.2.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"****"}
W0124 08:05:53.563881 20382 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0124 08:05:53.585159 20382 out.go:176] 😄 minikube v1.25.1 on Darwin 12.1 (arm64)
😄 minikube v1.25.1 on Darwin 12.1 (arm64)
I0124 08:05:53.585293 20382 notify.go:174] Checking for updates...
I0124 08:05:53.585535 20382 config.go:176] Loaded profile config "minikube": Driver=podman, ContainerRuntime=docker, KubernetesVersion=v1.23.1
I0124 08:05:53.585831 20382 driver.go:344] Setting default libvirt URI to qemu:///system
I0124 08:05:53.758862 20382 podman.go:121] podman version: 3.4.4
I0124 08:05:53.779200 20382 out.go:176] ✨ Using the podman (experimental) driver based on existing profile
✨ Using the podman (experimental) driver based on existing profile
I0124 08:05:53.779218 20382 start.go:280] selected driver: podman
I0124 08:05:53.779221 20382 start.go:795] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.*.*.*/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0124 08:05:53.779316 20382 start.go:806] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0124 08:05:53.779334 20382 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
I0124 08:05:53.779476 20382 cli_runner.go:133] Run: podman system info --format json
I0124 08:05:53.877773 20382 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:247001088 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 21.91s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:53.878118 20382 cni.go:93] Creating CNI manager for ""
I0124 08:05:53.878132 20382 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0124 08:05:53.878136 20382 start_flags.go:300] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.*.*.*/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0124 08:05:53.896822 20382 out.go:176] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0124 08:05:53.896891 20382 cache.go:120] Beginning downloading kic base image for podman with docker
I0124 08:05:53.935207 20382 out.go:176] 🚜 Pulling base image ...
🚜 Pulling base image ...
I0124 08:05:53.935255 20382 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime docker
I0124 08:05:53.935288 20382 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
I0124 08:05:53.935391 20382 preload.go:148] Found local preload: /Users/danield/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4
I0124 08:05:53.935413 20382 cache.go:57] Caching tarball of preloaded images
I0124 08:05:53.935466 20382 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
I0124 08:05:53.935491 20382 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory, skipping pull
I0124 08:05:53.935498 20382 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in cache, skipping pull
I0124 08:05:53.935502 20382 preload.go:174] Found /Users/danield/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0124 08:05:53.935505 20382 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b as a tarball
I0124 08:05:53.935508 20382 cache.go:60] Finished verifying existence of preloaded tar for v1.23.1 on docker
I0124 08:05:53.935583 20382 profile.go:147] Saving config to /Users/danield/.minikube/profiles/minikube/config.json ...
E0124 08:05:53.936168 20382 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
I0124 08:05:53.936174 20382 cache.go:208] Successfully downloaded all kic artifacts
I0124 08:05:53.936189 20382 start.go:313] acquiring machines lock for minikube: {Name:mk04264be43adb0b61089022ae9ebb8e555690a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 08:05:53.936242 20382 start.go:317] acquired machines lock for "minikube" in 26.875µs
I0124 08:05:53.936255 20382 start.go:93] Skipping create...Using existing machine configuration
I0124 08:05:53.936260 20382 fix.go:55] fixHost starting:
I0124 08:05:53.936508 20382 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
W0124 08:05:54.021228 20382 cli_runner.go:180] podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0124 08:05:54.021282 20382 fix.go:108] recreateIfNeeded on minikube: state= err=unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:54.021304 20382 fix.go:113] machineExists: true. err=unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
W0124 08:05:54.021316 20382 fix.go:134] unexpected machine state, will restart: unknown state "minikube": podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:54.041360 20382 out.go:176] 🔄 Restarting existing podman container for "minikube" ...
🔄 Restarting existing podman container for "minikube" ...
I0124 08:05:54.041663 20382 cli_runner.go:133] Run: podman start minikube
W0124 08:05:54.125588 20382 cli_runner.go:180] podman start minikube returned with exit code 125
I0124 08:05:54.125757 20382 cli_runner.go:133] Run: podman inspect minikube
I0124 08:05:54.209495 20382 errors.go:84] Postmortem inspect ("podman inspect minikube"): -- stdout --
[
{
"Name": "minikube",
"Driver": "local",
"Mountpoint": "/var/home/core/.local/share/containers/storage/volumes/minikube/_data",
"CreatedAt": "2022-01-21T17:17:27.952383322Z",
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"name.minikube.sigs.k8s.io": "minikube"
},
"Scope": "local",
"Options": {}
}
]
-- /stdout --
I0124 08:05:54.209692 20382 cli_runner.go:133] Run: podman logs --timestamps minikube
W0124 08:05:54.290757 20382 cli_runner.go:180] podman logs --timestamps minikube returned with exit code 125
W0124 08:05:54.290796 20382 errors.go:89] Failed to get postmortem logs. podman logs --timestamps minikube :podman logs --timestamps minikube: exit status 125
stdout:
stderr:
Error: channel "123" found, 0-3 supported: lost synchronization with multiplexed stream
I0124 08:05:54.290913 20382 cli_runner.go:133] Run: podman system info --format json
I0124 08:05:54.384618 20382 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:246255616 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 22.46s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:54.384715 20382 errors.go:106] postmortem podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:246255616 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 22.46s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:54.384835 20382 network_create.go:254] running [podman network inspect minikube] to gather additional debugging logs...
I0124 08:05:54.384883 20382 cli_runner.go:133] Run: podman network inspect minikube
I0124 08:05:54.467405 20382 network_create.go:259] output of [podman network inspect minikube]: -- stdout --
[
{
"args": {
"podman_labels": {
"created_by.minikube.sigs.k8s.io": "true"
}
},
"cniVersion": "0.4.0",
"name": "minikube",
"plugins": [
{
"bridge": "cni-podman1",
"hairpinMode": true,
"ipMasq": true,
"ipam": {
"ranges": [
[
{
"gateway": "192.*..*.*",
"subnet": "192.*.*.0/24"
}
]
],
"routes": [
{
"dst": "0.0.0.0/0"
}
],
"type": "host-local"
},
"isGateway": true,
"type": "bridge"
},
{
"capabilities": {
"portMappings": true
},
"type": "portmap"
},
{
"backend": "",
"type": "firewall"
},
{
"type": "tuning"
},
{
"capabilities": {
"aliases": true
},
"domainName": "dns.podman",
"type": "dnsname"
},
{
"capabilities": {
"portMappings": true
},
"type": "podman-machine"
}
]
}
]
-- /stdout --
I0124 08:05:54.467622 20382 cli_runner.go:133] Run: podman system info --format json
I0124 08:05:54.564668 20382 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:245407744 MemTotal:2048376832 OCIRuntime:{Name:crun Package:crun-1.4-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4
commit: 3daded072ef008ef0840e8eccb0b52a7efbd165d
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.10-200.fc35.aarch64 Os:linux Rootless:false Uptime:46h 56m 22.66s (Approximately 1.92 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/var/home/core/.config/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/home/core/.local/share/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/user/1000/containers VolumePath:/var/home/core/.local/share/containers/storage/volumes}}
I0124 08:05:54.565294 20382 cli_runner.go:133] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
W0124 08:05:54.647004 20382 cli_runner.go:180] podman container inspect -f {{.NetworkSettings.IPAddress}} minikube returned with exit code 125
I0124 08:05:54.647232 20382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 08:05:54.647290 20382 cli_runner.go:133] Run: podman version --format {{.Version}}
I0124 08:05:54.749850 20382 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0124 08:05:54.830999 20382 cli_runner.go:180] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0124 08:05:54.831108 20382 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:55.109337 20382 cli_runner.go:133] Run: podman version --format {{.Version}}
I0124 08:05:55.274690 20382 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0124 08:05:55.361516 20382 cli_runner.go:180] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0124 08:05:55.361624 20382 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:55.903872 20382 cli_runner.go:133] Run: podman version --format {{.Version}}
I0124 08:05:56.079913 20382 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0124 08:05:56.163869 20382 cli_runner.go:180] podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0124 08:05:56.163991 20382 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
W0124 08:05:56.164019 20382 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
I0124 08:05:56.164032 20382 fix.go:57] fixHost completed within 2.227768375s
I0124 08:05:56.164045 20382 start.go:80] releasing machines lock for "minikube", held for 2.227794s
W0124 08:05:56.164058 20382 start.go:566] error starting host: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
W0124 08:05:56.164186 20382 out.go:241] 🤦 StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
🤦 StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container "minikube"
Running macOS 12.1 on Apple (M1) silicon.
On macOS 12.1 with an Intel CPU it is the same. After it tried a different IP, the script suggested deleting the minikube image. I tried, and the output was the following:
$ minikube start --driver=docker
😄 minikube v1.25.1 on Darwin 12.1
❗ Deleting existing cluster minikube with different driver podman due to --delete-on-failure flag set by the user.
💢 Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "podman" driver, which is incompatible with requested "docker" driver.
💡 Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=podman'
$ minikube delete
🔥 Deleting "minikube" in podman ...
🔥 Removing /Users/peter/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
❯ minikube start --driver=podman
😄 minikube v1.25.1 on Darwin 12.1
✨ Using the podman (experimental) driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
E0125 21:50:07.498966 84689 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=1965MB) ...
2022/01/25 21:50:58 tcpproxy: for incoming conn 127.0.0.1:49216, error dialing "192.168.127.2:37237": connect tcp 192.168.127.2:37237: connection was refused
2022/01/25 21:51:01 tcpproxy: for incoming conn 127.0.0.1:49217, error dialing "192.168.127.2:37237": connect tcp 192.168.127.2:37237: connection was refused
2022/01/25 21:51:04 tcpproxy: for incoming conn 127.0.0.1:49219, error dialing "192.168.127.2:37237": connect tcp 192.168.127.2:37237: connection was refused
...
I switched off all firewall functionality before I tried podman vs. docker vs. footloose etc.
I had issues with Footloose related to systemd v2 in Docker Desktop 4.3; not sure if that is related in any way. Of all the options I am testing, only docker works for the moment.
I tried the following command in WSL2, and it works:
$ minikube start --network-plugin=cni --cni=calico --driver=docker --base-image "gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531"
😄 minikube v1.25.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.23.1 preload ...
🤷 docker "minikube" container is missing, will recreate.
🔥 Creating docker container (CPUs=2, Memory=3100MB) ...
🌐 Found network options:
▪ HTTP_PROXY=www-proxy.lmera.ericsson.se:8080
❗ You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
📘 Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
▪ HTTPS_PROXY=www-proxy.lmera.ericsson.se:8080
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ env HTTP_PROXY=www-proxy.lmera.ericsson.se:8080
▪ env HTTPS_PROXY=www-proxy.lmera.ericsson.se:8080
▪ kubelet.housekeeping-interval=5m
kubectl.sha256: 64 B / 64 B 100.00%
kubeadm.sha256: 64 B / 64 B 100.00%
kubelet.sha256: 64 B / 64 B 100.00%
kubectl: 44.43 MiB / 44.43 MiB 100.00% 7.30 MiB p/s 6.3s
kubeadm: 43.12 MiB / 43.12 MiB 100.00% 2.10 MiB p/s 21s
kubelet: 118.75 MiB / 118.75 MiB 100.00% 4.63 MiB p/s 26s
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗 Configuring Calico (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
NOTE: I got the base image string from https://minikube.sigs.k8s.io/docs/commands/start/
Same here, running in WSL2. It works after I set the manual proxy configuration in Docker Desktop.
Use docker pull kicbase/stable:v0.0.32
and then minikube start --vm-driver=docker --base-image="kicbase/stable:v0.0.32" --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --kubernetes-version=v1.23.8
That fixed the issue for me.
If you are in China, try running: minikube start --image-mirror-country='cn' --image-repository='auto'
If you are not in China, reconfiguring may fix the issue:
minikube delete --all --purge
minikube start
If you are in China , try run: minikube start --image-mirror-country='cn' --image-repository='auto'
I use a Mac; it doesn't work for me.
Remove the ~/.minikube folder.
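The full reset behind this advice, as a hedged sketch. Note that this deletes all local minikube state, including cached images and profiles, so every subsequent start downloads from scratch.

```shell
# Purge minikube state so a stale profile (wrong driver, half-downloaded
# base image) cannot interfere with the next start.
MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME/.minikube}"
minikube delete --all --purge || true   # ignore failure if no cluster exists
rm -rf "$MINIKUBE_HOME"                  # removes ~/.minikube by default
```

After this, re-run minikube start with whatever driver and mirror flags apply to your environment.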
Hello, I'm hitting this kind of issue too. I'm not in China but in Europe, and I have the latest minikube, 1.27.1.
I have tried minikube delete --all --purge
and then minikube start --driver=podman --container-runtime=containerd
My OS is Linux (Manjaro).
It looks like some of the layers failed to download:
Failed, retrying in 1s ... (1/3). Error: copying system image from manifest list: reading blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0: Get \"https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0\": read tcp 192.168.68.115:55026->216.58.210.144:443: read: connection reset by peer"
Any help is appreciated, thank you!
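Before digging further, it can help to confirm that the registries in the error are reachable at all from the host; a reset or timed-out connection here points at a network or proxy problem rather than at minikube itself. A small probe (assumes curl is installed; it reports rather than fails when an endpoint is unreachable):

```shell
# Print one line per endpoint: "ok" if a TLS connection and HTTP response
# come back within 10s, otherwise "unreachable or blocked".
probe() {
  if curl -fsSI --max-time 10 "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "unreachable or blocked: $1"
  fi
}
probe https://gcr.io/v2/
probe https://storage.googleapis.com/
```

gcr.io answers /v2/ with 401 for anonymous clients, so "unreachable or blocked" there is expected; what matters is whether the connection hangs or resets the way the pull above did.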
Here is full log:
😄 minikube v1.27.1 on Arch 22.0.0
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.25.2 preload ...
> preloaded-images-k8s-v18-v1...: 406.52 MiB / 406.52 MiB 100.00% 169.59
E1103 09:40:26.985377 781710 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=7900MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.35 -d /var/lib: exit status 125
stdout:
stderr:
Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.35...
Getting image source signatures
Copying blob sha256:b987c1044662ddf8ece45adc09ae43297caa03fbb0cd93807ec33446b5e4d699
Copying blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0
Copying blob sha256:8a5db1d791f6f64e5d26cc93b24f9334bee02a07d0c580e05235a4c095a6b228
Copying blob sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f
Copying blob sha256:5929ee7263605a04298a055a110794e1c1c5d9ae6a1f3af3f35d1f5880757eeb
Copying blob sha256:145be87ba1e7d69f6b2a87da88bb0261eae002db08c503f8f5ebe3927e89dd48
Copying blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940
Copying blob sha256:5bc974652c0f55c63f70c9e2e0e0d8f93026643f417590971e46c8ee01843f1b
Copying blob sha256:a8112bdd23a5e0a12c58ecbe50036899a4a05aa713e3543a377e30daf901ab12
Copying blob sha256:b43426408aeaa9e86b262f4d396a66f705a364608821e292bd511a61208b7134
Copying blob sha256:7ca3354f4ad095905f4c513c932c9ff49386caaafc4dfdc0142a7638aae7c8ff
Copying blob sha256:658a871584c730486c914141d2a4dceec9c5f589e2e4d130f2596bdfd4131419
Copying blob sha256:df38ef75fe038e9343adfa187ad7645d9c1e7965a338cfa71a143ee2ea091fbb
Copying blob sha256:8bf897b8cca689bab4155fbbd5f5056d52d00d1fb0db520919e4ea86e16ca38f
Copying blob sha256:5b451afb517d4ae60519ac43958a7ef4826a0bef035852e4898a083cc9c66d7d
Copying blob sha256:23b280ea6107386ed0bb1818c6496c4d0099067dc815e5d4d085ded90a0c2396
Copying blob sha256:73bcc9432e63bc8cc4da01288063f611cb8fcae657397cf5ec9b6e501892c6e2
Copying blob sha256:a63abca9d6b2ec07605bb34d69539e49b37f37d4e0301da2c192bd0addcd2e42
Copying blob sha256:e3fa3541a9c347099e7816f1dc89b67af773d14c83f62bafc47ef5b4309f7596
Copying blob sha256:221a234a2a2ce1fb364f7160b552f7c95c8eb6ca0746be4ba018b70a98839009
Copying blob sha256:eba2f175934d9241479b61f55aed01f369e66e0e6fe23a7840d426afbbd8f237
Copying blob sha256:2576370f94e3ca533df6df14138f87121e08eb7a1e3c7750c0dd4425d121ed3b
Copying blob sha256:66020bad4ace073dce0bf0459c4493e3506a401bbc6a4fce5fc1099ff733ba20
Copying blob sha256:1325375bc883a92c0ba95f661d2fea83471e41e864a496f14594bc9a90fee8fa
Copying blob sha256:ea3a5cb7dc8c93b46d89f267f0b6f1b2a903594018efee231305e14757296333
Copying blob sha256:81f0a1f7717461b3385999b0ac320e1eb2a2e87d94f104768d8598fa52912e9e
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:56b8a253d1c8c859d18bde29451688492524aeb442b9d9879c36de6348c0330f
Copying blob sha256:34dd8e262490eb764c01e4a1a5acfd49c383a868f77e44896b0ad587f2032648
Copying blob sha256:0910ade5e45a8649ffffa04a75f49793f4820d432ade2446c09331caa9450572
Copying blob sha256:c43bb84a7664e3247ed7e6adf837a54932222efe3a46671bb42110ccf754a4d4
Copying blob sha256:cc193acfd3b47ebf9d1f565ffc4e2f6b2d348209430e9fd99d7d08a7dad32cae
Copying blob sha256:9f3348a3008ad23e4d17e0febbf02b4ebfc731a25d5d35f1f594bf0958cf1c2a
Copying blob sha256:c97444bdadc69fd8211785d4e0083a0cfb00521ce6071019c3f85eaa3f9abab6
Copying blob sha256:0ff67ec6b52b7acdad02bafa776637e6af678aeb57c52eb545654d229e3abd01
Copying blob sha256:165fe55aaa5a19372b3f21bc66eae3dd599e875f910b5ab5feb4664b19251fc5
Copying blob sha256:85ef2ba128977a055820dc12310969b1b5f1f36dc4764b13f2d7337bdc58ed28
Copying blob sha256:7a9861c67178034da847abedb707b7d2c0abb8fa093645a19649d49a923534fa
Copying blob sha256:9e6492915687a2a8228a3fe45aeb09a050f89c1f1b9fac690f03e067831e2ca4
Copying blob sha256:c40ddbb9e28a6172dd1a6c49893e74edd154de912924cd44c08271549bc0cb8b
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:758013b640f0608cf7837128f34e3a5a8a181a57dce9ee7639a128c94ee33076
time="2022-11-03T09:42:10+02:00" level=warning msg="Failed, retrying in 1s ... (1/3). Error: copying system image from manifest list: reading blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0: Get \"https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0\": read tcp 192.168.68.115:55026->216.58.210.144:443: read: connection reset by peer"
Getting image source signatures
Copying blob sha256:b987c1044662ddf8ece45adc09ae43297caa03fbb0cd93807ec33446b5e4d699
Copying blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0
Copying blob sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f
Copying blob sha256:8a5db1d791f6f64e5d26cc93b24f9334bee02a07d0c580e05235a4c095a6b228
Copying blob sha256:5929ee7263605a04298a055a110794e1c1c5d9ae6a1f3af3f35d1f5880757eeb
Copying blob sha256:145be87ba1e7d69f6b2a87da88bb0261eae002db08c503f8f5ebe3927e89dd48
Copying blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940
Copying blob sha256:5bc974652c0f55c63f70c9e2e0e0d8f93026643f417590971e46c8ee01843f1b
Copying blob sha256:a8112bdd23a5e0a12c58ecbe50036899a4a05aa713e3543a377e30daf901ab12
Copying blob sha256:b43426408aeaa9e86b262f4d396a66f705a364608821e292bd511a61208b7134
Copying blob sha256:7ca3354f4ad095905f4c513c932c9ff49386caaafc4dfdc0142a7638aae7c8ff
Copying blob sha256:658a871584c730486c914141d2a4dceec9c5f589e2e4d130f2596bdfd4131419
Copying blob sha256:df38ef75fe038e9343adfa187ad7645d9c1e7965a338cfa71a143ee2ea091fbb
Copying blob sha256:8bf897b8cca689bab4155fbbd5f5056d52d00d1fb0db520919e4ea86e16ca38f
Copying blob sha256:5b451afb517d4ae60519ac43958a7ef4826a0bef035852e4898a083cc9c66d7d
Copying blob sha256:23b280ea6107386ed0bb1818c6496c4d0099067dc815e5d4d085ded90a0c2396
Copying blob sha256:73bcc9432e63bc8cc4da01288063f611cb8fcae657397cf5ec9b6e501892c6e2
Copying blob sha256:a63abca9d6b2ec07605bb34d69539e49b37f37d4e0301da2c192bd0addcd2e42
Copying blob sha256:e3fa3541a9c347099e7816f1dc89b67af773d14c83f62bafc47ef5b4309f7596
Copying blob sha256:221a234a2a2ce1fb364f7160b552f7c95c8eb6ca0746be4ba018b70a98839009
Copying blob sha256:eba2f175934d9241479b61f55aed01f369e66e0e6fe23a7840d426afbbd8f237
Copying blob sha256:2576370f94e3ca533df6df14138f87121e08eb7a1e3c7750c0dd4425d121ed3b
Copying blob sha256:66020bad4ace073dce0bf0459c4493e3506a401bbc6a4fce5fc1099ff733ba20
Copying blob sha256:1325375bc883a92c0ba95f661d2fea83471e41e864a496f14594bc9a90fee8fa
Copying blob sha256:ea3a5cb7dc8c93b46d89f267f0b6f1b2a903594018efee231305e14757296333
Copying blob sha256:81f0a1f7717461b3385999b0ac320e1eb2a2e87d94f104768d8598fa52912e9e
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:56b8a253d1c8c859d18bde29451688492524aeb442b9d9879c36de6348c0330f
Copying blob sha256:34dd8e262490eb764c01e4a1a5acfd49c383a868f77e44896b0ad587f2032648
Copying blob sha256:0910ade5e45a8649ffffa04a75f49793f4820d432ade2446c09331caa9450572
Copying blob sha256:c43bb84a7664e3247ed7e6adf837a54932222efe3a46671bb42110ccf754a4d4
Copying blob sha256:cc193acfd3b47ebf9d1f565ffc4e2f6b2d348209430e9fd99d7d08a7dad32cae
Copying blob sha256:9f3348a3008ad23e4d17e0febbf02b4ebfc731a25d5d35f1f594bf0958cf1c2a
Copying blob sha256:c97444bdadc69fd8211785d4e0083a0cfb00521ce6071019c3f85eaa3f9abab6
Copying blob sha256:0ff67ec6b52b7acdad02bafa776637e6af678aeb57c52eb545654d229e3abd01
Copying blob sha256:165fe55aaa5a19372b3f21bc66eae3dd599e875f910b5ab5feb4664b19251fc5
Copying blob sha256:85ef2ba128977a055820dc12310969b1b5f1f36dc4764b13f2d7337bdc58ed28
Copying blob sha256:7a9861c67178034da847abedb707b7d2c0abb8fa093645a19649d49a923534fa
Copying blob sha256:9e6492915687a2a8228a3fe45aeb09a050f89c1f1b9fac690f03e067831e2ca4
Copying blob sha256:c40ddbb9e28a6172dd1a6c49893e74edd154de912924cd44c08271549bc0cb8b
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:758013b640f0608cf7837128f34e3a5a8a181a57dce9ee7639a128c94ee33076
time="2022-11-03T09:43:34+02:00" level=warning msg="Failed, retrying in 1s ... (2/3). Error: copying system image from manifest list: reading blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0: Get \"https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0\": read tcp 192.168.68.115:47602->216.58.209.176:443: read: connection reset by peer"
time="2022-11-03T09:43:36+02:00" level=warning msg="Failed, retrying in 1s ... (3/3). Error: initializing source docker://gcr.io/k8s-minikube/kicbase:v0.0.35: Get \"https://gcr.io/v2/token?scope=repository%3Ak8s-minikube%2Fkicbase%3Apull&service=gcr.io\": read tcp 192.168.68.115:52794->64.233.165.82:443: read: connection reset by peer"
Getting image source signatures
Copying blob sha256:b987c1044662ddf8ece45adc09ae43297caa03fbb0cd93807ec33446b5e4d699
Copying blob sha256:56d2f965238ce8ceeb51602b791693e4b0fe4de0caadd22b6d8829ef8d4325a0
Copying blob sha256:5929ee7263605a04298a055a110794e1c1c5d9ae6a1f3af3f35d1f5880757eeb
Copying blob sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f
Copying blob sha256:145be87ba1e7d69f6b2a87da88bb0261eae002db08c503f8f5ebe3927e89dd48
Copying blob sha256:8a5db1d791f6f64e5d26cc93b24f9334bee02a07d0c580e05235a4c095a6b228
Copying blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940
Copying blob sha256:5bc974652c0f55c63f70c9e2e0e0d8f93026643f417590971e46c8ee01843f1b
Copying blob sha256:a8112bdd23a5e0a12c58ecbe50036899a4a05aa713e3543a377e30daf901ab12
Copying blob sha256:b43426408aeaa9e86b262f4d396a66f705a364608821e292bd511a61208b7134
Copying blob sha256:7ca3354f4ad095905f4c513c932c9ff49386caaafc4dfdc0142a7638aae7c8ff
Copying blob sha256:658a871584c730486c914141d2a4dceec9c5f589e2e4d130f2596bdfd4131419
Copying blob sha256:df38ef75fe038e9343adfa187ad7645d9c1e7965a338cfa71a143ee2ea091fbb
Copying blob sha256:8bf897b8cca689bab4155fbbd5f5056d52d00d1fb0db520919e4ea86e16ca38f
Copying blob sha256:5b451afb517d4ae60519ac43958a7ef4826a0bef035852e4898a083cc9c66d7d
Copying blob sha256:23b280ea6107386ed0bb1818c6496c4d0099067dc815e5d4d085ded90a0c2396
Copying blob sha256:73bcc9432e63bc8cc4da01288063f611cb8fcae657397cf5ec9b6e501892c6e2
Copying blob sha256:a63abca9d6b2ec07605bb34d69539e49b37f37d4e0301da2c192bd0addcd2e42
Copying blob sha256:e3fa3541a9c347099e7816f1dc89b67af773d14c83f62bafc47ef5b4309f7596
Copying blob sha256:221a234a2a2ce1fb364f7160b552f7c95c8eb6ca0746be4ba018b70a98839009
Copying blob sha256:eba2f175934d9241479b61f55aed01f369e66e0e6fe23a7840d426afbbd8f237
Copying blob sha256:2576370f94e3ca533df6df14138f87121e08eb7a1e3c7750c0dd4425d121ed3b
Copying blob sha256:66020bad4ace073dce0bf0459c4493e3506a401bbc6a4fce5fc1099ff733ba20
Copying blob sha256:1325375bc883a92c0ba95f661d2fea83471e41e864a496f14594bc9a90fee8fa
Copying blob sha256:ea3a5cb7dc8c93b46d89f267f0b6f1b2a903594018efee231305e14757296333
Copying blob sha256:81f0a1f7717461b3385999b0ac320e1eb2a2e87d94f104768d8598fa52912e9e
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:56b8a253d1c8c859d18bde29451688492524aeb442b9d9879c36de6348c0330f
Copying blob sha256:34dd8e262490eb764c01e4a1a5acfd49c383a868f77e44896b0ad587f2032648
Copying blob sha256:0910ade5e45a8649ffffa04a75f49793f4820d432ade2446c09331caa9450572
Copying blob sha256:c43bb84a7664e3247ed7e6adf837a54932222efe3a46671bb42110ccf754a4d4
Copying blob sha256:cc193acfd3b47ebf9d1f565ffc4e2f6b2d348209430e9fd99d7d08a7dad32cae
Copying blob sha256:9f3348a3008ad23e4d17e0febbf02b4ebfc731a25d5d35f1f594bf0958cf1c2a
Copying blob sha256:c97444bdadc69fd8211785d4e0083a0cfb00521ce6071019c3f85eaa3f9abab6
Copying blob sha256:0ff67ec6b52b7acdad02bafa776637e6af678aeb57c52eb545654d229e3abd01
Copying blob sha256:165fe55aaa5a19372b3f21bc66eae3dd599e875f910b5ab5feb4664b19251fc5
Copying blob sha256:85ef2ba128977a055820dc12310969b1b5f1f36dc4764b13f2d7337bdc58ed28
Copying blob sha256:7a9861c67178034da847abedb707b7d2c0abb8fa093645a19649d49a923534fa
Copying blob sha256:9e6492915687a2a8228a3fe45aeb09a050f89c1f1b9fac690f03e067831e2ca4
Copying blob sha256:c40ddbb9e28a6172dd1a6c49893e74edd154de912924cd44c08271549bc0cb8b
Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying blob sha256:758013b640f0608cf7837128f34e3a5a8a181a57dce9ee7639a128c94ee33076
Error: copying system image from manifest list: reading blob sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940: Get "https://storage.googleapis.com/artifacts.k8s-minikube.appspot.com/containers/images/sha256:bf918f037955ed940fc47e8455a18c74fa0ff7bc1e5b08821b8ed878e3bb1940": read tcp 192.168.68.115:41178->216.58.209.208:443: read: connection reset by peer
🔄 Restarting existing podman container for "minikube" ...
😿 Failed to start podman container. Running "minikube delete" may fix it: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:
stderr:
Error: no container with name or ID "minikube" found: no such container
❌ Exiting due to GUEST_PROVISION: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:
stderr:
Error: no container with name or ID "minikube" found: no such container
Just ran into this same issue. I'm in Hawaii for the winter and was connected via a Spectrum IPv6 address.
Switched to my mobile hotspot (an AT&T IPv6 address) and everything just worked.
Some other sites also seem to think I am connecting from outside the US on this Spectrum connection in Hawaii, so perhaps there is some bad data mapping the IPv6 address to a location?
I am in China. I had the same problem until I added --registry-mirror; now it works well. If you are in China, Aliyun's container registry mirror service (阿里云镜像服务) may help you.
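The suggestion above can be sketched as a single command. The mirror endpoint below is a placeholder (Aliyun issues each account its own accelerator URL), so substitute your own:

```shell
# Hypothetical mirror endpoint -- Aliyun assigns a per-account accelerator
# URL; replace this with the one shown in your Aliyun console.
MIRROR="https://<your-id>.mirror.aliyuncs.com"

# --registry-mirror tells minikube to configure the Docker daemon inside
# the node to pull through the mirror instead of hitting gcr.io directly.
minikube start --driver=docker --registry-mirror="$MIRROR"
```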
Same issue.
Ubuntu 22.04, minikube 1.30.1, Docker 23.0.3.
I am in India.
Running
docker pull kicbase/stable:v0.0.32
and then
minikube start --vm-driver=docker --base-image="kicbase/stable:v0.0.32" --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --kubernetes-version=v1.23.8
fixed this issue.
It looks like the first command is unnecessary, because both repository names resolve to the same image ID:
➜ ~ docker images
REPOSITORY                    TAG       IMAGE ID       CREATED       SIZE
kicbase/stable                v0.0.39   67a4b1138d2d   7 weeks ago   1.05GB
gcr.io/k8s-minikube/kicbase   v0.0.39   67a4b1138d2d   7 weeks ago   1.05GB
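Since both names share one image ID, a related workaround (a sketch, assuming Docker Hub is reachable from your network) is to pull the Docker Hub copy and re-tag it under the gcr.io name, so minikube finds the base image already present in the local daemon:

```shell
# Pull the Docker Hub mirror of the base image.
docker pull kicbase/stable:v0.0.39

# Alias it under the gcr.io name minikube looks for; `docker images` will
# then list both names with the same image ID, as in the output above.
docker tag kicbase/stable:v0.0.39 gcr.io/k8s-minikube/kicbase:v0.0.39
```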
Thanks to this answer; I eventually solved it, though a little differently. Note that I had failed twice before, possibly because my proxy connection is not stable (I am in China). I just set the proxy in ~/.zshrc.
First I ran docker pull kicbase/stable:v0.0.39, then I directly ran minikube start --driver=docker.
minikube v1.30.1
Docker 23.0.4
My console log:
➜ ~ minikube start --driver=docker [73/1549]
😄 minikube v1.30.1 on Arch 22.1.1
✨ Using the docker driver based on user configuration
📌 Using Docker driver with root privileges
❗ Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.
❗ Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.26.3 preload ...
> preloaded-images-k8s-v18-v1...: 397.02 MiB / 397.02 MiB 100.00% 3.50 Mi
> gcr.io/k8s-minikube/kicbase...: 373.53 MiB / 373.53 MiB 100.00% 2.44 Mi
🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
❗ Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.
❗ Local proxy ignored: not passing HTTP_PROXY=socks5://localhost:7891 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=socks5://localhost:7891 to docker env.
🌐 Found network options:
▪ HTTP_PROXY=socks5://localhost:7891
▪ HTTPS_PROXY=socks5://localhost:7891
▪ NO_PROXY=localhost,127.0.0.1,192.168.1.1,::1,*.local,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
▪ http_proxy=socks5://localhost:7891
▪ https_proxy=socks5://localhost:7891
▪ no_proxy=localhost,127.0.0.1,192.168.1.1,::1,*.local,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
❗ This container is having trouble accessing https://registry.k8s.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
▪ env NO_PROXY=localhost,127.0.0.1,192.168.1.1,::1,*.local,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
And this:
➜ ~ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 37m v1.26.3
I still have a question. I was reading the official minikube documentation and came across the section about proxies. It says that "If a HTTP proxy is required to access the internet, you may need to pass the proxy connection information to both minikube and Docker using environment variables", but I'm not sure how to pass the proxy connection information to Docker. Is it like this: minikube start --driver=docker --docker-env HTTPS_PROXY=socks5://localhost:7891 --docker-env HTTP_PROXY=socks5://localhost:7891 ?
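What the minikube proxy docs describe can be sketched like this, assuming the same socks5://localhost:7891 proxy from the log above (whether the Docker engine honors a SOCKS proxy for pulls depends on its version):

```shell
# Export the proxy so minikube itself can reach the internet; minikube
# reads these from the environment (and warns about localhost proxies,
# as the "Local proxy ignored" lines above show).
export HTTP_PROXY=socks5://localhost:7891
export HTTPS_PROXY=socks5://localhost:7891
# Keep cluster-internal addresses off the proxy.
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24

# --docker-env additionally injects the variables into the Docker engine
# running inside the minikube node.
minikube start --driver=docker \
  --docker-env HTTP_PROXY="$HTTP_PROXY" \
  --docker-env HTTPS_PROXY="$HTTPS_PROXY" \
  --docker-env NO_PROXY="$NO_PROXY"
```

Note that a proxy bound to localhost is not reachable from inside the node container; you may need to point it at an address the node can reach (e.g. the host side of the Docker bridge) instead.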
I am in China. I had the same problem until I added --registry-mirror; now it works well. If you are in China, Aliyun's container registry mirror service (阿里云镜像服务) may help you.

- As of 06.11, the default Aliyun mirror for users in China, "registry.aliyuncs.com/google_containers", no longer works.

Have you found a mirror that works in China now?
I had this problem. The k8s-related images were downloaded through other methods.
(base) ➜ data minikube start --kubernetes-version=v1.27.2
😄 minikube v1.31.2 on Darwin 13.4.1 (arm64)
✨ Automatically selected the docker driver. Other choices: parallels, vmware, ssh
📌 Using Docker Desktop driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
E1020 10:52:23.574335 78567 cache.go:190] Error downloading kic artifacts: failed to download kic base image or any fallback image
🔥 Creating docker container (CPUs=2, Memory=7803MB) ...
❗ The image 'gcr.io/k8s-minikube/storage-provisioner:v5' was not found; unable to add it to cache.
🐳 Preparing Kubernetes v1.27.2 on Docker 24.0.4 ...
❌ Unable to load cached images: loading cached images: stat /Users/tovi/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Following up on this thread; later readers take note!!! The Aliyun repository has not synced Kubernetes versions after v1.23, so specifying the Aliyun repository for installation returns a 404 (stop specifying Aliyun as the image repository):
$ minikube start --force \
> --kubernetes-version=v1.25.14 \
> --image-mirror-country=cn \
> --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
* minikube v1.31.2 on Centos 7.9.2009
- KUBECONFIG=/etc/kubernetes/admin.conf
! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
* Automatically selected the docker driver. Other choices: none, ssh
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
* https://minikube.sigs.k8s.io/docs/reference/drivers/none/
X The requested memory allocation of 1963MiB does not leave room for system overhead (total system memory: 1963MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1963mb'
* Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
* Using Docker driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
! minikube was unable to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.40, but successfully downloaded docker.io/kicbase/stable:v0.0.40 as a fallback image
* Creating docker container (CPUs=2, Memory=1963MB) ...
* Preparing Kubernetes v1.25.14 on Docker 24.0.4 ...
X Exiting due to K8S_INSTALL_FAILED: Failed to update cluster: updating control plane: downloading binaries: downloading kubectl: download failed: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.25.14/bin/linux/amd64/kubectl.sha256 Dst:/root/.minikube/cache/linux/amd64/v1.25.14/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8 0x3f9c8a8] Decompressors:map[bz2:0xc000431238 gz:0xc000431290 tar:0xc000431240 tar.bz2:0xc000431250 tar.gz:0xc000431260 tar.xz:0xc000431270 tar.zst:0xc000431280 tbz2:0xc000431250 tgz:0xc000431260 txz:0xc000431270 tzst:0xc000431280 xz:0xc000431298 zip:0xc0004312a0 zst:0xc0004312b0] Getters:map[file:0xc00107fea0 http:0xc001098500 https:0xc001098550] Dir:false ProgressListener:0x3f579a0 Insecure:false DisableSymlinks:false Options:[0x12d0880]}: invalid checksum: Error downloading checksum file: bad response code: 404
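One way around that 404 (a sketch, not an official workaround) is to fetch the binaries from the upstream release endpoint yourself and drop them into minikube's download cache, which minikube checks before contacting the Aliyun OSS mirror. The directory layout matches the Dst path in the error above (which used /root/.minikube because minikube ran as root):

```shell
# Hypothetical pre-seeding of minikube's binary cache so it never hits the
# broken kubernetes.oss-cn-hangzhou.aliyuncs.com URL.
K8S_VERSION=v1.25.14
CACHE_DIR="$HOME/.minikube/cache/linux/amd64/$K8S_VERSION"
mkdir -p "$CACHE_DIR"

for bin in kubectl kubeadm kubelet; do
  # dl.k8s.io is the official Kubernetes binary release endpoint.
  curl -fLo "$CACHE_DIR/$bin" \
    "https://dl.k8s.io/release/$K8S_VERSION/bin/linux/amd64/$bin"
  chmod +x "$CACHE_DIR/$bin"
done
```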
Same here: 🤔
! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image
Logs:
I0106 02:04:24.278493 82771 out.go:296] Setting OutFile to fd 1 ...
I0106 02:04:24.278730 82771 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0106 02:04:24.278738 82771 out.go:309] Setting ErrFile to fd 2...
I0106 02:04:24.278744 82771 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
I0106 02:04:24.278964 82771 root.go:338] Updating PATH: /home/admintest/.minikube/bin
I0106 02:04:24.279371 82771 out.go:303] Setting JSON to false
I0106 02:04:24.282603 82771 start.go:128] hostinfo: {"hostname":"centos7.vm","uptime":17528,"bootTime":1704469337,"procs":145,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.9.2009","kernelVersion":"3.10.0-1160.102.1.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"506a82f1-0c76-455c-ab6b-7059384d7baa"}
I0106 02:04:24.282692 82771 start.go:138] virtualization:
I0106 02:04:24.284001 82771 out.go:177] * minikube v1.32.0 on Centos 7.9.2009
stderr: Error response from daemon: Container d7df58917287d2fe3ec6af8dc1feb4e9b79c2ae64a04b075838d6faafeca86ad is not running
I0106 02:04:24.735041 82771 kic_runner.go:93] Run: sudo service kubelet stop
I0106 02:04:24.735078 82771 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
W0106 02:04:24.763676 82771 kic.go:455] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr: Error response from daemon: Container d7df58917287d2fe3ec6af8dc1feb4e9b79c2ae64a04b075838d6faafeca86ad is not running
I0106 02:04:24.763759 82771 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
I0106 02:04:24.763770 82771 kic_runner.go:114] Args: [docker exec --privileged minikube docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
I0106 02:04:24.784993 82771 kic.go:466] unable list containers : docker: docker ps -a --filter=name=k8s_.*(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
stdout:
stderr: Error response from daemon: Container d7df58917287d2fe3ec6af8dc1feb4e9b79c2ae64a04b075838d6faafeca86ad is not running
I0106 02:04:24.785016 82771 kic.go:476] successfully stopped kubernetes!
I0106 02:04:24.785170 82771 kic_runner.go:93] Run: pgrep kube-apiserver
I0106 02:04:24.785181 82771 kic_runner.go:114] Args: [docker exec --privileged minikube pgrep kube-apiserver]
I0106 02:04:24.850422 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:27.875464 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:30.928879 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:33.982270 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:37.019185 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:40.086169 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0106 02:04:43.161857 82771 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
~ minikube start
Pulling base image ...
E0814 15:09:05.833724 5268 cache.go:175] Error downloading kic artifacts: failed to download kic base image or any fallback image
🔥 Creating docker container (CPUs=2, Memory=6144MB) ...
🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib: exit status 125
~ docker pull gcr.io/k8s-minikube/kicbase:v0.0.11
Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I can pull other images; only this one fails.
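A quick way to tell a registry-side outage apart from a local Docker problem (a diagnostic sketch, not a fix) is to probe the registry endpoint directly:

```shell
# Print the HTTP status gcr.io returns within 10 seconds; a 401 means the
# registry answered (unauthenticated access is expected to be rejected),
# while a timeout or connection reset points at the network path instead.
curl -s -m 10 -o /dev/null -w '%{http_code}\n' https://gcr.io/v2/
```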