kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Fall back to using tag if SHA256 is not present #13768

Closed · flc666star closed this issue 2 years ago

flc666star commented 2 years ago

What Happened?

I want to start minikube offline on a machine that is only reachable from within the enterprise network. I took the following steps:

Step 1: I started minikube on machine A (online) with a command like this: minikube start --driver=docker --registry-mirror=http://vppbudzy.mirror.aliyun.com --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --alsologtostderr -v=1 --native-ssh=false

Step 2: I copied the .minikube directory from machine A to machine B.

Step 3: I started minikube on machine B (offline) with the same command: minikube start --driver=docker --registry-mirror=http://vppbudzy.mirror.aliyun.com --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --alsologtostderr -v=1 --native-ssh=false

However, it failed; the full output can be found in the attachment. Some log messages contradict each other, for example:

I0311 20:03:14.114373 14562 cache.go:151] successfully saved registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 as a tarball
I0311 20:03:14.114379 14562 cache.go:162] Loading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 from local cache
I0311 20:03:15.531826 14562 cache.go:165] successfully loaded registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 from cached tarball
....................... ......................
Unable to find image 'registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2' locally
docker: Error response from daemon: Get "https://registry.cn-hangzhou.aliyuncs.com/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).

Other information: Both machine A (online) and machine B run openSUSE Leap 15.1. Docker runs normally. Minikube version: v1.25.2

Attach the log file

I0311 20:03:13.994251 14562 out.go:297] Setting OutFile to fd 1 ... I0311 20:03:13.994320 14562 out.go:349] isatty.IsTerminal(1) = true I0311 20:03:13.994328 14562 out.go:310] Setting ErrFile to fd 2... I0311 20:03:13.994336 14562 out.go:349] isatty.IsTerminal(2) = true I0311 20:03:13.994429 14562 root.go:315] Updating PATH: /home/admin/.minikube/bin W0311 20:03:13.994542 14562 root.go:293] Error reading config file at /home/admin/.minikube/config/config.json: open /home/admin/.minikube/config/config.json: no such file or directory I0311 20:03:13.994649 14562 out.go:304] Setting JSON to false I0311 20:03:13.997571 14562 start.go:112] hostinfo: {"hostname":"iZuf6533jbifuhtgmbi2xkZ","uptime":27586,"bootTime":1646972608,"procs":105,"os":"linux","platform":"opensuse-leap","platformFamily":"suse","platformVersion":"15.1","kernelVersion":"4.12.14-lp151.28.52-default","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"20200623-1902-0867-2074-002063468121"} I0311 20:03:13.997618 14562 start.go:122] virtualization: guest I0311 20:03:13.999080 14562 out.go:176] 😄 minikube v1.25.2 on Opensuse-Leap 15.1 (amd64) 😄 minikube v1.25.2 on Opensuse-Leap 15.1 (amd64) W0311 20:03:13.999207 14562 preload.go:295] Failed to list preload files: open /home/admin/.minikube/cache/preloaded-tarball: no such file or directory I0311 20:03:13.999298 14562 notify.go:193] Checking for updates... I0311 20:03:13.999412 14562 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3 I0311 20:03:13.999725 14562 driver.go:344] Setting default libvirt URI to qemu:///system I0311 20:03:14.029496 14562 docker.go:132] docker version: linux-19.03.11 I0311 20:03:14.029550 14562 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0311 20:03:14.057769 14562 info.go:263] docker info: {ID:SKVA:UJ5F:SLAP:YVPK:RITE:S3MA:QYE4:YXBH:LXCV:KYPX:BN5Y:6AST Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:40 SystemTime:2022-03-11 20:03:14.051074062 +0800 CST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.12.14-lp151.28.52-default OperatingSystem:openSUSE Leap 15.1 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8075190272 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:iZuf6533jbifuhtgmbi2xkZ Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:docker-runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init 
ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0311 20:03:14.057826 14562 docker.go:237] overlay module found I0311 20:03:14.059344 14562 out.go:176] ✨ Using the docker driver based on existing profile ✨ Using the docker driver based on existing profile I0311 20:03:14.059360 14562 start.go:281] selected driver: docker I0311 20:03:14.059368 14562 start.go:798] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/home/admin/zark/:/home/admin/zark/] InsecureRegistry:[] RegistryMirror:[http://vppbudzy.mirror.aliyun.com] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/home/admin/zark/:/home/admin/zark/ Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0311 20:03:14.059447 14562 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0311 
20:03:14.059635 14562 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0311 20:03:14.087576 14562 info.go:263] docker info: {ID:SKVA:UJ5F:SLAP:YVPK:RITE:S3MA:QYE4:YXBH:LXCV:KYPX:BN5Y:6AST Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:40 SystemTime:2022-03-11 20:03:14.08097984 +0800 CST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.12.14-lp151.28.52-default OperatingSystem:openSUSE Leap 15.1 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8075190272 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:iZuf6533jbifuhtgmbi2xkZ Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:docker-runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0311 20:03:14.087939 14562 cni.go:93] Creating CNI manager for "" I0311 20:03:14.087950 14562 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0311 20:03:14.087960 14562 start_flags.go:302] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[/home/admin/zark/:/home/admin/zark/] InsecureRegistry:[] RegistryMirror:[http://vppbudzy.mirror.aliyun.com] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: 
ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:true MountString:/home/admin/zark/:/home/admin/zark/ Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0311 20:03:14.089471 14562 out.go:176] 👍 Starting control plane node minikube in cluster minikube 👍 Starting control plane node minikube in cluster minikube I0311 20:03:14.089488 14562 cache.go:120] Beginning downloading kic base image for docker with docker I0311 20:03:14.090658 14562 out.go:176] 🚜 Pulling base image ... 🚜 Pulling base image ... I0311 20:03:14.090753 14562 image.go:75] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0311 20:03:14.090767 14562 profile.go:148] Saving config to /home/admin/.minikube/profiles/minikube/config.json ... 
I0311 20:03:14.090827 14562 cache.go:107] acquiring lock: {Name:mk3eead60cba61410ae49b1e54d9c0dca6f5dd56 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.090827 14562 cache.go:107] acquiring lock: {Name:mk81d85d684f5a2b17ea50dff9df877a85cc0168 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091336 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7 exists I0311 20:03:14.091360 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7" took 541.272µs I0311 20:03:14.091378 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7 succeeded I0311 20:03:14.091399 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 exists I0311 20:03:14.091410 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6" took 591.995µs I0311 20:03:14.091421 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 succeeded I0311 20:03:14.091445 14562 cache.go:107] acquiring lock: {Name:mk901ce71043db284a7d0597784121cf7be816f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091495 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 exists I0311 20:03:14.091506 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0" took 64.301µs I0311 20:03:14.091519 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 succeeded I0311 20:03:14.091534 14562 cache.go:107] acquiring lock: {Name:mk0a035f22316a17275f37bfbdaa6a91fd691c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091576 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.6 exists I0311 20:03:14.091585 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.6" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.6" took 53.551µs I0311 20:03:14.091595 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.6 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.6 succeeded I0311 20:03:14.091611 14562 cache.go:107] acquiring lock: {Name:mk762c70a310aef0878233ed678f0ec30948dce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.090911 14562 cache.go:107] acquiring lock: 
{Name:mkc5f6386834a87af382bb2887c5d9a0f4bc0e4e Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091661 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.3 exists I0311 20:03:14.091671 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.3" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.3" took 761.302µs I0311 20:03:14.091682 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.3 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.3 succeeded I0311 20:03:14.091700 14562 cache.go:107] acquiring lock: {Name:mka59612f2112af099d925c48b3776e78942aaf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091736 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1 exists I0311 20:03:14.091746 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1" took 51.607µs I0311 20:03:14.091758 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1 succeeded I0311 20:03:14.091793 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5 exists I0311 20:03:14.091803 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5" took 193.955µs I0311 20:03:14.091813 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5 succeeded I0311 20:03:14.091849 14562 cache.go:107] acquiring lock: {Name:mkac1ba37d459ae0daae481e48c23c5f93ec3275 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091855 14562 cache.go:107] acquiring lock: {Name:mk0a03310f4afb37a37a89e7cd934cda3da6a4ec Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091937 14562 cache.go:107] acquiring lock: {Name:mkc42178d735fc62000ade8518993c42d68f25ad Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:14.091996 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.3 exists I0311 20:03:14.092008 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.3" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.3" took 76.599µs I0311 20:03:14.092023 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.3 -> 
/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.3 succeeded I0311 20:03:14.091923 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.3 exists I0311 20:03:14.092042 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.3" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.3" took 195.924µs I0311 20:03:14.092064 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.3 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.3 succeeded I0311 20:03:14.092082 14562 cache.go:115] /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.3 exists I0311 20:03:14.092112 14562 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.3" -> "/home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.3" took 261.091µs I0311 20:03:14.092146 14562 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.3 -> /home/admin/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.3 succeeded I0311 20:03:14.092172 14562 cache.go:87] Successfully saved all images to host disk. I0311 20:03:14.114233 14562 cache.go:148] Downloading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 to local cache I0311 20:03:14.114342 14562 image.go:59] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local cache directory I0311 20:03:14.114354 14562 image.go:62] Found registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local cache directory, skipping pull I0311 20:03:14.114361 14562 image.go:103] registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in cache, skipping pull I0311 20:03:14.114373 14562 cache.go:151] successfully saved registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 as a tarball I0311 20:03:14.114379 14562 cache.go:162] Loading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 from local cache I0311 20:03:15.531826 14562 cache.go:165] successfully loaded registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 from cached tarball I0311 20:03:15.531848 14562 cache.go:208] Successfully downloaded all kic artifacts I0311 20:03:15.531877 14562 start.go:313] acquiring machines lock for minikube: {Name:mk3ec630fb73d7fe5feb0335080e0da3bf1e5c1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:15.531954 14562 start.go:317] acquired machines lock for "minikube" in 62.212µs I0311 20:03:15.531968 14562 start.go:93] Skipping create...Using existing machine configuration I0311 
20:03:15.531976 14562 fix.go:55] fixHost starting: I0311 20:03:15.532150 14562 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} W0311 20:03:15.555079 14562 cli_runner.go:180] docker container inspect minikube --format={{.State.Status}} returned with exit code 1 I0311 20:03:15.555126 14562 fix.go:108] recreateIfNeeded on minikube: state= err=unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:

I0311 20:03:31.447449 14562 cli_runner.go:133] Run: docker rm -f -v minikube W0311 20:03:31.469954 14562 cli_runner.go:180] docker rm -f -v minikube returned with exit code 1 W0311 20:03:31.470086 14562 delete.go:139] delete failed (probably ok) I0311 20:03:31.470095 14562 fix.go:120] Sleeping 1 second for extra luck! I0311 20:03:32.470170 14562 start.go:126] createHost starting for "" (driver="docker") I0311 20:03:32.471770 14562 out.go:203] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... 🔥 Creating docker container (CPUs=2, Memory=2200MB) ...| I0311 20:03:32.472224 14562 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0311 20:03:32.472253 14562 client.go:168] LocalClient.Create starting I0311 20:03:32.472313 14562 main.go:130] libmachine: Reading certificate data from /home/admin/.minikube/certs/ca.pem I0311 20:03:32.472344 14562 main.go:130] libmachine: Decoding PEM data... I0311 20:03:32.472364 14562 main.go:130] libmachine: Parsing certificate... I0311 20:03:32.472423 14562 main.go:130] libmachine: Reading certificate data from /home/admin/.minikube/certs/cert.pem I0311 20:03:32.472448 14562 main.go:130] libmachine: Decoding PEM data... I0311 20:03:32.472464 14562 main.go:130] libmachine: Parsing certificate... I0311 20:03:32.472680 14562 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0311 20:03:32.497005 14562 network_create.go:67] Found existing network {name:minikube subnet:0xc000f10180 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500} I0311 20:03:32.497030 14562 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container I0311 20:03:32.497069 14562 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0311 20:03:32.518924 14562 cli_runner.go:133] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0311 20:03:32.540224 14562 oci.go:102] Successfully created a docker volume minikube I0311 20:03:32.540275 14562 cli_runner.go:133] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib \ W0311 20:03:44.000291 14562 notify.go:58] Error getting json from minikube version url: error with http GET for endpoint https://storage.googleapis.com/minikube/releases-v2.json: Get "https://storage.googleapis.com/minikube/releases-v2.json": dial tcp 142.251.42.240:443: i/o timeout

stderr: Error: No such container: minikube I0311 20:03:52.595104 14562 fix.go:57] fixHost completed within 37.063129411s I0311 20:03:52.595114 14562 start.go:80] releasing machines lock for "minikube", held for 37.063151664s W0311 20:03:52.595138 14562 start.go:570] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib: exit status 125 stdout:

stderr: Unable to find image 'registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2' locally docker: Error response from daemon: Get "https://registry.cn-hangzhou.aliyuncs.com/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers). See 'docker run --help'.

I0311 20:03:52.595306 14562 start.go:585] Will try again in 5 seconds ... I0311 20:03:57.596634 14562 start.go:313] acquiring machines lock for minikube: {Name:mk3ec630fb73d7fe5feb0335080e0da3bf1e5c1b Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0311 20:03:57.596737 14562 start.go:317] acquired machines lock for "minikube" in 76.14µs I0311 20:03:57.596754 14562 start.go:93] Skipping create...Using existing machine configuration I0311 20:03:57.596764 14562 fix.go:55] fixHost starting: I0311 20:03:57.596942 14562 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}} W0311 20:03:57.620486 14562 cli_runner.go:180] docker container inspect minikube --format={{.State.Status}} returned with exit code 1 I0311 20:03:57.620525 14562 fix.go:108] recreateIfNeeded on minikube: state= err=unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:

stderr: Error: No such container: minikube I0311 20:04:02.126706 14562 oci.go:673] temporary error: container minikube status is but expect it to be exited I0311 20:04:02.126728 14562 retry.go:31] will retry after 2.179080774s: couldn't verify container is exited. %v: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:

I0311 20:04:16.206574 14562 cli_runner.go:133] Run: docker rm -f -v minikube W0311 20:04:16.228340 14562 cli_runner.go:180] docker rm -f -v minikube returned with exit code 1 W0311 20:04:16.228495 14562 delete.go:139] delete failed (probably ok) I0311 20:04:16.228505 14562 fix.go:120] Sleeping 1 second for extra luck! I0311 20:04:17.228547 14562 start.go:126] createHost starting for "" (driver="docker") I0311 20:04:17.230147 14562 out.go:203] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... 🔥 Creating docker container (CPUs=2, Memory=2200MB) ...| I0311 20:04:17.230265 14562 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0311 20:04:17.230284 14562 client.go:168] LocalClient.Create starting I0311 20:04:17.230330 14562 main.go:130] libmachine: Reading certificate data from /home/admin/.minikube/certs/ca.pem I0311 20:04:17.230356 14562 main.go:130] libmachine: Decoding PEM data... I0311 20:04:17.230372 14562 main.go:130] libmachine: Parsing certificate... I0311 20:04:17.230412 14562 main.go:130] libmachine: Reading certificate data from /home/admin/.minikube/certs/cert.pem I0311 20:04:17.230431 14562 main.go:130] libmachine: Decoding PEM data... I0311 20:04:17.230442 14562 main.go:130] libmachine: Parsing certificate... I0311 20:04:17.230612 14562 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0311 20:04:17.254364 14562 network_create.go:67] Found existing network {name:minikube subnet:0xc000bf3c50 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500} I0311 20:04:17.254390 14562 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container I0311 20:04:17.254423 14562 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0311 20:04:17.276222 14562 cli_runner.go:133] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0311 20:04:17.297493 14562 oci.go:102] Successfully created a docker volume minikube I0311 20:04:17.297533 14562 cli_runner.go:133] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib

stderr: Error: No such container: minikube | I0311 20:04:34.558653 14562 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube / W0311 20:04:34.582159 14562 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1 I0311 20:04:34.582237 14562 retry.go:31] will retry after 469.714889ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 stdout:

stderr: Unable to find image 'registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2' locally docker: Error response from daemon: Get "https://registry.cn-hangzhou.aliyuncs.com/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers). See 'docker run --help'.

I0311 20:04:37.151390 14562 out.go:176]

W0311 20:04:37.151461 14562 out.go:241] ❌ Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib: exit status 125 stdout:

stderr: Unable to find image 'registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2' locally docker: Error response from daemon: Get "https://registry.cn-hangzhou.aliyuncs.com/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).

Operating System

Other

Driver

Docker

zhan9san commented 2 years ago

Hi @flc666star

docker: Error response from daemon: Get "https://registry.cn-hangzhou.aliyuncs.com/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).

Based on the log attached, it seems there was a connection issue to registry.cn-hangzhou.aliyuncs.com.

Could you please explain the network topology in detail, especially what you mean by offline and online?

Step 2: I copied the .minikube directory from machine A to machine B.

I am glad to help with this issue, but I do not fully understand why you copied the .minikube directory.

flc666star commented 2 years ago

Thanks a lot for your help. Sorry, I didn't describe the issue clearly the day before yesterday. Here, "offline" means the machine is not connected to the Internet, so it can't pull images from external registries, and "online" means the machine is connected to the Internet. In my experiment, machine A is connected to the Internet, and I started minikube on it successfully. I then copied the .minikube directory from machine A to machine B, which is not connected to the Internet. However, on machine B minikube failed to start with the same start command. Machine B produced a large amount of failure output; I removed some repeated log messages and uploaded the rest as the attachment.

Based on the log messages from machine B, the image registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 was successfully loaded from the cached tarball. Why did it then complain "Unable to find image 'registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2' locally"?

zhan9san commented 2 years ago

Two Minikube bugs

  1. The digest is lost when the image is loaded from a tar file, but Minikube still tries to start the container by its sha256 digest, so the kicbase container does not start.

     How to work around it: pass --base-image (tag only, no digest) to minikube start; a quick check for the missing digest is sketched after this list.

     --base-image=registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30

  2. The aliyuncs image layout is different from the gcr.io image layout.

The image layout issue could be fixed by #13782.
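
To confirm the missing digest on machine B, you can inspect the locally loaded kicbase image with the Docker CLI. This is only a diagnostic sketch (it assumes the tag used in this thread); an empty digest after loading an image from a tarball is expected Docker behaviour:

    # After the image is loaded from a tarball, the DIGEST column is usually <none>,
    # because Docker only records repo digests for images pulled from a registry.
    docker images --digests | grep kicbase

    # RepoDigests is empty for a tarball-loaded image, so a reference of the form
    # name@sha256:... cannot be resolved locally and forces a registry pull,
    # which times out on an offline machine.
    docker image inspect --format '{{.RepoDigests}}' \
      registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30

Referencing the image by tag only, as the --base-image workaround above does, lets Docker find it locally.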

How to start Minikube offline

Machine A has internet access, whereas machine B does not.

  1. Cache image on machine A

    There is no need to start a Minikube instance, so we pass --download-only=true to the minikube start command:

    minikube start --download-only=true --driver=docker --image-mirror-country=cn --base-image=registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30
  2. Copy the image from A to B

    Create the .minikube directory on machine B

    mkdir ~/.minikube

    Copy cached files from A to B

    Log in to machine A and run the following command

    scp -r .minikube/cache vagrant@hostname-B:~/.minikube/
  3. Start Minikube on B

    minikube start --driver=docker --image-mirror-country=cn --base-image=registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.30
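
As a quick sanity check after step 3 (a suggested verification only, assuming kubectl is installed on machine B and a minikube version such as v1.25.x that provides the image subcommand):

    # Cluster and node health
    minikube status
    kubectl get nodes -o wide

    # Images inside the minikube node; they should all come from the
    # registry.cn-hangzhou.aliyuncs.com mirror, with nothing pulled over the Internet
    minikube image ls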
flc666star commented 2 years ago

Thank you very much.

Your suggestion proved to be right, and I solved the problem successfully.

klaases commented 2 years ago

PR #13787 has been approved and is awaiting further approval and merge.