kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube failing to start when running with podman machine on Mac #13618

Closed jeesmon closed 2 years ago

jeesmon commented 2 years ago

What Happened?

minikube is not able to connect to ssh port

podman ps -a

CONTAINER ID  IMAGE                                COMMAND     CREATED         STATUS             PORTS                                                                                                                                 NAMES
beb0e2098bda  gcr.io/k8s-minikube/kicbase:v0.0.29              26 seconds ago  Up 26 seconds ago  127.0.0.1:32913->22/tcp, 127.0.0.1:33795->2376/tcp, 127.0.0.1:38943->5000/tcp, 127.0.0.1:33543->8443/tcp, 127.0.0.1:40069->32443/tcp  minikube

Log:

minikube start --driver=podman --alsologtostderr
I0215 13:48:58.857426   43393 out.go:297] Setting OutFile to fd 1 ...
I0215 13:48:58.857726   43393 out.go:349] isatty.IsTerminal(1) = true
I0215 13:48:58.857736   43393 out.go:310] Setting ErrFile to fd 2...
I0215 13:48:58.857744   43393 out.go:349] isatty.IsTerminal(2) = true
I0215 13:48:58.857867   43393 root.go:315] Updating PATH: /Users/jjacob/.minikube/bin
I0215 13:48:58.858458   43393 out.go:304] Setting JSON to false
I0215 13:48:58.961235   43393 start.go:112] hostinfo: {"hostname":"Jeesmons-MacBook-Pro.local","uptime":242764,"bootTime":1644708174,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.1","kernelVersion":"21.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"8d244d14-e225-516e-984c-386cf7c50487"}
W0215 13:48:58.961427   43393 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0215 13:48:58.966727   43393 out.go:176] 😄  minikube v1.25.1 on Darwin 12.1
😄  minikube v1.25.1 on Darwin 12.1
I0215 13:48:58.966962   43393 notify.go:174] Checking for updates...
I0215 13:48:58.967370   43393 driver.go:344] Setting default libvirt URI to qemu:///system
I0215 13:48:59.350511   43393 podman.go:121] podman version: 3.4.4
I0215 13:48:59.354917   43393 out.go:176] ✨  Using the podman (experimental) driver based on user configuration
✨  Using the podman (experimental) driver based on user configuration
I0215 13:48:59.354960   43393 start.go:280] selected driver: podman
I0215 13:48:59.354972   43393 start.go:795] validating driver "podman" against <nil>
I0215 13:48:59.354991   43393 start.go:806] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 13:48:59.355070   43393 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
I0215 13:48:59.355294   43393 cli_runner.go:133] Run: podman system info --format json
I0215 13:48:59.650560   43393 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:1314058240 MemTotal:2061131776 OCIRuntime:{Name:crun Package:crun-1.4.1-1.fc35.x86_64 Path:/usr/bin/crun Version:crun version 1.4.1
commit: 802613580a3f25a88105ce4b78126202fef51dfb
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:amd64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.15.17-200.fc35.x86_64 Os:linux Rootless:false Uptime:20h 56m 31.42s (Approximately 0.83 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:false SupportsDType:true UsingMetacopy:true} ImageStore:{Number:2} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0215 13:48:59.650859   43393 start_flags.go:286] no existing cluster config was found, will generate one from the flags
I0215 13:48:59.651186   43393 start_flags.go:367] Using suggested 1965MB memory alloc based on sys=16384MB, container=1965MB
I0215 13:48:59.651347   43393 start_flags.go:796] Wait components to verify : map[apiserver:true system_pods:true]
I0215 13:48:59.651385   43393 cni.go:93] Creating CNI manager for ""
I0215 13:48:59.651397   43393 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0215 13:48:59.651408   43393 start_flags.go:300] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:1965 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0215 13:48:59.657735   43393 out.go:176] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0215 13:48:59.657816   43393 cache.go:120] Beginning downloading kic base image for podman with docker
I0215 13:48:59.672699   43393 out.go:176] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0215 13:48:59.672772   43393 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime docker
I0215 13:48:59.672847   43393 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
I0215 13:48:59.672875   43393 preload.go:148] Found local preload: /Users/jjacob/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-amd64.tar.lz4
I0215 13:48:59.672906   43393 cache.go:57] Caching tarball of preloaded images
I0215 13:48:59.673165   43393 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
I0215 13:48:59.673192   43393 preload.go:174] Found /Users/jjacob/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0215 13:48:59.673200   43393 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory, skipping pull
I0215 13:48:59.673217   43393 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in cache, skipping pull
I0215 13:48:59.673255   43393 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on docker
I0215 13:48:59.673257   43393 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b as a tarball
I0215 13:48:59.673803   43393 profile.go:147] Saving config to /Users/jjacob/.minikube/profiles/minikube/config.json ...
I0215 13:48:59.673907   43393 lock.go:35] WriteFile acquiring /Users/jjacob/.minikube/profiles/minikube/config.json: {Name:mkb4e2029cf49ff2f73dfec955a059090fefb841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
E0215 13:48:59.674493   43393 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
I0215 13:48:59.674514   43393 cache.go:208] Successfully downloaded all kic artifacts
I0215 13:48:59.674565   43393 start.go:313] acquiring machines lock for minikube: {Name:mkefdc66e0cc4d67ebf49a592786673632695685 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0215 13:48:59.674818   43393 start.go:317] acquired machines lock for "minikube" in 230.297µs
I0215 13:48:59.674901   43393 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:1965 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:} &{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
I0215 13:48:59.675316   43393 start.go:126] createHost starting for "" (driver="podman")
I0215 13:48:59.687097   43393 out.go:203] 🔥  Creating podman container (CPUs=2, Memory=1965MB) ...
🔥  Creating podman container (CPUs=2, Memory=1965MB) ...
I0215 13:48:59.687510   43393 start.go:160] libmachine.API.Create for "minikube" (driver="podman")
I0215 13:48:59.687557   43393 client.go:168] LocalClient.Create starting
I0215 13:48:59.687794   43393 main.go:130] libmachine: Reading certificate data from /Users/jjacob/.minikube/certs/ca.pem
I0215 13:48:59.687906   43393 main.go:130] libmachine: Decoding PEM data...
I0215 13:48:59.687941   43393 main.go:130] libmachine: Parsing certificate...
I0215 13:48:59.688091   43393 main.go:130] libmachine: Reading certificate data from /Users/jjacob/.minikube/certs/cert.pem
I0215 13:48:59.688195   43393 main.go:130] libmachine: Decoding PEM data...
I0215 13:48:59.688224   43393 main.go:130] libmachine: Parsing certificate...
I0215 13:48:59.689206   43393 cli_runner.go:133] Run: podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
- W0215 13:48:59.979667   43393 cli_runner.go:180] podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}" returned with exit code 125
I0215 13:48:59.979919   43393 network_create.go:254] running [podman network inspect minikube] to gather additional debugging logs...
I0215 13:48:59.979988   43393 cli_runner.go:133] Run: podman network inspect minikube
/ W0215 13:49:00.248470   43393 cli_runner.go:180] podman network inspect minikube returned with exit code 125
I0215 13:49:00.248518   43393 network_create.go:257] error running [podman network inspect minikube]: podman network inspect minikube: exit status 125
stdout:
[]

stderr:
Error: error inspecting object: no such network "minikube"
I0215 13:49:00.248548   43393 network_create.go:259] output of [podman network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: error inspecting object: no such network "minikube"

** /stderr **
I0215 13:49:00.248789   43393 cli_runner.go:133] Run: podman network inspect podman --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
| I0215 13:49:00.503527   43393 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000100e0] misses:0}
I0215 13:49:00.503586   43393 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0215 13:49:00.503608   43393 network_create.go:106] attempt to create podman network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
I0215 13:49:00.503769   43393 cli_runner.go:133] Run: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube
- W0215 13:49:00.760103   43393 cli_runner.go:180] podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube returned with exit code 125
W0215 13:49:00.760177   43393 network_create.go:98] failed to create podman network minikube 192.168.49.0/24, will retry: network gateway is taken
I0215 13:49:00.761006   43393 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000100e0] amended:false}} dirty:map[] misses:0}
I0215 13:49:00.761034   43393 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0215 13:49:00.761886   43393 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000100e0] amended:true}} dirty:map[192.168.49.0:0xc0000100e0 192.168.58.0:0xc000138a58] misses:0}
I0215 13:49:00.761931   43393 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0215 13:49:00.761948   43393 network_create.go:106] attempt to create podman network minikube 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0 ...
I0215 13:49:00.762149   43393 cli_runner.go:133] Run: podman network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 --label=created_by.minikube.sigs.k8s.io=true minikube
/ I0215 13:49:01.036705   43393 network_create.go:90] podman network minikube 192.168.58.0/24 created
I0215 13:49:01.036774   43393 kic.go:106] calculated static IP "192.168.58.2" for the "minikube" container
I0215 13:49:01.037327   43393 cli_runner.go:133] Run: podman ps -a --format {{.Names}}
\ I0215 13:49:01.292174   43393 cli_runner.go:133] Run: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
- I0215 13:49:01.593019   43393 oci.go:102] Successfully created a podman volume minikube
I0215 13:49:01.593447   43393 cli_runner.go:133] Run: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.29 -d /var/lib
- I0215 13:49:04.719142   43393 cli_runner.go:186] Completed: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.29 -d /var/lib: (3.125557818s)
I0215 13:49:04.719199   43393 oci.go:106] Successfully prepared a podman volume minikube
I0215 13:49:04.719237   43393 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime docker
I0215 13:49:04.719299   43393 kic.go:179] Starting extracting preloaded images to volume ...
I0215 13:49:04.719543   43393 cli_runner.go:133] Run: podman run --rm --entrypoint /usr/bin/tar -v /Users/jjacob/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29 -I lz4 -xf /preloaded.tar -C /extractDir
\ I0215 13:49:22.118491   43393 cli_runner.go:186] Completed: podman run --rm --entrypoint /usr/bin/tar -v /Users/jjacob/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29 -I lz4 -xf /preloaded.tar -C /extractDir: (17.398691804s)
I0215 13:49:22.118555   43393 kic.go:188] duration metric: took 17.399192 seconds to extract preloaded images to volume
I0215 13:49:22.118821   43393 cli_runner.go:133] Run: podman info --format "'{{json .SecurityOptions}}'"
- W0215 13:49:22.472442   43393 cli_runner.go:180] podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0215 13:49:22.472792   43393 cli_runner.go:133] Run: podman run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.58.2 --volume minikube:/var:exec --memory-swap=1965mb --memory=1965mb --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29
\ I0215 13:49:23.721412   43393 cli_runner.go:186] Completed: podman run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.58.2 --volume minikube:/var:exec --memory-swap=1965mb --memory=1965mb --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29: (1.248441773s)
I0215 13:49:23.721613   43393 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Running}}
\ I0215 13:49:24.151750   43393 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
- I0215 13:49:24.414501   43393 cli_runner.go:133] Run: podman exec minikube stat /var/lib/dpkg/alternatives/iptables
/ I0215 13:49:24.767300   43393 oci.go:281] the created container "minikube" has a running status.
I0215 13:49:24.767357   43393 kic.go:210] Creating ssh key for kic: /Users/jjacob/.minikube/machines/minikube/id_rsa...
- I0215 13:49:24.844447   43393 kic_runner.go:191] podman (temp): /Users/jjacob/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0215 13:49:24.856336   43393 kic_runner.go:276] Run: /usr/local/bin/podman exec -i minikube tee /home/docker/.ssh/authorized_keys
- I0215 13:49:25.237110   43393 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
/ I0215 13:49:25.509126   43393 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0215 13:49:25.509169   43393 kic_runner.go:114] Args: [podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
/ I0215 13:49:26.327064   43393 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
/ I0215 13:49:26.745931   43393 machine.go:88] provisioning docker machine ...
I0215 13:49:26.745998   43393 ubuntu.go:169] provisioning hostname "minikube"
I0215 13:49:26.746325   43393 cli_runner.go:133] Run: podman version --format {{.Version}}
| I0215 13:49:27.428134   43393 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
\ I0215 13:49:27.742216   43393 main.go:130] libmachine: Using SSH client type: native
I0215 13:49:27.742766   43393 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x439e7e0] 0x43a18c0 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
I0215 13:49:27.742795   43393 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0215 13:49:27.751888   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62893->127.0.0.1:32913: read: connection reset by peer
/ I0215 13:49:30.762261   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62900->127.0.0.1:32913: read: connection reset by peer
\ I0215 13:49:33.772015   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62904->127.0.0.1:32913: read: connection reset by peer
| I0215 13:49:36.782445   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62909->127.0.0.1:32913: read: connection reset by peer
- I0215 13:49:39.793503   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62914->127.0.0.1:32913: read: connection reset by peer
| I0215 13:49:42.803247   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62921->127.0.0.1:32913: read: connection reset by peer
- I0215 13:49:45.814135   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62925->127.0.0.1:32913: read: connection reset by peer
| I0215 13:49:48.829617   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62926->127.0.0.1:32913: read: connection reset by peer
- I0215 13:49:51.839323   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62930->127.0.0.1:32913: read: connection reset by peer
| I0215 13:49:54.850193   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62931->127.0.0.1:32913: read: connection reset by peer
- I0215 13:49:57.860084   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62932->127.0.0.1:32913: read: connection reset by peer
| I0215 13:50:00.869454   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62935->127.0.0.1:32913: read: connection reset by peer
- I0215 13:50:03.879371   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62936->127.0.0.1:32913: read: connection reset by peer
\ I0215 13:50:06.889714   43393 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:62937->127.0.0.1:32913: read: connection reset by peer

If I use --listen-address=$(podman machine ssh 2>/dev/null -- ifconfig enp0s2 | grep "inet\b" | awk '{ print $2 }'), the connection works fine.

According to @afbjorklund, --listen-address should not be needed, since gvproxy is supposed to tunnel the port from the host to the guest: https://github.com/containers/podman/issues/8016#issuecomment-1040576898

minikube start --driver=podman --listen-address=$(podman machine ssh 2>/dev/null -- ifconfig enp0s2 | grep "inet\b" | awk '{ print $2 }') --alsologtostderr

....
🌟  Enabled addons: storage-provisioner, default-storageclass
I0215 13:46:10.813275   42972 addons.go:417] enableAddons completed in 6.317064361s
I0215 13:46:11.141673   42972 start.go:493] kubectl: 1.23.3, cluster: 1.23.1 (minor skew: 0)
I0215 13:46:11.146953   42972 out.go:176] 🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Attach the log file

Couldn't get the logs in the failed scenario:

minikube logs --file=log.txt
E0215 13:51:46.174518   43513 logs.go:271] Failed to list containers for "kube-apiserver": docker: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:62992->127.0.0.1:32913: read: connection reset by peer
E0215 13:51:48.577645   43513 logs.go:271] Failed to list containers for "etcd": docker: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:63003->127.0.0.1:32913: read: connection reset by peer

Operating System

No response

Driver

No response

afbjorklund commented 2 years ago

Getting the same results on Linux, after commenting out the things that make it run locally and having it run remotely instead.

The SSH server seems to be up and running, so I'm not sure why minikube is not able to connect to it? A broken "gvproxy"?

CONTAINER ID  IMAGE                                                        COMMAND     CREATED        STATUS            PORTS                                                                                                                                 NAMES
3fe81650229a  gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531              3 minutes ago  Up 3 minutes ago  127.0.0.1:40515->22/tcp, 127.0.0.1:38433->2376/tcp, 127.0.0.1:33179->5000/tcp, 127.0.0.1:33511->8443/tcp, 127.0.0.1:38091->32443/tcp  minikube

libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50262->127.0.0.1:40515: read: connection reset by peer

podman version 3.4.2

afbjorklund commented 2 years ago

After running podman machine ssh into the VM and copying over the minikube keys, the connection has no issues:

$ podman machine ssh
Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Warning: Permanently added '[localhost]:38575' (ECDSA) to the list of known hosts.
Fedora CoreOS 35.20220131.2.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos

[root@localhost ~]# ssh -i id_rsa -p 43059 docker@localhost
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

docker@minikube:~$ 

But trying to connect from the host, using the Podman networking, results in a connection failure:

$ ssh -i /home/anders/.minikube/machines/minikube/id_rsa -p 43059 docker@localhost
kex_exchange_identification: read: Connection reset by peer
$ podman --remote ps
CONTAINER ID  IMAGE                                                        COMMAND     CREATED        STATUS            PORTS                                                                                                                                 NAMES
889577071f93  gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531              3 minutes ago  Up 3 minutes ago  127.0.0.1:43059->22/tcp, 127.0.0.1:38871->2376/tcp, 127.0.0.1:43283->5000/tcp, 127.0.0.1:43767->8443/tcp, 127.0.0.1:43001->32443/tcp  minikube
afbjorklund commented 2 years ago

Minikube is listening on localhost (127.0.0.1), but that doesn't work with gvproxy, which always dials the ethernet address (192.168.127.2):

tcpproxy: for incoming conn 127.0.0.1:35320, error dialing "192.168.127.2:46463": connect tcp 192.168.127.2:46463: connection was refused"

tcp        0      0 127.0.0.1:46463         0.0.0.0:*               LISTEN      4485/conmon        

So for podman machine, the listen address needs to be changed: stop publishing to localhost and use another address instead.
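
A minimal way to see that mismatch from both sides (just a sketch; the forwarded port 32913 is taken from the podman ps listing at the top of this issue and will differ on every run):

# From the host: the handshake is reset, because gvproxy dials the VM's
# ethernet address (192.168.127.2) while the port is only published on
# 127.0.0.1 inside the podman machine VM.
$ ssh-keyscan -p 32913 127.0.0.1

# From inside the podman machine VM the same port answers, since the
# 127.0.0.1 binding is local there.
$ podman machine ssh -- ssh-keyscan -p 32913 127.0.0.1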

jeesmon commented 2 years ago

Thanks @afbjorklund for looking into the details

afbjorklund commented 2 years ago

@jeesmon So the workaround, as you suggested, is to run with --listen-address=192.168.127.2.

That's the (hardcoded) address that gvproxy gives every machine, similar to qemu slirp's 10.0.2.15.
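
That address can be double-checked from the host (a sketch; it assumes the machine's main interface is enp0s2, as in the workaround command above):

$ podman machine ssh -- ip -4 -o addr show enp0s2
... inet 192.168.127.2/24 ...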

Apparently you can't publish to 127.0.0.1 with podman-remote, even if that works with podman...

you are in a maze of twisty little passages, all alike


$ podman run -d -p 127.0.0.1:8080:80 nginx
$ curl http://localhost:8080
<p><em>Thank you for using nginx.</em></p>

$ podman --remote run -d -p 127.0.0.1:8080:80 nginx
$ curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused

$ podman --remote run -d -p 192.168.127.2:8080:80 nginx
$ curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused

$ podman --remote run -d -p 0.0.0.0:8080:80 nginx
$ curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused

afbjorklund commented 2 years ago

podman machine init --cpus 2 --memory 2048 --disk-size 20
podman machine start
podman system connection default podman-machine-default-root

minikube start --driver=podman --listen-address=192.168.127.2

🔥  Creating podman container (CPUs=2, Memory=1965MB) ...
💡  minikube is not meant for production use. You are opening non-local traffic
❗  Listening to 192.168.127.2. This is not recommended and can cause a security vulnerability. Use at your own risk
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
afbjorklund commented 2 years ago

The problem with the workaround seems to be the wrong address in the kubeconfig.

Unable to connect to the server: dial tcp 192.168.58.2:8443: connect: no route to host

There is no obvious way to get to the minikube network from the host.

[root@localhost ~]# podman network ls
NETWORK ID    NAME        VERSION     PLUGINS
2f259bab93aa  podman      0.4.0       bridge,podman-machine,portmap,firewall,tuning
5086431107ca  minikube    0.4.0       bridge,portmap,firewall,tuning,dnsname,podman-machine

The cluster itself looks happy enough, if you access it from the inside:

[root@localhost ~]# podman exec minikube env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.23.3/kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost ~]# podman exec minikube env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.23.3/kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   15m   v1.23.3

One has to hit the VM port, on the host's localhost, to get tunneled through.

$ podman --remote ps
CONTAINER ID  IMAGE                                                        COMMAND     CREATED         STATUS             PORTS                                                                                                                                               NAMES
718204796f7b  gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531              21 minutes ago  Up 21 minutes ago  0.0.0.0:34351->22/tcp, 192.168.127.2:44189->2376/tcp, 192.168.127.2:39619->5000/tcp, 192.168.127.2:42267->8443/tcp, 192.168.127.2:43683->32443/tcp  minikube

~/.kube/config

    server: https://127.0.0.1:42267
  name: minikube

$ minikube kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:42267
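
One way to repoint an existing kubeconfig at that forwarded port is kubectl config set-cluster (a sketch only; the port 42267 is taken from the listing above and changes on every start, and TLS verification will only succeed if 127.0.0.1 is covered by the apiserver certificate):

$ kubectl config set-cluster minikube --server=https://127.0.0.1:42267
$ kubectl --context minikube get nodes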

But minikube ssh seems to be working, since it has 127.0.0.1 hardcoded (actually it has DOCKER_HOST and CONTAINER_HOST hardcoded, but still).

afbjorklund commented 2 years ago

Hacks to run podman --remote and podman machine on developer machines:

diff --git a/pkg/drivers/kic/oci/cli_runner.go b/pkg/drivers/kic/oci/cli_runner.go
index 9294eaeb5..9ebe740ca 100644
--- a/pkg/drivers/kic/oci/cli_runner.go
+++ b/pkg/drivers/kic/oci/cli_runner.go
@@ -74,7 +74,7 @@ func (rr RunResult) Output() string {

 // PrefixCmd adds any needed prefix (such as sudo) to the command
 func PrefixCmd(cmd *exec.Cmd) *exec.Cmd {
-       if cmd.Args[0] == Podman && runtime.GOOS == "linux" { // want sudo when not running podman-remote
+       if cmd.Args[0] == Podman && runtime.GOOS == "linux" && false { // want sudo when not running podman-remote
                cmdWithSudo := exec.Command("sudo", append([]string{"-n"}, cmd.Args...)...)
                cmdWithSudo.Env = cmd.Env
                cmdWithSudo.Dir = cmd.Dir
@@ -83,6 +83,7 @@ func PrefixCmd(cmd *exec.Cmd) *exec.Cmd {
                cmdWithSudo.Stderr = cmd.Stderr
                cmd = cmdWithSudo
        }
+       cmd.Args = append([]string{"podman", "--remote"}, cmd.Args[1:]...)
        return cmd
 }

diff --git a/pkg/drivers/kic/oci/oci.go b/pkg/drivers/kic/oci/oci.go
index 5f5a84ce8..139308ccf 100644
--- a/pkg/drivers/kic/oci/oci.go
+++ b/pkg/drivers/kic/oci/oci.go
@@ -314,7 +314,7 @@ func createContainer(ociBin string, image string, opts ...createOpt) error {

        // to run nested container from privileged container in podman https://bugzilla.redhat.com/show_bug.cgi?id=1687713
        // only add when running locally (linux), when running remotely it needs to be configured on server in libpod.conf
-       if ociBin == Podman && runtime.GOOS == "linux" {
+       if ociBin == Podman && runtime.GOOS == "linux" && false {
                args = append(args, "--cgroup-manager", "cgroupfs")
        }

diff --git a/pkg/minikube/registry/drvs/podman/podman.go b/pkg/minikube/registry/drvs/podman/podman.go
index f92220db8..2971314b4 100644
--- a/pkg/minikube/registry/drvs/podman/podman.go
+++ b/pkg/minikube/registry/drvs/podman/podman.go
@@ -111,7 +111,7 @@ func status() registry.State {
        // Quickly returns an error code if service is not running
        cmd := exec.CommandContext(ctx, oci.Podman, "version", "--format", "{{.Server.Version}}")
        // Run with sudo on linux (local), otherwise podman-remote (as podman)
-       if runtime.GOOS == "linux" {
+       if runtime.GOOS == "linux" && false {
                cmd = exec.CommandContext(ctx, "sudo", "-k", "-n", oci.Podman, "version", "--format", "{{.Version}}")
                cmd.Env = append(os.Environ(), "LANG=C", "LC_ALL=C") // sudo is localized
        }
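
With those patches applied, minikube has to be rebuilt from the patched tree before testing (a sketch; assumes a Go toolchain and the repository's usual local build target):

$ make out/minikube
$ ./out/minikube start --driver=podman --listen-address=192.168.127.2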
k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

cpereira-aurora commented 2 years ago

I'm still running into this. Is this going to be addressed at some point? Does anyone still need it? Have new ways to work around this been found?

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/minikube/issues/13618#issuecomment-1213612408):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.