Closed: jeesmon closed this issue 2 years ago.
Getting the same results on Linux, after commenting out the things that make it run locally and having it run remotely instead.
The SSH server seems to be up and running, so I'm not sure why minikube is not able to connect to it? Broken "gvproxy"?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fe81650229a gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531 3 minutes ago Up 3 minutes ago 127.0.0.1:40515->22/tcp, 127.0.0.1:38433->2376/tcp, 127.0.0.1:33179->5000/tcp, 127.0.0.1:33511->8443/tcp, 127.0.0.1:38091->32443/tcp minikube
libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50262->127.0.0.1:40515: read: connection reset by peer
podman version 3.4.2
Running podman machine ssh into the VM and copying over the minikube keys, the SSH connection has no issues:
$ podman machine ssh
Connecting to vm podman-machine-default. To close connection, use `~.` or `exit`
Warning: Permanently added '[localhost]:38575' (ECDSA) to the list of known hosts.
Fedora CoreOS 35.20220131.2.0
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/tag/coreos
[root@localhost ~]# ssh -i id_rsa -p 43059 docker@localhost
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
docker@minikube:~$
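For reference, one way the key could have been copied into the VM (a sketch; the machine SSH port is the one from the session above, and the identity file path is podman's default, both assumptions about this particular setup):

# copy the minikube SSH key into the podman machine VM
scp -P 38575 -i ~/.ssh/podman-machine-default \
    ~/.minikube/machines/minikube/id_rsa root@localhost: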
But trying to connect from the host, using the Podman networking, results in connection failure:
$ ssh -i /home/anders/.minikube/machines/minikube/id_rsa -p 43059 docker@localhost
kex_exchange_identification: read: Connection reset by peer
$ podman --remote ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
889577071f93 gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531 3 minutes ago Up 3 minutes ago 127.0.0.1:43059->22/tcp, 127.0.0.1:38871->2376/tcp, 127.0.0.1:43283->5000/tcp, 127.0.0.1:43767->8443/tcp, 127.0.0.1:43001->32443/tcp minikube
Minikube is listening on localhost (127.0.0.1), but that doesn't work with gvproxy, which always dials the ethernet address (192.168.127.2):
tcpproxy: for incoming conn 127.0.0.1:35320, error dialing "192.168.127.2:46463": connect tcp 192.168.127.2:46463: connection was refused"
tcp 0 0 127.0.0.1:46463 0.0.0.0:* LISTEN 4485/conmon
So the listen address needs to be changed when running against podman machine, to stop publishing to localhost and publish on another address instead.
Thanks @afbjorklund for looking into the details
@jeesmon So the workaround, as you suggested, is to run with --listen-address=192.168.127.2.
That's the (hardcoded) address that gvproxy gives every machine, similar to qemu slirp's 10.0.2.15.
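One way to double-check which address the machine actually got (a generic check, not from the thread; the interface name enp0s2 is taken from the report at the bottom and may differ):

# print the VM's IPv4 address on the gvproxy-backed interface; expect 192.168.127.2
podman machine ssh -- ip -4 -brief addr show enp0s2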
Apparently you can't publish to 127.0.0.1 with podman-remote, even if that works with podman...
you are in a maze of twisty little passages, all alike
podman run -d -p 127.0.0.1:8080:80 nginx
curl http://localhost:8080
<p><em>Thank you for using nginx.</em></p>
podman --remote run -d -p 127.0.0.1:8080:80 nginx
curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
podman --remote run -d -p 192.168.127.2:8080:80 nginx
curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
podman --remote run -d -p 0.0.0.0:8080:80 nginx
curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
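A sanity check that isolates the problem to the host-side forwarding is to curl the published port from inside the podman machine VM itself (a sketch, not from the thread; assumes curl is available in the VM and that the address matches whatever was passed to -p):

# the published port should answer inside the VM even when it is
# unreachable from the host's localhost
podman machine ssh -- curl -s http://127.0.0.1:8080 | grep title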
podman machine init --cpus 2 --memory 2048 --disk-size 20
podman machine start
podman system connection default podman-machine-default-root
minikube start --driver=podman --listen-address=192.168.127.2
🔥  Creating podman container (CPUs=2, Memory=1965MB) ...
💡  minikube is not meant for production use. You are opening non-local traffic
❗  Listening to 192.168.127.2. This is not recommended and can cause a security vulnerability. Use at your own risk
🐳  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
The problem with the workaround seems to be the wrong server address in the kubeconfig:
Unable to connect to the server: dial tcp 192.168.58.2:8443: connect: no route to host
There is no obvious way to reach the minikube network from the host.
[root@localhost ~]# podman network ls
NETWORK ID NAME VERSION PLUGINS
2f259bab93aa podman 0.4.0 bridge,podman-machine,portmap,firewall,tuning
5086431107ca minikube 0.4.0 bridge,portmap,firewall,tuning,dnsname,podman-machine
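A quick way to confirm which address kubectl on the host is dialing (a generic check, not from the original comment):

# print the API server address of the current kubeconfig context; with the
# workaround it points at 192.168.58.2:8443, which is only routable from
# inside the podman machine VM
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'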
The cluster itself looks happy enough, if you access it from the inside:
[root@localhost ~]# podman exec minikube env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.23.3/kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost ~]# podman exec minikube env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.23.3/kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 15m v1.23.3
One has to hit the VM port, on the host's localhost, to get tunneled through.
$ podman --remote ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
718204796f7b gcr.io/k8s-minikube/kicbase-builds:v0.0.29-1644344181-13531 21 minutes ago Up 21 minutes ago 0.0.0.0:34351->22/tcp, 192.168.127.2:44189->2376/tcp, 192.168.127.2:39619->5000/tcp, 192.168.127.2:42267->8443/tcp, 192.168.127.2:43683->32443/tcp minikube
~/.kube/config
server: https://127.0.0.1:42267
name: minikube
minikube kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:42267
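Since only the ports published on the host's loopback are reachable, the tunnel can also be exercised directly (a sketch; the port is the one from the kubeconfig above, and -k just skips certificate verification for this quick check):

# hit the apiserver through the gvproxy tunnel on the host's loopback;
# /version is served to unauthenticated clients on a default cluster
curl -k https://127.0.0.1:42267/version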
But minikube ssh seems to be working, since it has 127.0.0.1 hardcoded (actually it has DOCKER_HOST and CONTAINER_HOST hardcoded, but the effect is the same here).
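For context, the remote connection that podman --remote (and thus minikube) ends up using looks roughly like this (illustrative values; the real ones come from podman system connection list on the developer machine):

# point the podman remote client at the machine's root connection
# (host, port, key and socket path are assumptions for this setup)
export CONTAINER_HOST=ssh://root@localhost:38575/run/podman/podman.sock
export CONTAINER_SSHKEY=$HOME/.ssh/podman-machine-default
podman --remote ps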
Hacks to run podman --remote and podman machine on developer machines:
diff --git a/pkg/drivers/kic/oci/cli_runner.go b/pkg/drivers/kic/oci/cli_runner.go
index 9294eaeb5..9ebe740ca 100644
--- a/pkg/drivers/kic/oci/cli_runner.go
+++ b/pkg/drivers/kic/oci/cli_runner.go
@@ -74,7 +74,7 @@ func (rr RunResult) Output() string {
// PrefixCmd adds any needed prefix (such as sudo) to the command
func PrefixCmd(cmd *exec.Cmd) *exec.Cmd {
- if cmd.Args[0] == Podman && runtime.GOOS == "linux" { // want sudo when not running podman-remote
+ if cmd.Args[0] == Podman && runtime.GOOS == "linux" && false { // want sudo when not running podman-remote
cmdWithSudo := exec.Command("sudo", append([]string{"-n"}, cmd.Args...)...)
cmdWithSudo.Env = cmd.Env
cmdWithSudo.Dir = cmd.Dir
@@ -83,6 +83,7 @@ func PrefixCmd(cmd *exec.Cmd) *exec.Cmd {
cmdWithSudo.Stderr = cmd.Stderr
cmd = cmdWithSudo
}
+ cmd.Args = append([]string{"podman", "--remote"}, cmd.Args[1:]...)
return cmd
}
diff --git a/pkg/drivers/kic/oci/oci.go b/pkg/drivers/kic/oci/oci.go
index 5f5a84ce8..139308ccf 100644
--- a/pkg/drivers/kic/oci/oci.go
+++ b/pkg/drivers/kic/oci/oci.go
@@ -314,7 +314,7 @@ func createContainer(ociBin string, image string, opts ...createOpt) error {
// to run nested container from privileged container in podman https://bugzilla.redhat.com/show_bug.cgi?id=1687713
// only add when running locally (linux), when running remotely it needs to be configured on server in libpod.conf
- if ociBin == Podman && runtime.GOOS == "linux" {
+ if ociBin == Podman && runtime.GOOS == "linux" && false {
args = append(args, "--cgroup-manager", "cgroupfs")
}
diff --git a/pkg/minikube/registry/drvs/podman/podman.go b/pkg/minikube/registry/drvs/podman/podman.go
index f92220db8..2971314b4 100644
--- a/pkg/minikube/registry/drvs/podman/podman.go
+++ b/pkg/minikube/registry/drvs/podman/podman.go
@@ -111,7 +111,7 @@ func status() registry.State {
// Quickly returns an error code if service is not running
cmd := exec.CommandContext(ctx, oci.Podman, "version", "--format", "{{.Server.Version}}")
// Run with sudo on linux (local), otherwise podman-remote (as podman)
- if runtime.GOOS == "linux" {
+ if runtime.GOOS == "linux" && false {
cmd = exec.CommandContext(ctx, "sudo", "-k", "-n", oci.Podman, "version", "--format", "{{.Version}}")
cmd.Env = append(os.Environ(), "LANG=C", "LC_ALL=C") // sudo is localized
}
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- mark this issue as fresh with /remove-lifecycle stale
- mark it as rotten with /lifecycle rotten
- close it with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I'm still running into this. Is this going to be addressed at some point? Does anyone still need it? Have any new workarounds been found?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- mark this issue as fresh with /remove-lifecycle rotten
- close it with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- reopen this issue with /reopen
- mark it as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
What Happened?
minikube is not able to connect to the ssh port.
Log:
If I use --listen-address=$(podman machine ssh 2>/dev/null -- ifconfig enp0s2 | grep "inet\b" | awk '{ print $2 }') (the full invocation is sketched at the end of this report), the connection works fine. According to @afbjorklund, --listen-address is not needed, as gvproxy is supposed to tunnel the port from the host to the guest: https://github.com/containers/podman/issues/8016#issuecomment-1040576898

Attach the log file
Couldn't get a log in the failed scenario.
Operating System
No response
Driver
No response
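For reference, the full workaround invocation described above would look roughly like this (a sketch; enp0s2 is the interface name inside this particular podman machine VM and may differ):

# start minikube against podman machine, publishing ports on the VM's gvproxy address
minikube start --driver=podman \
  --listen-address=$(podman machine ssh 2>/dev/null -- ifconfig enp0s2 | grep "inet\b" | awk '{ print $2 }')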