pierrebeaucamp opened this issue 3 years ago (status: Open)
`minikube tunnel` will normally give you a random port, as seen in `minikube service`.
$ minikube ip
192.168.39.199
$ minikube service hello-minikube1
|-----------|-----------------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------------|-------------|-----------------------------|
| default | hello-minikube1 | 8080 | http://192.168.39.199:31737 |
|-----------|-----------------|-------------|-----------------------------|
🎉 Opening service default/hello-minikube1 in default browser...
With this driver (with a real IP), it is quite similar to running the service with NodePort.
There is no good way to fake the broken network yet, it's driver.NeedsPortForward
Sorry, I'm not sure if I follow you.
I'm not referring to `minikube service`; I'm trying to use `minikube tunnel` as described in the docs. When running minikube using the `hyperkit` or `virtualbox` drivers, it behaves as expected: `minikube tunnel` seems to create routes from my local machine to the cluster CIDR, so I can access various `LoadBalancer` services using a cluster-internal IP (which shows up under the `externalIPs` field of a service and is distinctly different from the IP returned by `minikube ip`).
However, this seems to be broken when using Parallels as the driver. I'm assuming this is a bug since I don't see any remarks on the driver page.
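For illustration, the distinction between the two kinds of IPs can be sketched in Go: the route that `minikube tunnel` adds covers the cluster's service CIDR, so a `LoadBalancer` external IP falls inside it while the VM IP from `minikube ip` does not. The helper name is mine; the CIDR and addresses are the ones that appear later in this thread.

```go
package main

import (
	"fmt"
	"net"
)

// clusterReachable reports whether ip would be covered by the route that
// `minikube tunnel` adds for the cluster's service CIDR (10.96.0.0/12 is
// the default; both values here are just for illustration).
func clusterReachable(cidr, ip string) bool {
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return network.Contains(net.ParseIP(ip))
}

func main() {
	// A LoadBalancer external IP lies inside the service CIDR...
	fmt.Println(clusterReachable("10.96.0.0/12", "10.98.10.229")) // true
	// ...while the VM IP returned by `minikube ip` does not.
	fmt.Println(clusterReachable("10.96.0.0/12", "10.211.55.12")) // false
}
```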
Unfortunately, I'm not sure what you mean by "There is no good way to fake the broken network yet, it's `driver.NeedsPortForward`".
Normally `minikube tunnel` is used by the Docker Desktop driver, which is hard to emulate on other systems such as Linux:
// NeedsPortForward returns true if driver is unable to provide direct IP connectivity
func NeedsPortForward(name string) bool {
if !IsKIC(name) {
return false
}
if oci.IsExternalDaemonHost(name) {
return true
}
// Docker for Desktop
return runtime.GOOS == "darwin" || runtime.GOOS == "windows" || detect.IsMicrosoftWSL()
}
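In other words, `NeedsPortForward` selects between the two tunnel strategies: SSH port-forwarding where the host cannot reach the VM network directly, and a host route to the cluster CIDR where it can. A minimal stand-alone sketch of that dispatch (the helper is hypothetical, not minikube's actual API):

```go
package main

import "fmt"

// tunnelMode picks between the two tunnel strategies: adding a host
// route to the cluster CIDR, or SSH port-forwarding. needsPortForward
// stands in for the result of minikube's driver.NeedsPortForward.
func tunnelMode(needsPortForward bool) string {
	if needsPortForward {
		// e.g. Docker Desktop on macOS/Windows, where the container
		// network is not directly reachable from the host
		return "ssh port-forward"
	}
	// e.g. VM drivers with a host-reachable IP (virtualbox, hyperkit)
	return "route to cluster CIDR"
}

func main() {
	fmt.Println(tunnelMode(true))  // ssh port-forward
	fmt.Println(tunnelMode(false)) // route to cluster CIDR
}
```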
It could be something specific to parallels on macos, so please disregard the comment if it is indeed working for other drivers.
If you run with `--alsologtostderr`, you should see the individual operations being performed (both for start and for tunnel).
@pierrebeaucamp: what I meant was that there are two ways of setting up the tunnels, either routes or ssh.
It seems that the networking in the Parallels driver is different from that in the other drivers; maybe it is missing the device?
For VirtualBox we have one `eth0` NAT interface (without an IP) and one `eth1` HostOnly interface (with the IP).
So the ssh connection is tunneled in through the first one, and the ingress is routed in through the second one.
If there is indeed no virtual network interface, then `minikube tunnel` could have used the other way (port forwarding).
But I haven't verified it myself.
Here is the more detailed output of the `tunnel` command:
I0507 15:01:10.020313 38452 out.go:278] Setting OutFile to fd 1 ...
I0507 15:01:10.020477 38452 out.go:330] isatty.IsTerminal(1) = false
I0507 15:01:10.020483 38452 out.go:291] Setting ErrFile to fd 2...
I0507 15:01:10.020489 38452 out.go:330] isatty.IsTerminal(2) = false
I0507 15:01:10.020595 38452 root.go:317] Updating PATH: /Users/pierrebeaucamp/.minikube/bin
I0507 15:01:10.021039 38452 mustload.go:65] Loading cluster: minikube
I0507 15:01:10.022107 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list minikube --output status --no-header
I0507 15:01:10.197020 38452 host.go:66] Checking if "minikube" exists ...
I0507 15:01:10.197346 38452 api_server.go:146] Checking apiserver status ...
I0507 15:01:10.197424 38452 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0507 15:01:10.197486 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list minikube --output status --no-header
I0507 15:01:10.365427 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list -i minikube
I0507 15:01:10.546209 38452 main.go:126] libmachine: Found lease: 10.211.55.12 for MAC: 001C42F7473B, expiring at 1620415646, leased for 1800 s.
I0507 15:01:10.546235 38452 main.go:126] libmachine: Found IP lease: 10.211.55.12 for MAC address 001C42F7473B
I0507 15:01:10.546250 38452 sshutil.go:53] new ssh client: &{IP:10.211.55.12 Port:22 SSHKeyPath:/Users/pierrebeaucamp/.minikube/machines/minikube/id_rsa Username:docker}
I0507 15:01:10.603017 38452 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/3898/cgroup
I0507 15:01:10.608955 38452 api_server.go:162] apiserver freezer: "10:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8931d01e4ffe12d0fd564ad495b05402.slice/docker-f0ea4b4a23c4e8e637c6cc2527fe4e0bcbdd3cfab550b4d9d2415d6ce1a9df9a.scope"
I0507 15:01:10.609101 38452 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8931d01e4ffe12d0fd564ad495b05402.slice/docker-f0ea4b4a23c4e8e637c6cc2527fe4e0bcbdd3cfab550b4d9d2415d6ce1a9df9a.scope/freezer.state
I0507 15:01:10.618556 38452 api_server.go:184] freezer state: "THAWED"
I0507 15:01:10.618760 38452 api_server.go:221] Checking apiserver healthz at https://10.211.55.12:8443/healthz ...
I0507 15:01:10.634930 38452 api_server.go:241] https://10.211.55.12:8443/healthz returned 200: ok
I0507 15:01:10.634962 38452 tunnel.go:57] Checking for tunnels to cleanup...
I0507 15:01:10.645676 38452 host.go:66] Checking if "minikube" exists ...
I0507 15:01:10.645952 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list minikube --output status --no-header
I0507 15:01:10.819259 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list minikube --output status --no-header
I0507 15:01:10.989681 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list -i minikube
I0507 15:01:11.172307 38452 main.go:126] libmachine: Found lease: 10.211.55.12 for MAC: 001C42F7473B, expiring at 1620415646, leased for 1800 s.
I0507 15:01:11.172336 38452 main.go:126] libmachine: Found IP lease: 10.211.55.12 for MAC address 001C42F7473B
I0507 15:01:11.172421 38452 tunnel_manager.go:71] Setting up tunnel...
I0507 15:01:11.172489 38452 tunnel_manager.go:81] Started minikube tunnel.
I0507 15:01:16.176956 38452 host.go:66] Checking if "minikube" exists ...
I0507 15:01:16.177745 38452 main.go:126] libmachine: executing: /usr/local/bin/prlctl list minikube --output status --no-header
I0507 15:01:16.362174 38452 route_darwin.go:187] preparing DNS forwarding config in "/etc/resolver/cluster.local": nameserver 10.96.0.10 search_order 1
I0507 15:01:16.422098 38452 route_darwin.go:226] DNS forwarding now configured in "/etc/resolver/cluster.local"
I0507 15:01:16.422315 38452 route_darwin.go:49] Adding route for CIDR 10.96.0.0/12 to gateway 10.211.55.12
I0507 15:01:16.422414 38452 route_darwin.go:51] About to run command: [sudo route -n add 10.96.0.0/12 10.211.55.12]
I0507 15:01:16.460580 38452 route_darwin.go:58] add net 10.96.0.0: gateway 10.211.55.12
I0507 15:01:16.487184 38452 loadbalancer_patcher.go:80] hello-minikube1 is type LoadBalancer.
I0507 15:01:16.497987 38452 loadbalancer_patcher.go:122] Patched hello-minikube1 with IP 10.98.10.229
Status:
	machine: minikube
	pid: 38452
	route: 10.96.0.0/12 -> 10.211.55.12
	minikube: Running
	services: [hello-minikube1]
    errors:
		minikube: no errors
		router: no errors
		loadbalancer emulator: no errors
The route seems to be added correctly to the route table (selected output from `netstat -rn`):
10.96/12 10.211.55.12 UGSc bridge1
10.211.55/24 link#12 UC bridge1 !
10.211.55.12 0.1c.42.f7.47.3b UHLWIi bridge1 1159
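Note that `netstat -rn` on macOS abbreviates route destinations in BSD style: `10.96/12` is shorthand for `10.96.0.0/12`. A small sketch that expands such shorthand back into a full four-octet CIDR (the helper name is mine):

```go
package main

import (
	"fmt"
	"strings"
)

// expandBSD pads an abbreviated BSD route destination like "10.96/12"
// out to a full four-octet CIDR ("10.96.0.0/12").
func expandBSD(dest string) string {
	parts := strings.SplitN(dest, "/", 2)
	octets := strings.Split(parts[0], ".")
	for len(octets) < 4 {
		octets = append(octets, "0")
	}
	addr := strings.Join(octets, ".")
	if len(parts) == 2 {
		return addr + "/" + parts[1]
	}
	return addr
}

func main() {
	fmt.Println(expandBSD("10.96/12"))     // 10.96.0.0/12
	fmt.Println(expandBSD("10.211.55/24")) // 10.211.55.0/24
}
```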
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Unfortunately, none of our maintainers have an easy way to test minikube with the parallels driver, and therefore it's being maintained on a best effort basis. If anyone has a way to replicate this and fix it, we would love the help.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/assign
I can investigate this; I will update what I find.
Parallels is fully supported as a virtualization driver for Minikube on Intel-based Mac computers. However, it is important to note that using Parallels as a virtualization driver with the ARM64 architecture is not currently supported.
Parallels notes that the driver does not support Apple Silicon yet; see here. They recommend using the `docker` driver. I would need to test this on my 2015 MacBook Pro, which has an Intel processor. It's currently loaned out to someone, so I will attempt to start the driver in a week or so.
As the title says, somehow `minikube tunnel` doesn't seem to work with the Parallels driver. I'm using minikube v1.19.0 (commit 15cede53bdc5fe242228853e737333b09d4336b5) installed through homebrew, and Parallels 16.5.0 (49183). I don't have this issue when using `hyperkit` as the driver.

Steps to reproduce the issue:
1. minikube start --driver=parallels
2. minikube tunnel (in a separate terminal)
3. kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
4. kubectl expose deployment hello-minikube1 --type=LoadBalancer --port=8080
5. curl -v <EXTERNAL-IP>:8080
Full output of failed command:
Full output of `minikube start` command used: