Closed · afbjorklund closed this 2 months ago
Some code is duplicated between pkg/drivers/hyperkit, pkg/drivers/qemu and pkg/drivers/vfkit.
It could be considered (later!) to refactor this into the "common" part under pkg/drivers directly...
This is a very exciting PR! Thank you for working on it. However, it is not working for me on macOS M1 (arm64):
I0815 15:38:39.293989 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ca:96:e9:43:b9:43 ID:1,ca:96:e9:43:b9:43 Lease:0x646e4c8e}
I0815 15:38:41.295145 63339 main.go:141] libmachine: Attempt 28
I0815 15:38:41.295195 63339 main.go:141] libmachine: Searching for 76:70:dd:1:99:e2 in /var/db/dhcpd_leases ...
I0815 15:38:41.295760 63339 main.go:141] libmachine: Found 34 entries in /var/db/dhcpd_leases!
I0815 15:38:41.295820 63339 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:7e:b:7b:f9:40:ae ID:1,7e:b:7b:f9:40:ae Lease:0x66aac1c1}
I0815 15:38:41.295843 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.34 HWAddress:8a:37:82:8d:50:f0 ID:1,8a:37:82:8d:50:f0 Lease:0x668d8331}
I0815 15:38:41.295863 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.33 HWAddress:a:94:4f:77:5f:83 ID:1,a:94:4f:77:5f:83 Lease:0x66649746}
I0815 15:38:41.295888 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.32 HWAddress:ea:cc:a6:94:f1:58 ID:1,ea:cc:a6:94:f1:58 Lease:0x66623678}
I0815 15:38:41.296047 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.31 HWAddress:f2:8f:83:8:6e:9e ID:1,f2:8f:83:8:6e:9e Lease:0x664e2690}
I0815 15:38:41.296081 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.30 HWAddress:2e:ee:ae:b:15:c5 ID:1,2e:ee:ae:b:15:c5 Lease:0x663eb6d8}
I0815 15:38:41.296119 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.29 HWAddress:6:8c:7d:21:84:7a ID:1,6:8c:7d:21:84:7a Lease:0x663eaece}
I0815 15:38:41.296142 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.28 HWAddress:a6:91:9b:ff:1a:c7 ID:1,a6:91:9b:ff:1a:c7 Lease:0x663eaddc}
I0815 15:38:41.296164 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.27 HWAddress:b6:15:58:c6:cf:c3 ID:1,b6:15:58:c6:cf:c3 Lease:0x663e9198}
I0815 15:38:41.296187 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.26 HWAddress:26:12:63:a2:cf:af ID:1,26:12:63:a2:cf:af Lease:0x6633c7b9}
I0815 15:38:41.296208 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.25 HWAddress:12:da:f7:50:c2:27 ID:1,12:da:f7:50:c2:27 Lease:0x66327ee3}
I0815 15:38:41.296230 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.24 HWAddress:82:1e:1:e2:6c:bb ID:1,82:1e:1:e2:6c:bb Lease:0x662c2929}
I0815 15:38:41.296272 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.23 HWAddress:ca:94:90:33:e1:19 ID:1,ca:94:90:33:e1:19 Lease:0x66280395}
I0815 15:38:41.296295 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.22 HWAddress:ce:1e:18:a6:2c:dc ID:1,ce:1e:18:a6:2c:dc Lease:0x6616bb16}
I0815 15:38:41.296316 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.21 HWAddress:fe:f9:df:2b:92:fd ID:1,fe:f9:df:2b:92:fd Lease:0x660dbec9}
I0815 15:38:41.296337 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.20 HWAddress:3a:5c:a4:60:23:dd ID:1,3a:5c:a4:60:23:dd Lease:0x660dbde6}
I0815 15:38:41.296359 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.19 HWAddress:9e:5:a1:b4:2:91 ID:1,9e:5:a1:b4:2:91 Lease:0x65678416}
I0815 15:38:41.296380 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.18 HWAddress:e:9d:a6:2:8:1d ID:1,e:9d:a6:2:8:1d Lease:0x65144f88}
I0815 15:38:41.296400 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.17 HWAddress:a6:2a:9b:f7:48:39 ID:1,a6:2a:9b:f7:48:39 Lease:0x65049be4}
I0815 15:38:41.296423 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.16 HWAddress:46:c9:95:80:5a:b8 ID:1,46:c9:95:80:5a:b8 Lease:0x64cb063b}
I0815 15:38:41.296444 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.15 HWAddress:6e:e7:c0:43:e5:e9 ID:1,6e:e7:c0:43:e5:e9 Lease:0x64c883fe}
I0815 15:38:41.296466 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.14 HWAddress:46:17:5b:20:e9:c1 ID:1,46:17:5b:20:e9:c1 Lease:0x64c883c3}
I0815 15:38:41.296486 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.13 HWAddress:be:4a:e3:61:22:87 ID:1,be:4a:e3:61:22:87 Lease:0x64c88180}
I0815 15:38:41.296506 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.12 HWAddress:ca:b6:91:14:9a:8f ID:1,ca:b6:91:14:9a:8f Lease:0x64c88095}
I0815 15:38:41.296527 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.11 HWAddress:92:c5:4a:cb:14:16 ID:1,92:c5:4a:cb:14:16 Lease:0x64add676}
I0815 15:38:41.296549 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.10 HWAddress:ea:dc:7e:e:52:e5 ID:1,ea:dc:7e:e:52:e5 Lease:0x649b2d96}
I0815 15:38:41.296570 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.9 HWAddress:3e:57:12:b8:5a:54 ID:1,3e:57:12:b8:5a:54 Lease:0x6491d686}
I0815 15:38:41.296591 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.8 HWAddress:82:cd:d8:88:50:40 ID:1,82:cd:d8:88:50:40 Lease:0x648c992a}
I0815 15:38:41.296612 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.7 HWAddress:fa:f3:26:f5:4:4d ID:1,fa:f3:26:f5:4:4d Lease:0x648a0ada}
I0815 15:38:41.296633 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ae:62:32:86:d0:b ID:1,ae:62:32:86:d0:b Lease:0x648a0af2}
I0815 15:38:41.296653 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ce:d5:f8:63:17:83 ID:1,ce:d5:f8:63:17:83 Lease:0x64821151}
I0815 15:38:41.296674 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:b:8f:6e:42:1 ID:1,f6:b:8f:6e:42:1 Lease:0x646e4e12}
I0815 15:38:41.296695 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:1a:67:dd:6a:ac:b8 ID:1,1a:67:dd:6a:ac:b8 Lease:0x645a7ae8}
I0815 15:38:41.296718 63339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ca:96:e9:43:b9:43 ID:1,ca:96:e9:43:b9:43 Lease:0x646e4c8e}
^C
Here is the full log: lastStart.txt
Update: applying the same known-issue fix as for QEMU resolves it: https://minikube.sigs.k8s.io/docs/drivers/qemu/#socket_vmnet
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd
$ time mk start -d vfkit
😄  minikube v1.33.1 on Darwin 14.6.1 (arm64)
✨  Using the vfkit (experimental) driver based on user configuration
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🔥  Creating vfkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
E0815 16:06:55.442876   71805 start.go:132] Unable to get host IP: HostIP not yet implemented for "vfkit" driver
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

real	0m27.299s
user	0m1.218s
sys	0m1.105s
I do see one log-spam error though:
start.go:132] Unable to get host IP: HostIP not yet implemented for "vfkit" driver
These are hardcoded on the driver name in minikube, so another case is needed for vfkit.
I think it can be copied from hyperkit.
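A minimal sketch of what that extra case could look like (the switch shape and the derive-the-.1-gateway logic here are my assumptions for illustration, not the actual minikube code):

package main

import (
	"fmt"
	"net"
)

// hostIP is illustrative: it adds a "vfkit" case next to "hyperkit",
// deriving the host address as the .1 gateway of the guest's /24 subnet.
func hostIP(driverName, machineIP string) (net.IP, error) {
	switch driverName {
	case "hyperkit", "vfkit":
		ip := net.ParseIP(machineIP).To4()
		if ip == nil {
			return nil, fmt.Errorf("invalid machine IP %q", machineIP)
		}
		gw := make(net.IP, len(ip))
		copy(gw, ip)
		gw[3] = 1
		return gw, nil
	default:
		return nil, fmt.Errorf("HostIP not yet implemented for %q driver", driverName)
	}
}

func main() {
	ip, err := hostIP("vfkit", "192.168.105.2")
	fmt.Println(ip, err) // 192.168.105.1 <nil>
}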
/ok-to-test
There are no tests for vfkit, but nothing else should have broken either
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 50.2s | 49.1s |
| enable ingress | 16.3s | 17.6s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.7s | 22.0s |
| enable ingress | 13.4s | 13.4s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.9s | 20.6s |
| enable ingress | 36.6s | 39.8s |
+----------------+----------+---------------------+
Here are the top 10 failed tests in each environment with the lowest flake rate.
Environment | Test Name | Flake Rate
---|---|---
Besides those, the following environments also have failed tests:
Hyperkit_macOS: 13 failed (gopogh)
KVM_Linux_crio: 31 failed (gopogh)
KVM_Linux: 13 failed (gopogh)
Docker_Linux_crio: 2 failed (gopogh)
QEMU_macOS: 156 failed (gopogh)
Docker_Linux_containerd_arm64: 1 failed (gopogh)
Docker_Linux_crio_arm64: 2 failed (gopogh)
To see the flake rates of all tests by environment, click here.
It would be great to add functional and integration tests for this in a follow-up PR, so we can find any bugs and accelerate this out of experimental.
I would like the refactoring of the hyperkit driver, including the iso "ExtractFile", to go in a separate PR.
But that can be done first, if you don't want the technical debt of simply copying the files to this driver.
diff -rs pkg/drivers/hyperkit/iso.go pkg/drivers/vfkit/iso.go
17c17
< package hyperkit
---
> package vfkit
diff -rs pkg/drivers/hyperkit/iso_test.go pkg/drivers/vfkit/iso_test.go
17c17
< package hyperkit
---
> package vfkit
Files pkg/drivers/hyperkit/iso_test.iso and pkg/drivers/vfkit/iso_test.iso are identical
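For reference, a hedged sketch of what the shared helper could look like after such a refactor, assuming the hooklift/iso9660 reader that the hyperkit driver already uses (the "driverutil" package name is made up):

// Package driverutil is a hypothetical shared home for the byte-identical
// iso.go, so hyperkit and vfkit can import one copy instead of two.
package driverutil

import (
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/hooklift/iso9660"
)

// ExtractFile copies a single file (e.g. /boot/bzimage) out of an ISO image.
func ExtractFile(isoPath, srcPath, dstPath string) error {
	iso, err := os.Open(isoPath)
	if err != nil {
		return err
	}
	defer iso.Close()

	r, err := iso9660.NewReader(iso)
	if err != nil {
		return err
	}
	for {
		f, err := r.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		if filepath.Clean(f.Name()) != filepath.Clean(srcPath) {
			continue
		}
		dst, err := os.Create(dstPath)
		if err != nil {
			return err
		}
		defer dst.Close()
		_, err = io.Copy(dst, f.Sys().(io.Reader))
		return err
	}
	return fmt.Errorf("%s not found in %s", srcPath, isoPath)
}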
It (extracting the kernel) is only needed because the new vfkit "bootloader" didn't work with minikube.iso (?)
With the EFI bootloader, it is supposed to be able to read /boot/bzimage and /boot/initrd (on macOS 13)
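For context, a rough sketch of the two invocation styles (flag spellings follow crc-org/vfkit's doc/usage.md; the paths, and the use of exec here, are placeholders rather than the driver's actual code):

package main

import "os/exec"

func main() {
	// With the "linux" bootloader, the kernel and initrd must first be
	// extracted from minikube.iso (hence the iso.go ExtractFile helper):
	linuxBoot := exec.Command("vfkit", "--cpus", "2", "--memory", "4000",
		"--bootloader", `linux,kernel=/tmp/bzimage,initrd=/tmp/initrd,cmdline="console=hvc0"`)

	// With the "efi" bootloader (macOS 13+), vfkit is supposed to read
	// /boot/bzimage and /boot/initrd itself, so no extraction is needed:
	efiBoot := exec.Command("vfkit", "--cpus", "2", "--memory", "4000",
		"--bootloader", "efi,variable-store=/tmp/efi-vars,create")

	_ = linuxBoot
	_ = efiBoot
}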
The pidfile handling was mostly a quick hack from when the --pidfile/--daemonize flags were missing (unlike qemu).
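Roughly what the hack amounts to, as a sketch (function and file names are assumptions):

package vfkit

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
)

// startWithPidfile sketches the hack: start vfkit ourselves and record
// the pid, since there was no --pidfile/--daemonize to do it for us.
func startWithPidfile(pidfile string, args ...string) (*exec.Cmd, error) {
	cmd := exec.Command("vfkit", args...)
	if err := cmd.Start(); err != nil {
		return nil, fmt.Errorf("starting vfkit: %w", err)
	}
	// Stop/Kill/GetState later read this file back to signal the process.
	pid := strconv.Itoa(cmd.Process.Pid)
	if err := os.WriteFile(pidfile, []byte(pid), 0o600); err != nil {
		return nil, err
	}
	return cmd, nil
}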
Requests for those features can be directed to the CRC organization, which maintains "vfkit" (not the Podman team, actually):
https://github.com/crc-org/vfkit/issues
It is mostly developed by one developer
Note that the "libmachine" drivers are (potentially) separate gRPC programs.
They were not supposed to share too much code, beyond the libmachine API.
Now that it is all part of the monorepo (including the forked "external" drivers), there are possibilities to use more shared functions... The external binaries are now mostly used to separate runtime dependencies, as for hyperkit and kvm. For the vfkit and qemu drivers, that size hit / build problem is taken by the vfkit and qemu-system programs instead, and the driver can do a simple exec (it doesn't have to link with system libraries, as you needed for xHyve and libvirt).
There are pros and cons, but there is something simple (KISS) about just forking a "VirtualBox"-esque program and letting it handle all the system dependencies. Unfortunately we don't have such a universal program anymore, but c'est la vie. And in the odd case you were wondering: in the Lima framework it is the "hostagent" that does the system linking to vz. This is why it doesn't need the "vfkit" CLI, since it uses the "vz" API directly. With all the upstream issues...
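For reference, the contract in question, abridged (the full interface lives in github.com/docker/machine/libmachine/drivers):

package drivers

// Driver is the libmachine API that every driver program implements;
// beyond this, the drivers were not supposed to share much code.
type Driver interface {
	DriverName() string
	PreCreateCheck() error
	Create() error
	Start() error
	Stop() error
	Kill() error
	Remove() error
	GetIP() (string, error)
	GetSSHHostname() (string, error)
	GetURL() (string, error)
}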
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 48.5s | 47.6s |
| enable ingress | 15.2s | 16.2s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.8s | 22.7s |
| enable ingress | 13.1s | 13.3s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 20.7s | 21.2s |
| enable ingress | 36.3s | 37.0s |
+----------------+----------+---------------------+
Here are the top 10 failed tests in each environment with the lowest flake rate.
Environment | Test Name | Flake Rate |
---|---|---|
Docker_Linux_crio (3 failed) | TestFunctional/parallel/ImageCommands/ImageBuild(gopogh) | 3.61% (chart) |
Besides those, the following environments also have failed tests:
Docker_Linux_containerd_arm64: 1 failed (gopogh)
KVM_Linux_crio: 11 failed (gopogh)
Docker_Linux_crio_arm64: 2 failed (gopogh)
Docker_Cloud_Shell: 5 failed (gopogh)
QEMU_macOS: 156 failed (gopogh)
To see the flake rates of all tests by environment, click here.
The driver probably needs a "Network" setting, in preparation for future options such as vmnet or gvproxy, even if it starts out with only one option.
hyperkit has these (the network is always "vmnet", since it doesn't do cross-platform networking like qemu does):
VpnKitSock: cfg.HyperkitVpnKitSock,
VSockPorts: cfg.HyperkitVSockPorts,
qemu has these (the network is either "user" or "socket", as in https://wiki.qemu.org/Documentation/Networking):
Network: cc.Network,
SocketVMNetPath: cc.SocketVMnetPath,
SocketVMNetClientPath: cc.SocketVMnetClientPath,
vfkit options:
https://github.com/crc-org/vfkit/blob/main/doc/usage.md#networking
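Something like this, as a sketch (field and method names are assumptions, mirroring the qemu driver's pattern):

package vfkit

// Driver sketches a Network knob like qemu's cc.Network; names assumed.
type Driver struct {
	MachineName string
	Network     string // "nat" to start with; "vmnet"/"gvproxy" later
}

// networkArgs maps the setting to vfkit command-line flags.
func (d *Driver) networkArgs() []string {
	switch d.Network {
	case "", "nat":
		// vfkit's built-in NAT networking (doc/usage.md#networking)
		return []string{"--device", "virtio-net,nat"}
	default:
		// future options would map to other --device flags here
		return nil
	}
}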
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 51.2s | 51.7s |
| enable ingress | 15.7s | 15.1s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 23.2s | 22.4s |
| enable ingress | 11.8s | 12.9s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.2s | 21.3s |
| enable ingress | 34.4s | 33.9s |
+----------------+----------+---------------------+
Here are the top 10 failed tests in each environment with the lowest flake rate.
Environment | Test Name | Flake Rate |
---|---|---|
KVM_Linux_containerd (1 failed) | TestAddons/parallel/Ingress(gopogh) | 0.58% (chart) |
KVM_Linux (4 failed) | TestGvisorAddon(gopogh) | 0.00% (chart) |
KVM_Linux (4 failed) | TestNoKubernetes/serial/Start(gopogh) | 0.00% (chart) |
KVM_Linux (4 failed) | TestFunctional/serial/ComponentHealth(gopogh) | 0.58% (chart) |
KVM_Linux (4 failed) | TestNoKubernetes/serial/StartNoArgs(gopogh) | 5.29% (chart) |
Docker_Linux_containerd_arm64 (2 failed) | TestStartStop/group/old-k8s-version/serial/SecondStart(gopogh) | 45.98% (chart) |
Besides those, the following environments also have failed tests:
Docker_Linux_crio_arm64: 2 failed (gopogh)
Hyperkit_macOS: 18 failed (gopogh)
KVM_Linux_crio: 32 failed (gopogh)
Docker_Cloud_Shell: 5 failed (gopogh)
QEMU_macOS: 94 failed (gopogh)
Docker_Linux_crio: 2 failed (gopogh)
To see the flake rates of all tests by environment, click here.
It would be great to add functional and integration tests for this in a follow-up PR, so we can find any bugs and accelerate this out of experimental.
Added the standard "hack" integration script; not sure if there is anything more that needs to be done for testing?
@spowelljr
maybe hack/jenkins/minikube_set_pending.sh
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 47.5s | 48.8s |
| enable ingress | 15.4s | 15.1s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.9s | 23.0s |
| enable ingress | 13.5s | 13.3s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 22.0s | 22.0s |
| enable ingress | 37.8s | 39.3s |
+----------------+----------+---------------------+
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 50.2s | 48.9s |
| enable ingress | 15.0s | 16.2s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 22.3s | 22.3s |
| enable ingress | 12.3s | 11.5s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 22.0s | 21.7s |
| enable ingress | 39.3s | 37.3s |
+----------------+----------+---------------------+
Here are the top 10 failed tests in each environment with the lowest flake rate.
Environment | Test Name | Flake Rate |
---|---|---|
Docker_Linux_containerd_arm64 (2 failed) | TestStartStop/group/old-k8s-version/serial/SecondStart(gopogh) | 46.55% (chart) |
Docker_Linux_crio_arm64 (3 failed) | TestMultiControlPlane/serial/RestartCluster(gopogh) | 15.52% (chart) |
Besides those, the following environments also have failed tests:
Hyperkit_macOS: 16 failed (gopogh)
KVM_Linux_crio: 17 failed (gopogh)
QEMU_macOS: 156 failed (gopogh)
Docker_Linux_crio: 2 failed (gopogh)
Docker_Cloud_Shell: 5 failed (gopogh)
To see the flake rates of all tests by environment, click here.
@afbjorklund https://github.com/kubernetes/minikube/pull/19468 is merged and could be pulled into this PR
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 47.8s | 48.2s |
| enable ingress | 14.7s | 14.9s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.3s | 23.5s |
| enable ingress | 13.2s | 13.4s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 20.6s | 21.7s |
| enable ingress | 35.2s | 39.2s |
+----------------+----------+---------------------+
It would be great to add functional and integration tests for this in a follow-up PR, so we can find any bugs and accelerate this out of experimental.
Added the standard "hack" integration script; not sure if there is anything more that needs to be done for testing?
@spowelljr
maybe hack/jenkins/minikube_set_pending.sh
Yeah, what you have should be good for integration, might be good to add functional tests as well, but could be done in a follow up PR.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: afbjorklund, medyagh
The full list of commands accepted by this bot can be found here.
The pull request process is described here
kvm2 driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 52.0s | 51.8s |
| enable ingress | 16.2s | 15.7s |
+----------------+----------+---------------------+
docker driver with docker runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 22.3s | 22.8s |
| enable ingress | 12.3s | 12.0s |
+----------------+----------+---------------------+
docker driver with containerd runtime
+----------------+----------+---------------------+
| COMMAND | MINIKUBE | MINIKUBE (PR 19423) |
+----------------+----------+---------------------+
| minikube start | 21.8s | 21.1s |
| enable ingress | 39.0s | 39.1s |
+----------------+----------+---------------------+
This driver works beautifully for running disconnected clusters. The problem is how to make it work with a shared network like socket_vmnet, or maybe getting minikube or vfkit an Apple networking entitlement, like UTM has, so it can use a bridged network.
I'm not sure if UTM built locally can use a bridged network, or only UTM from the Apple App Store. But it would be awesome if we could get minikube or vfkit into the App Store so it can use good networking, without adding any code.
It uses the new Virtualization.framework from macOS 11, instead of the older Hypervisor.framework (hvf) in QEMU.
Closes #12826
The "hyperkit" binary was bundled with Docker for Mac, but is available stand-alone:
https://github.com/moby/hyperkit
The "vfkit" binary is bundled with Podman Desktop (macOS), and is available stand-alone:
https://github.com/crc-org/vfkit