Closed. adam-pye closed this issue 1 year ago.
These are the logs for the second re(start); the Docker error should be shown in the first start (or after a delete).
What's the best way to get these logs?
➜ ~ minikube logs --file=logs.txt
🤷 The control plane node must be running for this command
👉 To start a cluster, run: "minikube start"
So I ran the following to get the new logs attached:
➜ ~ minikube delete --all --purge
🔥 Deleting "minikube" in docker ...
🔥 Removing /Users/adampye/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/Users/adampye/.minikube]
📌 Kicbase images have not been deleted. To delete images run:
▪ docker rmi gcr.io/k8s-minikube/kicbase:v0.0.30 gcr.io/k8s-minikube/kicbase:<none>
followed by
minikube start --driver=docker --force-systemd=true
and minikube logs --file=logs.txt
Hopefully this now has the info you need
This seems to be the same or very similar to an issue I've had crop up in the past couple days. Also on M1 Apple Silicon running the same Minikube version.
Getting: iptables-save v1.8.4 (legacy): Cannot initialize: iptables who? (do you need to insmod?)
as well. Have uninstalled/reinstalled several times. Factory reset Docker. Nothing has worked. Google has not yet been of any help...
@afbjorklund any help with this would be great. I haven't found any solution yet.
@elitwilson I also tried with podman and got a similar error with iptables
But I've just tried using multipass, following this doc - https://www.materialized.eu/kubernetes/minikube-on-ubuntu-in-multipass-vm-on-m1-mac/
Works well so far
I've got the exact same issue. Checking the Docker container shows iptables-save v1.8.4 (legacy): Cannot initialize: iptables who? (do you need to insmod?).
I have the following details:
OS: Monterey 12.3.1
Chip: Apple M1 Pro
Docker Desktop: 4.7.1
minikube: v1.25.2
I've tried the top-voted answer here without any luck.
If the Docker (LinuxKit) kernel doesn't have the same iptables support as before, it is possible that something needs to change in minikube's "docker" driver.
Similar to:
You can try whether "kind" works?
I just tried kind create cluster, and the container also crashes with the following logs:
INFO: ensuring we can execute mount/umount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: detected cgroup v2
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: setting iptables to detected mode: legacy
iptables-save v1.8.7 (legacy): Cannot initialize: iptables who? (do you need to insmod?)
Same exact iptables issue. One common denominator seems to be the Apple M1 chip architecture. Any thoughts for a work-around in the short-term?
Perhaps verify the logic from https://github.com/kubernetes-sigs/kind/pull/2289/commits/45c5aa40234752cdb65fd353e553ff13f0945c13 and compare the iptables-legacy-save and iptables-nft-save output.
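For reference, the detection logic in that kind commit roughly boils down to picking whichever iptables backend already has rules installed. A minimal sketch of that idea (the helper name and the rule counts below are illustrative, not kind's actual code):

```shell
# Simplified sketch of iptables mode detection: prefer the backend that
# already has rules installed. The counts would normally come from e.g.
#   iptables-legacy-save | wc -l   and   iptables-nft-save | wc -l
choose_iptables_mode() {
  legacy_rules="$1"
  nft_rules="$2"
  if [ "$nft_rules" -gt "$legacy_rules" ]; then
    echo nft
  else
    echo legacy
  fi
}

choose_iptables_mode 0 12   # a host with rules only in the nft backend; prints "nft"
```

Comparing the two save commands' output inside the node container would show which backend the kernel actually supports.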
It seems the root cause of this issue is trying to run an amd64 system container on an arm64 system, possibly due to setting a global DOCKER_DEFAULT_PLATFORM instead of using the default.
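A quick way to check for that mismatch from a shell (a sketch; the messages are illustrative):

```shell
# Warn when an arm64 host is being asked to run amd64 containers via a
# globally exported DOCKER_DEFAULT_PLATFORM.
host_arch="$(uname -m)"                   # arm64 / aarch64 on Apple Silicon
requested="${DOCKER_DEFAULT_PLATFORM:-}"  # empty means Docker uses the host default
case "$host_arch" in
  arm64|aarch64)
    if [ "$requested" = "linux/amd64" ]; then
      echo "mismatch: amd64 containers forced on an arm64 host"
    else
      echo "ok: containers follow the host architecture"
    fi
    ;;
  *)
    echo "ok: not an arm64 host, no forced-emulation mismatch"
    ;;
esac
```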
It seems the root cause of this issue is trying to run an amd64 system container on an arm64 system, possibly due to setting a global DOCKER_DEFAULT_PLATFORM instead of using the default.
Wow. This was it for me. I had set DOCKER_DEFAULT_PLATFORM=linux/amd64. Deleted that variable and minikube starts up now.
Thanks so much, Anders.
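For anyone else hitting this, the fix above amounts to removing the forced platform before starting minikube (a sketch; the profile files to grep are just a hint for finding where the variable was exported):

```shell
# Clear the forced platform for the current shell session.
unset DOCKER_DEFAULT_PLATFORM

# If it keeps coming back in new shells, it is probably exported in a
# shell profile, e.g.:
#   grep -n DOCKER_DEFAULT_PLATFORM ~/.zshrc ~/.zprofile 2>/dev/null

# With the variable gone, the docker driver pulls the native arm64
# kicbase image, e.g.:
#   minikube start --driver=docker --force-systemd=true

echo "DOCKER_DEFAULT_PLATFORM='${DOCKER_DEFAULT_PLATFORM:-}'"   # prints DOCKER_DEFAULT_PLATFORM=''
```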
Hi @adam-pye, does the solution that @afbjorklund suggested above help in your case as well?
It seems the root cause of this issue is trying to run an amd64 system container on an arm64 system, possibly due to setting a global DOCKER_DEFAULT_PLATFORM instead of using the default.
Hi @adam-pye – is this issue still occurring? Are additional details available? If so, please feel free to re-open the issue by commenting with /reopen. This issue will be closed as additional information was unavailable and some time has passed.
Additional information that may be helpful:
Whether the issue occurs with the latest minikube release
The exact minikube start command line used
The full output of minikube logs (run minikube logs --file=logs.txt to create a log file)
Thank you for sharing your experience!
@Mohammad-Ali-Rauf: You can't reopen an issue/PR unless you authored it or you are a collaborator.
What Happened?
OS: Monterey 12.2.1
Chip: Apple M1 Pro
Docker Desktop: 4.6.0
minikube: v1.25.2 on Darwin 12.2.1 (arm64)
minikube start --driver=docker --force-systemd=true
Failing to start minikube. Seems to be an issue with iptables? I've tried running
minikube delete --all --purge
first, but it makes no difference. Any ideas?
Attach the log file
log.txt
Operating System
macOS (Default)
Driver
Docker