Closed: irizzant closed this issue 4 years ago
If I recall, this issue happens because CNI is enabled, but no CNI has been loaded into Kubernetes yet, which I guess needs to happen before CoreDNS deploys are successful.
The workaround I've seen others use is side-loading a CNI while minikube is starting, but that isn't very friendly. This is certainly something we'll need to address for multi-node.
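The side-loading workaround could look roughly like this (a hypothetical sketch, not an endorsed procedure; the flannel manifest URL is just an illustrative choice of CNI):

```shell
# Start minikube in the background; it will hang waiting for CoreDNS
# because no CNI is loaded yet.
minikube start --network-plugin=cni &

# Poll until the apiserver answers, then side-load a CNI manifest
# (flannel here, purely as an example).
until kubectl get nodes >/dev/null 2>&1; do sleep 2; done
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Let the backgrounded `minikube start` finish once CoreDNS comes up.
wait
```

As noted, this is awkward for users, which is why installing a CNI automatically is the better long-term fix.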
Also related: #7459
I have a feeling that if we install a CNI by default when the runtime is not Docker, similar to what we do for the Docker and Podman drivers, this would be solved:
https://github.com/kubernetes/minikube/blob/master/pkg/minikube/bootstrapper/kubeadm/kubeadm.go#L235
related or possible dupe: https://github.com/kubernetes/minikube/issues/7428
I believe I found the root cause of this issue: if --extra-config is set, it replaces minikube's extra args for the kic overlay. I tried locally just by adding:
--extra-config=kubeadm.ignore-preflight-
and both containerd and crio break.
This should be an easy fix.
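The fix could be sketched like this (a minimal illustration, not minikube's actual code): instead of letting user-supplied --extra-config replace the kic-overlay defaults wholesale, merge the two sets, letting a user option override a default only when the component/key pair collides. The specific default option shown is an assumption for illustration.

```go
package main

import "fmt"

// ExtraOption mirrors the {Component, Key, Value} triples seen in the
// cluster config dump in this thread.
type ExtraOption struct {
	Component, Key, Value string
}

// mergeExtraOptions returns the kic-overlay defaults plus the user's
// --extra-config options. A user option with the same Component/Key
// overrides that one default, instead of the whole default set being
// dropped (the bug described above).
func mergeExtraOptions(defaults, user []ExtraOption) []ExtraOption {
	merged := []ExtraOption{}
	seen := map[string]bool{}
	for _, o := range user {
		merged = append(merged, o)
		seen[o.Component+"/"+o.Key] = true
	}
	for _, o := range defaults {
		if !seen[o.Component+"/"+o.Key] {
			merged = append(merged, o)
		}
	}
	return merged
}

func main() {
	// Hypothetical kic-overlay default (illustrative value only).
	defaults := []ExtraOption{
		{"kubeadm", "pod-network-cidr", "10.244.0.0/16"},
	}
	// What the user passed via --extra-config.
	user := []ExtraOption{
		{"kubeadm", "ignore-preflight-errors", "SystemVerification"},
	}
	for _, o := range mergeExtraOptions(defaults, user) {
		fmt.Printf("%s.%s=%s\n", o.Component, o.Key, o.Value)
	}
}
```

With a merge like this, the user's preflight override and the overlay's own options would both survive.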
@irizzant do you mind sharing which runtime you use? Are you using docker, containerd, or crio?
One observation: when we specify extra options, the required kic overlay extra options get overwritten.
minikube start -p p2 --memory=2200 --alsologtostderr -v=3 --wait=true --container-runtime=containerd --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker --kubernetes-version=v1.15.7
{
"Name": "p2",
"KeepContext": false,
"EmbedCerts": false,
"MinikubeISO": "",
"Memory": 2200,
"CPUs": 2,
"DiskSize": 20000,
"Driver": "docker",
"HyperkitVpnKitSock": "",
"HyperkitVSockPorts": [],
"DockerEnv": null,
"InsecureRegistry": null,
"RegistryMirror": null,
"HostOnlyCIDR": "192.168.99.1/24",
"HypervVirtualSwitch": "",
"HypervUseExternalSwitch": false,
"HypervExternalAdapter": "",
"KVMNetwork": "default",
"KVMQemuURI": "qemu:///system",
"KVMGPU": false,
"KVMHidden": false,
"DockerOpt": null,
"DisableDriverMounts": true,
"NFSShare": [],
"NFSSharesRoot": "/nfsshares",
"UUID": "",
"NoVTXCheck": false,
"DNSProxy": false,
"HostDNSResolver": true,
"HostOnlyNicType": "virtio",
"NatNicType": "virtio",
"KubernetesConfig": {
"KubernetesVersion": "v1.15.7",
"ClusterName": "p2",
"APIServerName": "minikubeCA",
"APIServerNames": null,
"APIServerIPs": null,
"DNSDomain": "cluster.local",
"ContainerRuntime": "containerd",
"CRISocket": "",
"NetworkPlugin": "cni",
"FeatureGates": "",
"ServiceCIDR": "10.96.0.0/12",
"ImageRepository": "",
"ExtraOptions": [
{
"Component": "kubeadm",
"Key": "ignore-preflight-errors",
"Value": "SystemVerification"
}
],
"ShouldLoadCachedImages": true,
"EnableDefaultCNI": true,
"NodeIP": "",
"NodePort": 0,
"NodeName": ""
},
"Nodes": [
{
"Name": "m01",
"IP": "172.17.0.2",
"Port": 8443,
"KubernetesVersion": "v1.15.7",
"ControlPlane": true,
"Worker": true
}
],
"Addons": null,
"VerifyComponents": {
"apiserver": true,
"default_sa": true,
"system_pods": true
}
}
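For reference, the ExtraOptions entry in the config dump above is just the parsed form of the --extra-config flag. A sketch of that parsing (an illustration, not minikube's actual parser): split on the first "." for the component and the first "=" for the value.

```go
package main

import (
	"fmt"
	"strings"
)

// ExtraOption matches the {Component, Key, Value} shape in the config dump.
type ExtraOption struct {
	Component, Key, Value string
}

// parseExtraConfig turns a flag value such as
// "kubeadm.ignore-preflight-errors=SystemVerification" into an ExtraOption.
func parseExtraConfig(s string) (ExtraOption, error) {
	eq := strings.SplitN(s, "=", 2)
	if len(eq) != 2 {
		return ExtraOption{}, fmt.Errorf("missing '=' in %q", s)
	}
	dot := strings.SplitN(eq[0], ".", 2)
	if len(dot) != 2 {
		return ExtraOption{}, fmt.Errorf("missing component prefix in %q", s)
	}
	return ExtraOption{Component: dot[0], Key: dot[1], Value: eq[1]}, nil
}

func main() {
	o, err := parseExtraConfig("kubeadm.ignore-preflight-errors=SystemVerification")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Component=%s Key=%s Value=%s\n", o.Component, o.Key, o.Value)
}
```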
Hey @irizzant -- are you still seeing this issue? If so, could you let us know which container runtime (docker (default)/containerd/crio) you are using?
Hi @priyawadhwa I'm on:
minikube version: v1.10.1
commit: 63ab801ac27e5742ae442ce36dff7877dcccb278
and starting with --network-plugin=cni --enable-default-cni
no longer makes minikube crash.
By the way I'm using docker as container runtime.
Steps to reproduce the issue:

Describe the CoreDNS pods (namespace kube-system):

Full output of failed command (minikube start):

CoreDNS status:

Optional: full output of the minikube logs command: