kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

docker driver: minikube will not come up "running" after docker desktop quit/start #9376

Open · medyagh opened this issue 3 years ago

medyagh commented 3 years ago

I just found an interesting behaviour in minikube: after we start a cluster with the docker driver, if we quit Docker Desktop and then start Docker Desktop again, the minikube docker container stays stopped.

medya@~/workspace/minikube (docker_retry) $ minikube-v1.13.2 status
...

host: Nonexistent
...

That can be fixed by adding this argument to our kic driver's docker run:

--restart=on-failure:1

However, that makes the container Running, but our kubelet and apiserver stay Stopped... we might need to enable the kubelet or apiserver to run on start.

medya@~/workspace/minikube (docker_retry) $ echo "STARTING DOCKER DESKTOP"
STARTING DOCKER DESKTOP
medya@~/workspace/minikube (docker_retry) $ make && ./out/minikube status
make: `out/minikube' is up to date.
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
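
For reference, the same restart policy can also be applied to an already-created container with docker update, instead of changing the kic driver's docker run (assuming the default container name, minikube):

$ docker update --restart=on-failure:1 minikube
$ docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' minikube
on-failure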
medyagh commented 3 years ago

I can confirm that if we start the kubelet service, the apiserver no longer stays dead after a Docker Desktop quit/start (combined with adding the --restart=on-failure:1 flag to our docker run).

BUT in our code base we have this line that warns us not to enable the kubelet:

"please don't enable kubelet as it creates a race condition; if it starts on systemd boot it will pick up /etc/hosts before we have time to configure /etc/hosts"

https://github.com/medyagh/minikube/blob/cef5f180817acb4142a0ff13553e19db0a81b210/pkg/minikube/sysinit/systemd.go#L58-L59

We need to find a solution so that after the docker container exits (because Docker Desktop quit) and then comes back, we are able to start the kubelet.

What would be the best solution?
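
One direction, purely as a sketch (nothing like this exists in minikube today), would be to express the ordering in systemd itself: only let the kubelet start after /etc/hosts has been written, for example by ordering it behind a hypothetical oneshot unit that does the hosts configuration. The unit name and drop-in path below are made up for illustration:

# /etc/systemd/system/kubelet.service.d/10-wait-for-hosts.conf (hypothetical)
[Unit]
# minikube-hosts.service would be a hypothetical Type=oneshot service that
# writes /etc/hosts; with Requires=/After= the kubelet only starts once it
# has finished, avoiding the race described above.
Requires=minikube-hosts.service
After=minikube-hosts.service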

medyagh commented 3 years ago

One possible solution would be: each time the user runs minikube status, if we find that the kubelet is not running, we try to start it... (not an elegant solution)
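
Roughly the manual equivalent of that, once the container is back up, is to start the kubelet inside the node; since the apiserver runs as a static pod managed by the kubelet, it should come back with it:

$ minikube ssh -- sudo systemctl is-active kubelet
inactive
$ minikube ssh -- sudo systemctl start kubelet

After that, minikube status should report the kubelet (and shortly after, the apiserver) as Running again.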

afbjorklund commented 3 years ago

This is the same behaviour as with the VM. If you stop and start your VM, Kubernetes will be "dead" until you run minikube start. So more of a missing feature?

afbjorklund commented 3 years ago
$ ./out/minikube start
😄  minikube v1.13.1 on Ubuntu 20.04
✨  Using the virtualbox driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing virtualbox VM for "minikube" ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ ./out/minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

$ VBoxManage controlvm minikube poweroff
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
$ VBoxManage startvm --type headless minikube
Waiting for VM "minikube" to power on...
VM "minikube" has been successfully started.
$ ./out/minikube status
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured

So it has "always" been this way, like a shortcoming in the minikube design?

We don't install the files needed for the kubelet to start automatically after boot.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd
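
For completeness, "enabling" the kubelet would just be the usual systemd enable inside the node, which is exactly what the warning quoted earlier advises against (sketch only):

$ minikube ssh -- sudo systemctl enable kubelet
$ minikube ssh -- sudo systemctl is-enabled kubelet
enabled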

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

medyagh commented 3 years ago
> We don't install the files needed for the kubelet to start automatically after boot.
>
> https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd

Should we add that there?

I know that @tstromberg had removed the kubelet automatically coming up by default because of the many problems we had with it...

EraYaN commented 3 years ago

This also breaks, for example, when docker is updated on Linux. And to add insult to injury, when I then try minikube start it reports:

Unable to restart cluster, will reset it: getting k8s client: tls: failed to find any PEM data in certificate input

Which is a bit of a pain.