kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Add multi-node support #94

Closed vishh closed 4 years ago

vishh commented 8 years ago

To provide a complete Kubernetes experience, users might want to play with and experience features like scheduling, DaemonSets, etc. If minikube can emulate multiple Kubernetes nodes, users can then use most Kubernetes features.

This is probably not necessary in the first few versions of minikube. Once the single node setup is stable, we can look at emulating multiple nodes.

Upvotes from users on this issue can be used as a signal to start working on this feature.

ernestoalejo commented 8 years ago

Another use case: this would allow reproducing and experimenting locally with failover scenarios (health-check configuration, rescheduling onto new machines, etc.) for high-availability apps when a node fails or machine resources are exhausted. For example, a VM can be stopped manually to test this.
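
For anyone who wants to run such a drill once multiple nodes exist, here is a minimal sketch with plain kubectl (the deployment name and node name are made up, and --replicas on create assumes a recent kubectl):

$ kubectl create deployment web --image=nginx --replicas=3
$ kubectl drain node-2 --ignore-daemonsets   # simulate losing node-2
$ kubectl get pods -o wide --watch           # watch pods reschedule onto surviving nodes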

pgray commented 7 years ago

This would be awesome as a feature in minikube, but for anyone looking for something passable in the meantime, this might help.

Is the scope of a feature like this large because minikube supports 5 virtualization types? I see the priority of P3 but I'm not sure if that means it's already being worked on or that there's enough work to do on other stuff that it's not worth trying to do yet.

marun commented 7 years ago

I don't think it's large. It could be as simple as running docker-in-docker "nodes" as pods on the master node.
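
As a rough illustration of the docker-in-docker idea (the container name is arbitrary and this is not minikube code), each extra "node" would be a privileged container running its own Docker daemon:

$ docker run --privileged --name kube-node-1 -d docker:dind
$ docker exec kube-node-1 docker ps   # talks to the inner daemon, empty so far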

marun commented 7 years ago

It would be nice if minikube could be updated to use kubeadm to make adding new nodes easier. Any plans for that?
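
For context, the kubeadm flow being referred to is roughly the following (addresses and tokens are placeholders):

# On the machine that will become the master:
$ sudo kubeadm init
# kubeadm init prints a join command; run it on each additional node:
$ sudo kubeadm join --token <token> <master-ip>:6443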

aaron-prindle commented 7 years ago

This might be worth looking into for this issue: https://github.com/marun/nkube

fabiand commented 7 years ago

/me also thought about the kubeadm style of adding additional nodes

fabiand commented 7 years ago

Using kubeadm would also help to align with other K8s setups, which eases debugging.

jellonek commented 7 years ago

https://github.com/Mirantis/kubeadm-dind-cluster solves this case. It also solves other multi-node-setup cases needed during development, listed in https://github.com/ivan4th/kubeadm/blob/27edb59ba62124b6c2a7de3c75d866068d3ea9ca/docs/proposals/local-cluster.md. It also does not require any VM during the process.

There is also a demo of virtlet based on it, which shows how, in a few simple steps, you can start a multi-node setup, patch one node with an injected image for a CRI-runtime DaemonSet, and then start an example pod on it. You can read all of this in https://github.com/Mirantis/virtlet/blob/master/deploy/demo.sh

MichielDeMey commented 7 years ago

@pgray I've used that setup for a long time but it looks like they won't support K8s 1.6+ 😞 https://github.com/coreos/coreos-kubernetes/issues/881

nukepuppy commented 7 years ago

Definitely think it should be a target to use minikube for that, e.g. minikube start --nodes=3. I haven't looked at the backend, but it would fill a tremendous gap right now for developing from desktop to production in the same fashion, which will pay for itself in adoption faster than other things.

ccampo commented 7 years ago

I am using a Mac and I can already bring up a second minikube with "minikube start --profile=second" (using VirtualBox). So all I am missing is a way to connect the two so that the default minikube can also deploy to the second (virtual) node.
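
For reference, profiles create fully separate clusters, each with its own VM and kubeconfig context; on current minikube this looks roughly like:

$ minikube start --profile=second    # second, independent cluster
$ minikube profile list              # list all local clusters
$ kubectl config use-context second  # point kubectl at the second cluster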

MichielDeMey commented 7 years ago

@ccampo I believe that spins up a second cluster, not a second node?

ccampo commented 7 years ago

So the difference is basically that both minikube instances have their own master (API server etc.). So if the second minikube could use the master of the first minikube, that would get me closer to my goal, right?

MichielDeMey commented 7 years ago

Yes, basically. You can, however, use kubefed (https://kubernetes.io/docs/concepts/cluster-administration/federation/) to manage multiple clusters since k8s 1.6.
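
For completeness, the federation v1 flow (since deprecated) looked roughly like this; cluster and context names are placeholders, and real invocations also needed DNS flags:

# Deploy the federation control plane into a host cluster:
$ kubefed init myfed --host-cluster-context=cluster-1
# Join an existing cluster to the federation:
$ kubefed join cluster-2 --host-cluster-context=cluster-1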

ccampo commented 7 years ago

OK, I will look at federation, thanks. Is there an easy way that you know of to make the second cluster or node use the API of the first cluster?

fabiand commented 7 years ago

kubefed manages independent clusters, right? But isn't the goal here to create a single cluster with multiple VMs?

MichielDeMey commented 7 years ago

@fabiand Correct, but it seems I've derailed it a bit, apologies. :) @ccampo I'm not very familiar with the internals of Kubernetes (or Minikube) but I know for a fact that it's possible to have multiple master nodes in a cluster setup.

You might want to look at https://github.com/kelseyhightower/kubernetes-the-hard-way if you're interested in the internals and want to get something working.

PerArneng commented 6 years ago

It would be very nice to be able to play with scalability across nodes in an easy way.

pbitty commented 6 years ago

I made a multi-node prototype in #2539, if anyone is interested in seeing one way it could be implemented, using individual VMs for each node.

pbitty commented 6 years ago

Demo here: asciicast

YiannisGkoufas commented 6 years ago

Hi there @pbitty, great job! I built it and started the master, but when adding one worker it fails with:

~/go/src/k8s.io/minikube$ out/minikube node start
Starting nodes...
Starting node: node-1
Moving assets into node...
Setting up certs...
Joining node to cluster...
E0510 13:03:34.368403    3605 start.go:63] Error bootstrapping node:  Error joining node to cluster: kubeadm init error running command: sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443: Process exited with status 2

Any idea how I can debug it? Thanks!

pbitty commented 6 years ago

Hi @YiannisGkoufas, you can ssh into the node with

out/minikube node ssh node-1

and then try to run the same command from the shell:

sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443

(It would be great if the log message contained the command output. I can't remember why it doesn't. I think it would have required some refactoring and the PoC was a bit of a hack with minimal refactoring done.)

YiannisGkoufas commented 6 years ago

Thanks! Didn't realize you could ssh into the node that way. So I tried:

sudo /usr/bin/kubeadm join --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443

I got:

[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
    [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Then added the --ignore-preflight-errors parameter and executed:

sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443

I got:

[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING FileExisting-crictl]: crictl not found in system path
discovery: Invalid value: "": using token-based discovery without DiscoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue

Then I added the suggested flag and executed:

sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443 --discovery-token-unsafe-skip-ca-verification

I got:

[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.99.100:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:8443"
[discovery] Failed to request cluster info, will try again: [Unauthorized]
[discovery] Failed to request cluster info, will try again: [Unauthorized]
...

Can't figure out what to try next. Thanks again!
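
For anyone hitting the same Unauthorized loop: token-based discovery without the CA hash is both unsafe and easy to get wrong. On newer kubeadm the simplest fix is to generate a complete join command on the master (values below are placeholders):

# On the worker, clear the swap preflight error properly:
$ sudo swapoff -a
# On the master (e.g. via minikube ssh), print a complete join command:
$ sudo kubeadm token create --print-join-command
# Prints something like:
#   kubeadm join 192.168.99.100:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Run the printed command on the worker instead of skipping CA verification.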

gauthamsunjay commented 6 years ago

@YiannisGkoufas out/minikube start --kubernetes-version v1.8.0 --bootstrapper kubeadm worked for me. I think I was facing the same issue as you: by default the bootstrapper used is localkube, so kubeadm init was never run on the master and we were not able to add worker nodes. Hope this helps! Thanks @pbitty

natiki commented 6 years ago

In case anyone is still looking for a solution: https://stackoverflow.com/a/51706547/223742

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/minikube/issues/94#issuecomment-453743687):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

kamilgregorczyk commented 5 years ago

Stale? 363 people have upvoted this idea...

afbjorklund commented 5 years ago

@kamilgregorczyk: The implementation (#2539) is rather stale at this point, but the idea isn't (completely).

It is planned to return on the roadmap for 2019, see: #4 support all k8s features

kubeadm does most of the work (join) for us...

However, it will not make it for the 1.0 release

It will also require some extra resources* to run.

tuxillo commented 5 years ago

I wonder if a bounty would help :)

ccampo commented 5 years ago

I think it would be easier to know the right path (bounty or not) if we decided what the solution is. While one of my previous comments aimed at having a minikube with multiple nodes, I am no longer sure that this is the optimal solution. I could also view it as: minikube is a good solution in its current scope, and we are looking for something else, a "multikube" with the objective of running Kubernetes on multiple nodes on non-Linux operating systems, i.e. what you do on Linux with kubeadm, but for the Mac and Windows platforms. Maybe it's possible to reuse part of minikube for that, or maybe not.

dankohn commented 5 years ago

Note that https://github.com/kubernetes-sigs/kind targets many of the uses described here.
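
For reference, a multi-node kind cluster is a one-liner plus a small config (this mirrors the example in the kind docs; the apiVersion shown is for recent kind releases):

$ cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF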

tstromberg commented 5 years ago

FYI - this feature is part of the minikube 2019 roadmap: https://github.com/kubernetes/minikube/blob/master/docs/contributors/roadmap.md

We really want to do this. It's going to be a substantial bit of work to sort out, but if anyone wants to start, I would be very happy to help lead them in the right direction. The prototype in #2539 is definitely worth taking a look at.

Help wanted!

afbjorklund commented 5 years ago

This feature (multi-node) is going to have two different implementations. One is the straightforward approach of just starting more than one virtual machine and running the bootstrapper on each one. The other approach, described in #4389, is where we don't start any virtual machine but just run all the pods and containers locally. The containers are separated by labels or similar, for the "nodes".
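
As a sketch of what the two approaches look like on the command line (using flags that landed in later minikube releases, so treat the exact spelling as an assumption):

# One VM per node:
$ minikube start --driver=virtualbox --nodes=2
# Nodes as containers on the local Docker daemon:
$ minikube start --driver=docker --nodes=2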

Both have their use cases. Current users of minikube are quite used to being able to ssh into the node, to use various kernel modules on the node, etc. But when you are using minikube on Linux (either on the laptop, or by starting a virtual machine just for development or for hosting your container runtime), having additional virtual machines running adds runtime overhead and resource requirements.

Minikube is eventually going to support all four scenarios (VM or no VM, single-node or multi-node):

* Some people are trying to do this today by using the none driver to run the bootstrapper on localhost. Due to the total lack of isolation, this is not going to work for multi-node (and is not recommended for single-node either, unless you give it a dedicated virtual machine to run "locally" on, like in a CI environment or similar). At least on the Kubernetes level, it needs to give the appearance of actually having multiple nodes.


This describes the scenarios of minikube, which is all about providing a local development experience. 💻 However, if you want to use Kubernetes, you have several other deployment options available as well... As long as everything uses the standard distribution (kubelet) and the standard bootstrapper (kubeadm), it should provide a seamless transition and a similar experience. But that is not supported or described here.

Instead see:

ghostsquad commented 4 years ago

2019 is almost over. Any movement on this?

afbjorklund commented 4 years ago

@ghostsquad : I was meaning to resume work on at least resurrecting the old functionality in #2539, but got side-tracked with some other development, such as other runtimes and other architectures.

However, it is still planned. Running multiple VMs, and running minikube in Docker, are next up. Hopefully we should have an updated prototype ready in a couple of weeks, as in "November" ('19).

chinafzy commented 4 years ago

watching this subject.

afbjorklund commented 4 years ago

You can see an early demo of the feature in the KubeCon NA 2019 talk, so work on it has been resumed (although not by me) and it should be out soon.

kamilgregorczyk commented 4 years ago

It's also very easy to do it with multipass and k3s https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c?source=userActivityShare-86e09b1d4ec0-1575217521
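
A rough sketch of that multipass + k3s recipe (VM names are arbitrary; the installer URL and token path are from the k3s docs):

$ multipass launch --name k3s-master
$ multipass exec k3s-master -- bash -c "curl -sfL https://get.k3s.io | sh -"
# Collect the master's IP and join token, then add a worker:
$ TOKEN=$(multipass exec k3s-master -- sudo cat /var/lib/rancher/k3s/server/node-token)
$ IP=$(multipass info k3s-master | grep IPv4 | awk '{print $2}')
$ multipass launch --name k3s-worker
$ multipass exec k3s-worker -- bash -c "curl -sfL https://get.k3s.io | K3S_URL=https://$IP:6443 K3S_TOKEN=$TOKEN sh -"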

andersthorsen commented 4 years ago

> It's also very easy to do it with multipass and k3s https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c?source=userActivityShare-86e09b1d4ec0-1575217521

A drawback with multipass is that they do not support Windows Server as the OS...

tstromberg commented 4 years ago

@sharifelgamal is busy hacking away on this feature as we speak. Should land next month.

ghostsquad commented 4 years ago

@andersthorsen Host or Guest OS?

andersthorsen commented 4 years ago

@ghostsquad as the host OS. They support Windows 10 as a host OS, though.

MartinKaburu commented 4 years ago

Is this still being developed? I've been waiting and following for ages

sharifelgamal commented 4 years ago

@MartinKaburu yes, I'm actively working on this.

MartinKaburu commented 4 years ago

@sharifelgamal do you need a hand on this?

sharifelgamal commented 4 years ago

Experimental multi-node support will be available in the upcoming 1.9 release and will be available in the next 1.9 beta as well.

yusufharip commented 4 years ago

Hey @sharifelgamal i'm running minikube v1.9.0 on MacOS Catalina and get this error

$ minikube node add
🤷  This control plane is not running! (state=Stopped)
❗  This is unusual - you may want to investigate using "minikube logs"
👉  To fix this, run: minikube start

I first installed minikube with this command: $ minikube start --driver=docker
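
A plausible recovery path for that state, using standard minikube commands (the --nodes flag ships with the multi-node support and may require a release newer than 1.9.0):

$ minikube status            # confirm the control plane is Stopped
$ minikube start             # restart it, then retry "minikube node add"
# Or recreate the cluster with two nodes from the start:
$ minikube delete
$ minikube start --driver=docker --nodes=2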