kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Running minikube in production is [not] stupid? #10097

Closed frankgerhardt closed 3 years ago

frankgerhardt commented 3 years ago

We have a small application to run in production and have to decide between OKD (OpenShift) and minikube. From local development we really like minikube, but we know it is not recommended for production.

We especially like the simplicity of minikube, and we think we can understand, handle, and master it.

We would use a single master and multiple workers.

But if we were "crazy" enough to use minikube for production, what downsides would we have to expect? Are there any known issues beyond the expected one, namely that it is just minikube with fewer features?

afbjorklund commented 3 years ago

This is something that needs better documentation...

It is supposed to be easy to "transition" from the initial experience with minikube, over to the container drivers, and on to kubeadm, so that you can apply all the Kubernetes things you picked up while learning when you later move into production.

Normally one doesn't recommend single-node for deployment, but having two nodes probably "works" for a simple service setup. In the case of an outage, the second node could be used for redeployment (after some downtime). Otherwise you would probably need 4+ nodes.

I want to support using minikube for setting up nodes.

When using the so-called "generic" driver, you just give it SSH access to the machine where you want it to install. So it also depends on where you plan to deploy your cluster, and what type of nodes you are planning to use for it.
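
For illustration, and assuming the driver takes the SSH connection details as flags (the flag names and values here are only indicative, since the feature was not merged at the time), an invocation could look roughly like this:

    minikube start --driver=generic \
        --ssh-ip-address=203.0.113.10 \
        --ssh-user=ubuntu \
        --ssh-key=~/.ssh/id_rsa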

The basic steps would be the same:

1) Create a machine
2) Provision a container runtime
3) Bootstrap the Kubernetes node

This is if you handle the servers yourself; obviously it would be a bit different if using a managed Kubernetes service.
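
As a rough sketch of those three steps on a plain Ubuntu host with kubeadm (an assumed setup; installing the kubeadm/kubelet packages from the Kubernetes apt repository is omitted here):

    # 1) Create a machine (a cloud VM or bare-metal server) and SSH into it
    # 2) Provision a container runtime
    sudo apt-get update && sudo apt-get install -y containerd
    # 3) Bootstrap the Kubernetes control-plane node
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # ...then join any worker nodes with the "kubeadm join" command it prints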


Comparing Minishift and minikube is not so easy; things changed a lot with OpenShift 4 and CoreOS / the Machine operator.

With OKD 4, you will deploy a local "cloud" for hosting and then the nodes will be provisioned on that - by using Kubernetes itself.

EDIT: It actually seems it supports both ways of installing: "User Provisioned Infrastructure" (the regular / traditional way) and "Installer Provisioned Infrastructure" (the cloud / operator way). All control plane nodes must run the new CoreOS, either way.

It also seems that multi-master (HA) is required, and that it needs an additional provisioning node to bootstrap the cluster from.

But if you are interested in OpenShift, it would be better to ask CRC than to ask minikube, which is more about Kubernetes.

afbjorklund commented 3 years ago

I was comparing these two documents:

These environments are for development:

frankgerhardt commented 3 years ago

Thanks for your response. We would handle the servers ourselves, either dedicated or cloud servers. I don't want to look into OKD too much if a simple setup can be done with minikube. I'm mostly interested in issues I could encounter with minikube.

It is clear that with one master there cannot be zero-downtime HA, but that's OK. We'd put up a maintenance page, hopefully just for a short time, and either restore the master from a backup or make a fresh install and configure it the same way as before (GitOps).

afbjorklund commented 3 years ago

Well, the default minikube would deploy a Kubernetes cluster on localhost. That is probably of limited value to you.

So you want "something" that can be deployed remotely, usually by adding packages and configuration over SSH... There are more details here: https://github.com/kubernetes/minikube/issues/4733. It is not in minikube yet, but I hope that the feature will be accepted any year now.

The main limitations would be in terms of network and storage; the multi-node support in minikube is still rather basic.
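
For reference, a local multi-node cluster can be started with the --nodes flag (the node count and profile name here are just examples):

    minikube start --nodes 3 -p multinode-test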

afbjorklund commented 3 years ago

Seems like the resource requirements are a bit different as well.

OpenShift

The smallest OKD clusters require the following hosts:

  • One temporary bootstrap machine
  • Three control plane, or master, machines
  • At least two compute machines, which are also known as worker machines
Machine         Operating System    vCPU   Virtual RAM   Storage
Bootstrap       FCOS                4      16 GB         120 GB
Control plane   FCOS                4      16 GB         120 GB
Compute         FCOS or RHEL 7.6    2      8 GB          120 GB

Kubernetes

To follow this guide, you need:

  • One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
  • 2 GiB or more of RAM per machine--any less leaves little room for your apps.
  • At least 2 CPUs on the machine that you use as a control-plane node.

CentOS is deprecated, so I would probably go with something vanilla like Ubuntu LTS.

I would recommend 4 GB RAM and 4 CPUs, though. Even a Raspberry Pi has that now.
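
In minikube terms, that sizing can be requested explicitly at start time (the values are just the ones recommended above):

    minikube start --cpus=4 --memory=4096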

In minikube we currently allocate 20 GB of storage by default, and use around 2 GB of it:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  1.4G   15G   9% /mnt/sda1

tstromberg commented 3 years ago

Production usage of minikube is explicitly mentioned as a non-goal for minikube: https://minikube.sigs.k8s.io/docs/contrib/principles/#non-goals

While nothing will explicitly break if you attempt to use minikube in production, minikube does not attempt to support the full suite of security mechanisms and networking configurations you may wish to use in production. YMMV.

IMHO, since minikube is just a wrapper over kubeadm, you would be better off using kubeadm directly.

medyagh commented 3 years ago

@frankgerhardt I agree with @tstromberg. I strongly refuse to support the idea that we are made for production; however, we don't intentionally make things unsuitable for production. We have some safety measures (such as generating certs and IPs per cluster), but that's not what we test minikube for.

Security-wise, we don't have the bandwidth to support a secure environment for production.

if you have more questions, please feel free to re-open this.

afbjorklund commented 3 years ago

IMHO, since minikube is just a wrapper over kubeadm, you would be better off using kubeadm directly.

This is why I labeled it as "documentation": we should redirect users there when they are looking to "move on".


It is supposed to be easy to "transition" from the initial experience with minikube, over to the container drivers, and on to kubeadm, so that you can apply all the Kubernetes things you picked up while learning when you later move into production.


Like so: minikube -> kind -> kubeadm

All part of Kubernetes, all SIGs?
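
As a rough sketch of that progression (cluster names and flags are illustrative):

    # learn and develop locally
    minikube start
    # try multi-node setups in containers
    kind create cluster --name test
    # bootstrap real production nodes
    sudo kubeadm init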