
kubernetes-vagrant-coreos-cluster

Turnkey Kubernetes cluster setup with Vagrant 2.1.1+ and CoreOS.

If you're lazy, or in a hurry, jump to the TL;DR section.

Pre-requisites

MacOS X

On MacOS X (and assuming you have homebrew already installed) run:

brew install wget

Windows

Deploy Kubernetes

The current Vagrantfile will bootstrap one VM with everything needed to become a Kubernetes master and, by default, two VMs with everything needed to become Kubernetes worker nodes. You can change the number of worker nodes and/or the Kubernetes version by setting the environment variables NODES and KUBERNETES_VERSION, respectively. You can find more details below.

vagrant up
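For example, to bring up three worker nodes running a specific Kubernetes release (the version number below is only illustrative; substitute whichever release you need):

NODES=3 KUBERNETES_VERSION=1.10.3 vagrant up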

Linux or MacOS host

Your Kubernetes cluster is now ready. Use kubectl from your host to manage it.
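For instance, a quick sanity check from your host (assuming kubectl picked up the cluster configuration written during provisioning):

kubectl cluster-info
kubectl get nodes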

Windows host

On Windows hosts, kubectl is installed on the master node, in the /opt/bin directory. To manage your Kubernetes cluster, SSH into the master node and run kubectl from there:

vagrant ssh master
kubectl cluster-info

Clean-up

vagrant destroy

If you set NODES or any other variable when deploying, please make sure you set it in the vagrant destroy call as well, like:

NODES=3 vagrant destroy -f

Notes about hypervisors

Virtualbox

VirtualBox is the default hypervisor, and you'll probably need to disable its DHCP server:

VBoxManage dhcpserver remove --netname HostInterfaceNetworking-vboxnet0

Parallels

If you are using Parallels Desktop, you need to install the vagrant-parallels provider:

vagrant plugin install vagrant-parallels

Then just add --provider parallels to the vagrant up invocations above.
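For example:

vagrant up --provider parallels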

VMware

If you are using one of the VMware hypervisors, you must buy the matching provider and, depending on your case, add either --provider vmware_fusion or --provider vmware_workstation to the vagrant up invocations above.
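For example, with VMware Fusion:

vagrant up --provider vmware_fusion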

Private Docker Repositories

If you want to use private Docker repositories, look for DOCKERCFG below.

Customization

Environment variables

Most aspects of your cluster setup can be customized with environment variables; the ones used throughout this document include NODES, NODE_MEM, NODE_CPUS, KUBERNETES_VERSION, USE_KUBE_UI and DOCKERCFG.

So, in order to start, say, a Kubernetes cluster with 3 worker nodes, 4 GB of RAM and 4 vCPUs per node, you would run:

NODE_MEM=4096 NODE_CPUS=4 NODES=3 vagrant up

or with Kubernetes UI:

NODE_MEM=4096 NODE_CPUS=4 NODES=3 USE_KUBE_UI=true vagrant up

Please note that if you used non-default settings to start up your cluster, you must use those exact same settings when invoking vagrant {up,ssh,status,destroy} to communicate with any of the nodes in the cluster; otherwise things may not behave as you'd expect.
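For example, if you brought the cluster up with the settings shown above, use the same variables when connecting to the master or tearing the cluster down:

NODE_MEM=4096 NODE_CPUS=4 NODES=3 vagrant ssh master
NODE_MEM=4096 NODE_CPUS=4 NODES=3 vagrant destroy -f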

Synced Folders

You can automatically mount in your guest VMs, at startup, an arbitrary number of local folders from your host machine by populating the synced_folders.yaml file in your Vagrantfile directory accordingly. For each folder you wish to mount, the allowed syntax is...

# the 'id' of this mount point. needs to be unique.
- name: foobar
# the host source directory to share with the guest(s).
  source: /foo
# the path to mount ${source} above on guest(s)
  destination: /bar
# the mount type. only NFS makes sense as, presently, we are not shipping
# hypervisor specific guest tools. defaults to `true`.
  nfs: true
# additional options to pass to the mount command on the guest(s)
# if not set the Vagrant NFS defaults will be used.
  mount_options: 'nolock,vers=3,udp,noatime'
# if the mount is enabled or disabled by default. default is `true`.
  disabled: false

ATTENTION: Don't remove the /vagrant entry.

TL;DR

vagrant up

This will start one master and two worker nodes, download the Kubernetes binaries and start all needed services. A Docker mirror cache will be provisioned in the master to speed up container provisioning. This can take some time, depending on your Internet connection speed.

Please note that, at any time, you can change the number of worker nodes by setting the NODES value in subsequent vagrant up invocations.
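For example:

NODES=3 vagrant up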

Usage

Congratulations! You're now ready to use your Kubernetes cluster.

If you just want to test something simple, start with the [Kubernetes examples](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/).

For a more elaborate scenario, [kubernetes-elasticsearch-cluster](https://github.com/pires/kubernetes-elasticsearch-cluster) has all you need to get a scalable Elasticsearch cluster on top of Kubernetes in no time.
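If kubectl is available (see above), a quick smoke test could look like the following; note that kubectl run flags and behavior differ between Kubernetes versions, so treat this as a sketch rather than a recipe:

kubectl get nodes
kubectl run nginx --image=nginx --port=80
kubectl get pods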

Troubleshooting

Vagrant displays a warning message when running!

Vagrant 2.1 integrated support for triggers as a core functionality. However, this change is not compatible with the vagrant-triggers community plugin we were and still are using. Since we require this plugin, Vagrant will show the following warning:

WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.

This warning is harmless and only means that we are using the community plugin instead of the core functionality. To disable it, set the VAGRANT_USE_VAGRANT_TRIGGERS environment variable to false before running vagrant:

$ VAGRANT_USE_VAGRANT_TRIGGERS=false NODES=2 vagrant up

I'm getting errors while waiting for Kubernetes master to become ready on a MacOS host!

If you see something like this in the log:

==> master: Waiting for Kubernetes master to become ready...
error: unable to load file "temp/dns-controller.yaml": unable to connect to a server to handle "replicationcontrollers": couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: i/o timeout
error: no objects passed to create

You probably have a pre-existing Kubernetes config file on your system at ~/.kube/config. Delete or move that file and try again.

I'm getting errors while waiting for mounting to /vagrant on a CentOS 7 host!

If you see something like this in the log:

mount.nfs: Connection timed out.

It might be caused by the firewall. You can check whether the firewall is active with systemctl status firewalld and, if it is, simply stop it with systemctl stop firewalld.
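On the CentOS 7 host, as root (or via sudo):

systemctl status firewalld
systemctl stop firewalld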

Kubernetes Dashboard asks for either a Kubeconfig or token!

This behavior is expected in recent versions of the Kubernetes Dashboard, since different people may need to use it with different permissions. Since we deploy a service account with administrative privileges, you can just click Skip. Everything will work as expected.

Licensing

This work is open source, and is licensed under the Apache License, Version 2.0.