metral / corekube

CoreOS + Kubernetes + OpenStack - The simplest way to deploy a POC Kubernetes cluster using a Heat template
Apache License 2.0

Corekube

Last Major Update: 08/18/2016

Latest Release

TL;DR

Corekube is an OpenStack Heat template that creates the necessary infrastructure and configures it to deploy and host a Kubernetes cluster.

The main pillars of Corekube's mission can be categorized into the following:

  1. Infrastructure (a.k.a. "infra") - Covers the creation, provisioning, and installation of the underlying cloud infrastructure and tools required to set up a Kubernetes cluster. (This is the Heat template itself.)
  2. Discovery - Utilizes Etcd to enable a private discovery service amongst the servers.
  3. Overlord - The deployment logic that consumes the infrastructure created, as well as the cluster information coordinated by the 'Discovery' node, to deploy Kubernetes onto the servers.

Component Versions

CoreOS - OS running on all nodes

Tool Version
CoreOS 1010.5.0 (Stable)
Docker 1.10.3
Etcd (Client) 0.4.9/2.3.1
Fleet (Client) 0.11.7
Flannel* 0.5.5

*Note: Flannel runs only on the Kubernetes nodes (minions). It is neither installed nor configured on the Discovery or Overlord nodes.

Kubernetes - Management layer for containerized applications

Tool Version
Kubernetes 1.4.0

Overlord - Deployment logic that stands up a Kubernetes cluster

Tool Version
Etcd API v2
Fleet API v1
Kubernetes API v1

Quick Links

Contributing

Please create all pull requests against the 'dev' branch. For stable versions, please use releases >= v0.3.5.

See HACKING for more information.

Original Blog Post (Sept. 10, 2014) Outdated

Full Blog Post on Rackspace Developer Blog

Corekube Deployment

heat stack-create corekube --template-file corekube-cloudservers.yaml -P keyname=<RAX_SSH_KEYNAME>
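Once the stack has been kicked off, its build can be followed with the standard Heat CLI (these are stock python-heatclient commands; "corekube" is just the stack name used above):

```shell
# Watch the overall stack status until it reaches CREATE_COMPLETE
heat stack-show corekube

# List the events and per-resource states (discovery, priv_network,
# overlord, kubernetes-master-x, kubernetes-minion-y) as they build
heat event-list corekube
heat resource-list corekube
```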

Using Kubernetes on the Corekube Deployment

How to run examples on Kubernetes


Introduction

Corekube is an adaptation of CoreOS's Running Kubernetes Example on CoreOS blog post from July 2014.

"Running Kubernetes Example on CoreOS" describes the installation of Kubernetes with VMware, some manual network configuration & the deployment of the Kubernetes master & minion stacks via cloud-config.

Corekube builds heavily on the CoreOS post, but with a couple of modifications:

Setup & Installation

Heat Template

At the helm of Corekube is an OpenStack Heat template that lays down the following infrastructure in an ordered fashion, with discovery & priv_network being built first, and then kubernetes-master-x, kubernetes-minion-y and overlord afterwards.

Discovery

The first step in Corekube's process, after server instantiation, is to create a private discovery node using etcd in a Docker container. This private instance of etcd is used for service coordination/discovery and is eventually used by the Overlord to know what machines are in the cluster and then to deploy the kubernetes stacks onto the master and minion nodes.

The discovery service is provided by the coreos/etcd Docker repo with a unique cluster UUID generated at creation via Heat. This is no different than CoreOS's https://discovery.etcd.io service as described in CoreOS's Cluster Discovery post, other than the fact that it is private to this deployment.

This discovery node's IP, combined with the cluster UUID, is used to assemble the complete discovery path needed by the etcd & fleet client services that run by default on the rest of the infrastructure, as these binaries ship with CoreOS.
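As a concrete sketch of that assembly (the IP, UUID, and port here are made-up example values; port 4001 is assumed from etcd v0.4's client default, not taken from the template):

```shell
# Hypothetical values for illustration only
DISCOVERY_IP="10.208.4.50"                                # private discovery node's address
CLUSTER_UUID="6c007a14875d53d9bf0ef5a6fc0257c817f0fb83"   # UUID generated at creation via Heat

# Assemble the discovery path that each node's etcd & fleet clients are pointed at
DISCOVERY_URL="http://${DISCOVERY_IP}:4001/v2/keys/discovery/${CLUSTER_UUID}"
echo "${DISCOVERY_URL}"
```

On a live deployment, fetching that URL with curl would list the machines that have registered themselves with the private discovery service.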

Therefore, when the rest of the cluster connects to the discovery path of our private discovery node, the Overlord will then have the information necessary to deploy the Kubernetes role stack onto the designated machine.

Networking

Once each CoreOS machine has booted & connected its etcd client to the private discovery node, a network architecture must be established for the containers. Kubernetes requires that each master/minion node have its own subnet for the containers it manages. We will therefore set up a CIDR in which all Kubernetes hosts co-exist, give each host its own subnet within that CIDR, and overlay this network onto the Rackspace Cloud Network ("priv_network") that we created earlier for isolated communication.

In order to understand the proposed networking architecture described, we must first understand at a high-level how networking works with regards to Docker:

Now that we know how containers on Docker talk to other containers on the same host, we need to figure out how to allow containers on different hosts to have the same capability; specifically, when using a Rackspace Cloud Network, as it provides the servers an additional & isolated Layer 2 network.

To allow the containers to communicate with each other via its Kubernetes host machine (which has an interface on the isolated layer 2 network after we create it), there must be some sort of networking mechanism to allow for it.

However, it's worth noting that on a Rackspace Cloud Network, MAC filtering is performed and cannot be disabled; therefore, traffic that a container originates on the docker0 Linux bridge cannot inherently reach the docker0 bridges on the rest of the cluster.

Fortunately, there is a solution that helps us in various ways: establish a multicast vxlan overlay on top of the Cloud Network.

Since VXLANs function by encapsulating MAC-based layer 2 Ethernet frames within layer 4 UDP packets, and because we can create one operating in multicast mode, we can accomplish a couple of key steps toward our proposed network architecture:
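For illustration only, this is roughly what a multicast VXLAN device looks like when created by hand with iproute2 (the interface name, VNI, and multicast group below are made-up values, not the ones the template actually programs):

```shell
# Create a VXLAN device that encapsulates L2 frames in UDP and floods
# broadcast/unknown traffic to a multicast group on the isolated network.
# "eth2" stands in for the interface on the Rackspace Cloud Network.
ip link add vxlan0 type vxlan id 1 group 239.0.0.1 dev eth2 dstport 4789
ip link set vxlan0 up

# Traffic bridged onto vxlan0 now crosses the Cloud Network as UDP packets,
# sidestepping the MAC filtering applied to the raw layer 2 segment.
```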

Luckily, there are a few network fabric projects in the Docker ecosystem that aim to solve this exact issue. The top options are Zettio's Weave and CoreOS's Flannel.

For our network architecture we chose Flannel.

Below are the steps taken to create the proposed network architecture using Flannel. They configure the networking via cloud-config & systemd units:
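Flannel reads its cluster-wide overlay configuration from etcd. A minimal sketch of that handshake follows; the CIDR and backend are made-up example values (the template's actual values may differ), while `/coreos.com/network/config` is flannel's well-known key:

```shell
# Flannel's cluster-wide config lives under this well-known etcd key
FLANNEL_KEY="/coreos.com/network/config"

# Each host carves its own subnet for its containers out of this CIDR
FLANNEL_CONFIG='{ "Network": "10.244.0.0/15", "Backend": { "Type": "vxlan" } }'
echo "${FLANNEL_CONFIG}"

# On a live cluster this would be pushed before flanneld starts, e.g.:
#   etcdctl set "${FLANNEL_KEY}" "${FLANNEL_CONFIG}"
```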

Note: If you have RackConnect v2 enabled you will require rules like the ones below. If you don't know what RackConnect is, you may safely ignore this.

Note: If you have RackConnect v3 enabled you must use the corekube-heat-rackconnectv3.yaml file and specify your RackConnect network ID by setting the parameter rackconnect-network.

Overlord

As you may have noticed in the "Cluster Discovery" figure above, there is an additional CoreOS server in addition to the Kubernetes machines: the Overlord.

The Overlord is a custom Go package that operates in a Docker container.

After it joins the private discovery service, it is tasked with the following responsibilities in a daemon-like mode:

The Overlord's tasks, which loop in a daemon-like manner, are best described in the following figure:

To view the Overlord's progress and status, log into the "overlord" server and examine the Docker container it operates: "setup_kubernetes"

$ ssh root@<overlord_ip>

Note: Building the setup_kubernetes container and running it can take several minutes, so re-run the commands below until your output resembles the following.

Review the Docker image pulled:

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
overlord            master              a3a95c2e3b1c        6 hours ago         604.9 MB
google/golang       1.4                 e0d9d5bb3d3d        5 days ago          559.6 MB

Review all Docker processes:

$ docker ps -a

CONTAINER ID        IMAGE                     COMMAND                CREATED
14678dc12d55        overlord:latest           /gopath/bin/overlord   6 hours ago

Review the logs of the overlord's container:

$ docker logs 14678dc12d55

2015/04/29 22:35:25 ------------------------------------------------------------
2015/04/29 22:35:25 Current # of machines seen/deployed to: (0)
2015/04/29 22:35:25 ------------------------------------------------------------
2015/04/29 22:35:25 Current # of machines discovered: (4)
2015/04/29 22:35:25 ------------------------------------------------------------
2015/04/29 22:35:25 Found machine:
2015/04/29 22:35:25 -- ID: 15af742f87f94806979e82a474b41e91
2015/04/29 22:35:25 -- IP: 10.208.4.90
2015/04/29 22:35:25 -- Metadata: (kubernetes_role => master)
2015/04/29 22:35:25 Created all unit files for: 15af742f87f94806979e82a474b41e91
2015/04/29 22:35:25 Starting unit file: master-download-kubernetes@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:27 -- Waiting for the following unit file to complete: master-download-kubernetes@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:28 -- Waiting for the following unit file to complete: master-download-kubernetes@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:29 -- Waiting for the following unit file to complete: master-download-kubernetes@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:42 The following unit file has completed: master-download-kubernetes@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:42 Starting unit file: master-apiserver@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:44 -- Waiting for the following unit file to complete: master-apiserver@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:45 The following unit file has completed: master-apiserver@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:45 Starting unit file: master-controller-manager@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:47 -- Waiting for the following unit file to complete: master-controller-manager@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:48 The following unit file has completed: master-controller-manager@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:48 Starting unit file: master-scheduler@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:50 -- Waiting for the following unit file to complete: master-scheduler@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:51 The following unit file has completed: master-scheduler@15af742f87f94806979e82a474b41e91.service
2015/04/29 22:35:53 ------------------------------------------------------------
2015/04/29 22:35:53 Current # of machines seen/deployed to: (1)
2015/04/29 22:35:53 ------------------------------------------------------------
2015/04/29 22:35:53 Current # of machines discovered: (4)
2015/04/29 22:35:53 ------------------------------------------------------------
2015/04/29 22:35:53 Found machine:
2015/04/29 22:35:53 -- ID: 982129fef26b4790ba64b405b2602c14
2015/04/29 22:35:53 -- IP: 10.208.4.104
2015/04/29 22:35:53 -- Metadata: (kubernetes_role => minion)
2015/04/29 22:35:53 Created all unit files for: 982129fef26b4790ba64b405b2602c14
2015/04/29 22:35:53 Starting unit file: minion-download-kubernetes@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:35:55 -- Waiting for the following unit file to complete: minion-download-kubernetes@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:00 -- Waiting for the following unit file to complete: minion-download-kubernetes@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:01 The following unit file has completed: minion-download-kubernetes@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:01 Starting unit file: minion-kubelet@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:03 -- Waiting for the following unit file to complete: minion-kubelet@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:04 The following unit file has completed: minion-kubelet@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:04 Starting unit file: minion-proxy@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:06 -- Waiting for the following unit file to complete: minion-proxy@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:07 The following unit file has completed: minion-proxy@982129fef26b4790ba64b405b2602c14.service
2015/04/29 22:36:07 Registered node with the Kubernetes master: 10.208.4.104
2015/04/29 22:36:08 ------------------------------------------------------------
2015/04/29 22:36:08 Found machine:
2015/04/29 22:36:08 -- ID: 77e4d44ab2204fb0892aa7beccdff88f
2015/04/29 22:36:08 -- IP: 10.208.4.92
2015/04/29 22:36:08 -- Metadata: (kubernetes_role => minion)
2015/04/29 22:36:08 Created all unit files for: 77e4d44ab2204fb0892aa7beccdff88f
2015/04/29 22:36:08 Starting unit file: minion-download-kubernetes@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:10 -- Waiting for the following unit file to complete: minion-download-kubernetes@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:11 -- Waiting for the following unit file to complete: minion-download-kubernetes@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:12 The following unit file has completed: minion-download-kubernetes@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:12 Starting unit file: minion-kubelet@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:14 -- Waiting for the following unit file to complete: minion-kubelet@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:15 The following unit file has completed: minion-kubelet@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:15 Starting unit file: minion-proxy@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:17 -- Waiting for the following unit file to complete: minion-proxy@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:18 The following unit file has completed: minion-proxy@77e4d44ab2204fb0892aa7beccdff88f.service
2015/04/29 22:36:18 Registered node with the Kubernetes master: 10.208.4.92
2015/04/29 22:36:19 ------------------------------------------------------------
2015/04/29 22:36:19 Found machine:
2015/04/29 22:36:19 -- ID: 29b66575a8da412c8236af2716e55382
2015/04/29 22:36:19 -- IP: 10.208.4.116
2015/04/29 22:36:19 -- Metadata: (kubernetes_role => minion)
2015/04/29 22:36:19 Created all unit files for: 29b66575a8da412c8236af2716e55382
2015/04/29 22:36:19 Starting unit file: minion-download-kubernetes@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:21 -- Waiting for the following unit file to complete: minion-download-kubernetes@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:22 -- Waiting for the following unit file to complete: minion-download-kubernetes@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:23 The following unit file has completed: minion-download-kubernetes@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:23 Starting unit file: minion-kubelet@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:25 -- Waiting for the following unit file to complete: minion-kubelet@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:26 -- Waiting for the following unit file to complete: minion-kubelet@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:27 The following unit file has completed: minion-kubelet@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:27 Starting unit file: minion-proxy@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:29 -- Waiting for the following unit file to complete: minion-proxy@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:30 -- Waiting for the following unit file to complete: minion-proxy@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:31 The following unit file has completed: minion-proxy@29b66575a8da412c8236af2716e55382.service
2015/04/29 22:36:31 Registered node with the Kubernetes master: 10.208.4.116
2015/04/29 22:36:33 ------------------------------------------------------------
2015/04/29 22:36:33 Current # of machines seen/deployed to: (4)
2015/04/29 22:36:33 ------------------------------------------------------------
2015/04/29 22:36:33 Current # of machines discovered: (4)
2015/04/29 22:36:34 ------------------------------------------------------------
2015/04/29 22:36:34 Current # of machines seen/deployed to: (4)
2015/04/29 22:36:34 ------------------------------------------------------------

Kubernetes Usage

Once the Heat template finishes instantiating, the resources are booted & initiated, and we've verified that the Overlord's setup_kubernetes container ran & exited successfully, we can begin using the examples available that showcase Kubernetes' capabilities.
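As a quick smoke test (assuming kubectl is installed locally and the apiserver listens insecurely on the master's port 8080, the common default for Kubernetes of this era):

```shell
# Point kubectl at the master's (insecure) API endpoint and list the nodes;
# the minions registered by the Overlord should report a Ready status
kubectl -s http://<kubernetes_master_ip>:8080 get nodes

# List everything currently running on the cluster
kubectl -s http://<kubernetes_master_ip>:8080 get pods --all-namespaces
```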

Follow this set of steps to get you started:

Note:

SkyDNS

SkyDNS has been integrated into Corekube and is automatically available & accessible within the Kubernetes cluster. To see how it was configured, view the systemd unit file used to deploy it; for information on how to use it, please check out the Kubernetes docs.
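As a usage sketch (the service name is illustrative, and "cluster.local" is assumed to be the cluster domain, which Corekube's actual configuration may override), DNS resolution can be verified from a shell inside any running pod:

```shell
# Resolve a service by its short name via SkyDNS
nslookup my-service

# Or query the fully-qualified record published for it
nslookup my-service.default.svc.cluster.local
```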