---
title: "KIND and Load Balancing with MetalLB on Mac"
date: 2020-12-04
slug: kind-and-metallb-on-mac
tags: kind,kubernetes,docker,hackery
description: KIND and Load Balancing on MacOS
cover_image: https://user-images.githubusercontent.com/17350652/160060723-86e0e113-5661-4d1b-96f5-67c71cc07ac9.png
---

Updated 12/4/2020

A couple of the finer points of this configuration have changed since it was originally written. For example, aspects of the Homebrew installation of tuntap have changed. I updated the post and did a sanity run-through - updating a few of the steps for better clarity. Since I did this, I've also bumped the date of this post to be current as well.
Introduction
Minikube and Docker Desktop generally provide an "OK" experience for testing Kubernetes-based things locally - but I really like the ability to work against multiple nodes in some cases (i.e., I was doing some experimentation with daemonsets recently). Enter KinD! KinD (Kubernetes in Docker) is a project from one of the Kubernetes SIGs (Special Interest Groups), and is a tool for deploying Kubernetes clusters inside of Docker. Super useful for doing Kubernetes things locally!
A ways back, I had discovered MetalLB as a method for getting an easy load balancer on-premises for Kubernetes. In the public cloud world - getting services into a cluster and subsequent load balancer connectivity is pretty easy. It gets a bit more nebulous (or, expensive...) on-premises.
The problem I found, however, is with how macOS handles Docker. Since macOS leverages [Hyperkit](http://collabnix.com/how-docker-for-mac-works-under-the-hood/) for virtualization, the interfaces for Docker's bridge network aren't actually routable interfaces - you're actually connecting to a socket instead. Immediately upon starting to research, I found that the Great and Powerful Duffie Cooley had done a blog on just this topic, but from the Linux point of view. In the Linux world, the docker0 bridge network is directly connected - allowing you to interact with it from a network perspective seamlessly.
Fortunately, I wasn't the only one looking at how to do this, and someone else far smarter solved it!
The Solution
Ultimately what was needed was a way to hit the docker0 bridge network. Hyperkit supports this functionality through a specific set of additional arguments used during the creation of the machine. This isn't possible out of the box since it's actually Docker that's creating the machine, and the commands are hard-coded in that way. While digging - I discovered a GitHub project that was working on this specific use case for Docker - docker-tuntap-osx.
This shim allows a bridge network to be created between the host and the guest machine. Subsequently, a gateway address is created that can then be used as a route target to reach cluster services inside the Docker networks.
There are caveats however...

- It's hacky and unsupported, and you should use kubectl proxy or port forwarding if at all possible (a quick example follows this list)
- Every time your machine restarts, you'll need to reapply the shim and restart Docker
- I experienced having to remove the static route and re-add it after periods of non-use. The route would still be there, but it would suddenly stop working.
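For most day-to-day testing, the supported path looks like a port-forward instead of any of this routing. A quick example (the namespace and service names here match the demo app used later in this post):

```
# Forward localhost:8080 to port 80 on the frontend Service - no routes or shims needed
kubectl port-forward -n custom-application svc/frontend 8080:80
```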
Let's dive in!
Getting Started
All in all, this is a pretty quick thing to pull off. In order to knock this out, we're going to do the following:

1. As mentioned in the instructions within that GitHub project, use brew to install tuntap (`brew tap homebrew/cask` followed by `brew cask install tuntap` - the commands are repeated just below this list). You may need a restart after this - but I didn't on my system.
2. Exit out of Docker for Mac.
3. Once these two things are complete, we can execute the shell script, `./sbin/docker_tap_install.sh`. It's important to NOT execute this command with sudo. If you execute it with sudo, the interface will be created under the root user, and the functionality will not work.
4. Once the tap is installed, we will bring the interface up.
5. We can assign a static route against the gateway on that interface to provide routing into our KIND environment, and ultimately MetalLB.
6. Finally - we'll install/configure MetalLB into our Kubernetes cluster.
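For reference, here are the Homebrew commands from step 1. This was the cask syntax at the time this post was updated; newer Homebrew releases use `brew install --cask tuntap` instead:

```
brew tap homebrew/cask
brew cask install tuntap
```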
As usual, you should always be wary about executing arbitrary scripts. I'd highly recommend reviewing the script to ensure you're comfortable with what it's doing.
Execute the `./sbin/docker_tap_install.sh` script:

```
./sbin/docker_tap_install.sh
Installation complete
Restarting Docker
Process restarting, ready to go
```
Once Docker finishes restarting, you can grep your interfaces looking for tap to see that the tap interface has been created.
```
ifconfig | grep "tap"
tap1: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
```
With this in place, we can run the script that will bring our TAP interface "up" in order to set up our connectivity to the Docker network. Note that there are a few things in this file that you might want or need to change. The script currently uses the default Docker network settings, but in some cases these might differ. I've provided the contents of my docker_tap_up.sh script below for comparison:
⯠cat sbin/docker_tap_up.sh
#!/bin/bash
set -o nounset
set -o errexit
# Local and host tap interfaces
localTapInterface=tap1
hostTapInterface=eth1
# Local and host gateway addresses
localGateway='10.0.75.1/30'
hostGateway='10.0.75.2'
hostNetmask='255.255.255.252'
# Startup local and host tuntap interfaces
sudo ifconfig $localTapInterface $localGateway up
docker run --rm --privileged --net=host --pid=host alpine ifconfig $hostTapInterface $hostGateway netmask $hostNetmask up
You may want to (based on your environment...) update the localGateway and/or hostGateway settings, but the defaults should work.
When satisfied, execute `./sbin/docker_tap_up.sh`, and when it completes, run an `ifconfig`. If we scroll to the last interface, it should be tap1, and you should see the network assigned:

```
tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 12:68:9b:00:c2:22
	inet 10.0.75.1 netmask 0xfffffffc broadcast 10.0.75.3
	media: autoselect
	status: active
	open (pid 11096)
```
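One quick sanity check I'd suggest here (my addition, not part of the original steps): the guest side of the shim should answer on the hostGateway address if the up script did its job.

```
# 10.0.75.2 is the hostGateway assigned inside the Docker VM by docker_tap_up.sh
ping -c 1 10.0.75.2
```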
With that portion configured, we're going to break from our networking journey for a brief moment to get our KinD cluster up and running.
Deploying our Cluster with KIND
Eric Shanks dropped a blog post a ways back around A Kind Way to Learn Kubernetes. It's a great read on the ins and outs of getting KinD up and running. Knowing that's there to read - I'm going to be pretty brief in how to get our cluster up and running.
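One thing this copy of the post lost is the contents of the config.yaml referenced in the create command below. Assuming one control-plane node and three workers (my read of the four 📦 icons in the output), a minimal KinD config would be:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
```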
If all goes well, you should see results similar to below...
⯠kind create cluster --config config.yaml
Creating cluster "kind" ...
â Ensuring node image (kindest/node:v1.18.2) đŧ
â Preparing nodes đĻ đĻ đĻ đĻ
â Writing configuration đ
â Starting control-plane đšī¸
â Installing CNI đ
â Installing StorageClass đž
â Joining worker nodes đ
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
KinD will automatically add its kubeconfig information to your existing contexts, and you should be off to the races. You can validate this, as well as the network your nodes are running on (this will be important in a bit...), by running a `kubectl get nodes -o wide`, which should give you an output similar to below:
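The original output block was lost in this copy, so the example below is illustrative - trimmed to the relevant columns, with made-up ages and addresses on the kind network:

```
❯ kubectl get nodes -o wide
NAME                 STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP
kind-control-plane   Ready    master   2m10s   v1.18.2   172.18.0.4    <none>
kind-worker          Ready    <none>   95s     v1.18.2   172.18.0.2    <none>
kind-worker2         Ready    <none>   95s     v1.18.2   172.18.0.3    <none>
kind-worker3         Ready    <none>   95s     v1.18.2   172.18.0.5    <none>
```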
As you can see, our nodes deployed onto a 172.18.x.x network. To use the gateway on the tap interface we created earlier, we'll add a static route into this network. This will allow us to (soon) route to our MetalLB load balancers. You'll want to validate the network KinD deployed the nodes onto in your environment. This can be done with the `docker network ls` and `docker network inspect` commands. KinD creates a Docker network aptly called "kind", so the command to run is `docker network inspect kind`; look for the "Subnet" entry. In my environment, at the time of writing, it's 172.18.0.0/16.
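For instance:

```
❯ docker network inspect kind | grep Subnet
                    "Subnet": "172.18.0.0/16",
```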
Using this information, we can create our static route with the command below.
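The original snippet was lost in this copy, so this is a reconstruction; on macOS the route is added like so, adjusting 172.18.0.0/16 to whatever subnet you found above:

```
# Route the kind Docker network via the tap gateway from docker_tap_up.sh
sudo route -n add -net 172.18.0.0/16 10.0.75.2

# If the route goes stale after a period of non-use (see the caveats), remove and re-add it
sudo route -n delete -net 172.18.0.0/16
```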
With this configured, we should be ready to set up MetalLB!

Configuring MetalLB

MetalLB has a great set of documentation for getting started. We'll simply execute the following command to deploy out the necessary manifests for MetalLB.
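The original command block was lost here as well; this reconstruction follows the manifest install for the MetalLB v0.9.x releases that were current at the time (the version pin is my assumption - check the MetalLB docs for the release you want):

```
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# First install only: the secret the speakers use for encrypted memberlist communication
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```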
This should ultimately create a number of resources within your cluster; you can run a get pods against the metallb-system namespace (`kubectl get pods -n metallb-system`) to see the resulting pods.

With these resources created, we'll now need to set up the actual configuration by deploying a ConfigMap. In MetalLB, we can deploy our load balancing configuration in either Layer 2 mode or BGP mode. Since we're doing this all locally, it doesn't really make sense to set up BGP peering. We'll rock us some L2.
Earlier, when we defined our static route, you'll notice I used the 172.18.0.0 network as the destination for our traffic. We're going to tell MetalLB that it can also deploy load balancers onto this network. We'll use some higher IP addresses to hopefully avoid any sort of collisions.

Create and apply the following ConfigMap:
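The original manifest was lost in this copy; this reconstruction uses MetalLB's v0.9 Layer 2 config format, and the address range is my assumption based on the 172.18.0.151 external IP that shows up later:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.0.150-172.18.0.175
```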
And our cluster should be ready!

Deploying an Application

I've got a really silly application I threw together back when I was still at VMware that has grown a bit since I joined HashiCorp. It's mostly set up to test service mesh functionality now, but it also deploys a resource on the frontend that uses a load balancer. We can use this to give things a test.
After a few moments, you should be able to run a `kubectl get svc -n custom-application`, which will list all exposed services in the namespace. If all went well, you should see a deployed load balancer!
⯠kubectl get svc -n custom-application
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 10.110.171.147 172.18.0.151 80:30041/TCP 45s
Observe our frontend service behind a load balancer. Finally, if we hit it in a browser, we should have our page return!
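A curl against the external IP works just as well as a browser (address from the service output above):

```
# Expect the demo app's HTML back from the MetalLB-assigned address
curl -i http://172.18.0.151
```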
Wrapping Up
Using KinD + MetalLB gives you a quick way to get clusters up and running with functional load balancing deployed into them. I use this setup pretty much every day. Take it for a spin and let me know what you think!
title: "KIND and Load Balancing with MetalLB on Mac" date: 2020-12-04 slug: kind-and-metallb-on-mac tags: kind,kubernetes,docker,hackery description: KIND and Load Balancing on MacOS cover_image: https://user-images.githubusercontent.com/17350652/160060723-86e0e113-5661-4d1b-96f5-67c71cc07ac9.png
Updated 12/4/2020
A couple of the finer points of this configuration have changed since it was originally written. For example aspects of the homebrew installation of tuntap have changed. I updated the post and did a sanity run through - updating a few of the steps for better clarity. Since I did this, I've also bumped the date of this post to be current as well.
Introduction
Minikube and Docker for Desktop generally provide an "OK" experience for testing Kubernetes based things locally - but I really like the ability work against multiple nodes for some cases (i.e. I was doing some experimentation with daemonsets recently). Enter KIND! KinD (or Kubernetes-in-Docker) is one of the Kubernetes SIG projects (Special Interest Groups), and represents a tool for deploying Kubernetes clusters inside of Docker. Super useful for doing Kubernetes things locally!
A ways back, I had discovered MetalLB as a method for getting an easy load balancer on-premises for Kubernetes. In the public cloud world - getting services into a cluster and subsequent load balancer connectivity is pretty easy. It gets a bit more nebulous (or, expensive...) on-premises.
The problem I found however is with how MacOS handles Docker. Since MacOS leverages [Hyperkit](http://collabnix.com/how-docker-for-mac-works-under-the-hood/ for virtualization, the interfaces for Dockers bridge network aren't actually routable interfaces - you're actually connecting to a socket instead. Immediately upon starting to research, I found that the Great and Powerful Duffie Cooley had done a blog on just this topic, but from the Linux point of view. In the Linux world, the docker0 bridge network is directly connected - allowing you to interact from a network perspective seamlessly.
Fortunately, I wasn't the only one looking at how to do this, and someone else far smarter solved it!
The Solution
Ultimately what was needed was a way to hit the docker0 bridge network. Hyperkit supports this functionality through a specific set of additional arguments used during the creation of the machine. This isn't possible out of the box since it's actually Docker that's creating the machine, and the commands are hard-coded in that way. While digging - I discovered a GitHub project that was working on this specific use case for Docker - docker-tuntap-osx.
This shim install allows a bridge network to be created between the host and guest machine. Subsequently, a gateway address is created that can then be used to route against to hit cluster services inside the docker networks.
There are caveats however...
Let's dive in!
Getting Started
All and all, this is a pretty quick thing to pull off. In order to knock this out, we're going to do the following
brew tap homebrew/cask
followed bybrew cask install tuntap
). You may need a restart after this - but I didn't on my systemAs usual, you should always be wary about executing arbitrary scripts. I'd highly recommend reviewing the script to ensure you're comfortable with what it's doing.
Execute the ./sbin/docker_tap_install.sh script
Once Docker finishes restarting, you can grep your interfaces looking for tap to see that the tap interface has been created.
With this in place, we can run the script that will bring our TAP interface "up" in order to set our connectivity to the docker network up. Note that there are a few things in this file that you might want or need to change. The script currently uses the default Docker network, but in some cases this might change. I've provided the output of my
docker_tap_up.sh
script below for comparison:You may want to (based on your environment...) update the localGateway and/or hostGateway settings, but they should work as default.
When satisfied, execute
./sbin/docker_tap_up.sh
and when it completes run anifconfig
. If we scroll to the last interface, it should be tap1, and you should see the network assignedWith that portion configured, we're going to break from our networking journey for a brief moment to get our KinD cluster up and running.
Deploying our Cluster with KIND
Eric Shanks dropped a blog post a ways back around A Kind Way to Learn Kubernetes. It's a great read on the in's and out's of getting KIND up and running and. Knowing that that's there to read - I'm going to be pretty brief in how to get our cluster up and running.
If all goes well, you should see results similar to below...
KinD will automatically add it's kubeconfig information to your existing contexts and you should be off to the races. You can validate this, as well as the network your nodes are running on (this will be important in a bit...) by running a
kubectl get nodes -o wide
, which should give you an output similar to below:As you can see, our nodes deployed onto a 172.18.x.x network. To use the gateway on the tap interface we created earlier, we'll add a static route into this network. This will allow us to (soon) route to our MetalLB load balancers. You'll want to validate the network KinD deployed the nodes onto in your environment. This can be done by using the
docker network ls
anddocker network inspect
commands to check your network. Kind creates a Docker network aptly called "kind", so the command you would run isdocker network inspect kind
, and look for the "Subnet" entry. In my environment, at the time of writing, it's172.18.0.0/16
for example.Using this information, we can create our static route with the command below,
With this configured, we should be ready to setup our cluster and MetalLB!
Configuring MetalLB
MetalLB has a great set of documentation for getting started.
We'll simply execute the following command to deploy out the necessary manifests for MetalLB
This should ultimately create a number of resources within your cluster, you can run a get pods against the metallb-system namespace (kubectl get pods -n metallb-system) to see the resulting created created pods.
With these resources created, we'll now need to setup the actual configuration by deploying a configmap. In MetalLB, we can either deploy our Load Balancing configuration in Layer 2 mode or using BGP. Since we're doing this all locally, it doesn't really make sense for us to peer into BGP. We'll rock us some L2.
Earlier when we defined out our static route, you'll notice I used the 172.18.0.0 network as the destination for our traffic. We're going to tell MetalLB that it can also deploy load balancers onto this network. We'll use some higher IP Addresses to hopefully avoid any sort of collisions.
Create and apply the following configmap:
And our cluster should be ready!
Deploying an Application
I've got a really silly application I threw together when I was at VMware still that has grown a bit since I joined HashiCorp. It's mostly setup to test Service Mesh functionality now, but it also deploys a resource on the frontend that uses a load balancer. We can use this to give things a test.
After a few moments, you should be able to run a
kubectl get svc -n custom-application
which will list all exposed services in the cluster. If all things went well, you should see the a deployed load balancer!Observe our frontend service behind a load balancer. Finally, if we hit it in a browser, we should have our page return!
Wrapping Up
Using KinD + MetalLB gives you a quick way to get clusters up and running and be able to deploy functional load balancing into your cluster. I use this functionality pretty much every day. Take it for a spin and let me know what you think!