In this tutorial, I'll walk through how you can expose a Service of type LoadBalancer in Kubernetes, and then get a public, routeable IP for any service on your local or dev cluster through the new inlets-operator.

The inlets-operator is a Kubernetes controller that automates a network tunnelling tool I released at the beginning of the year named inlets. Inlets can create a tunnel from a computer behind NAT, a firewall, or a private network to one on another network such as the internet. Think of it like "Ngrok, but Open Source and without limits".

Conceptual diagram of inlets, for the use-case of enabling webhooks from GitHub to a local service

For comparisons to other tools such as Ngrok and MetalLB, and for more about the use-cases for incoming network connectivity, feel free to check out the GitHub repo, inlets-operator, and leave a ⭐️.

Update: Feb 2020

Since this tutorial was published the inlets project has gained over 500 Twitter followers, 5.5k GitHub stars, a SWAG Store and its own documentation site!

Along with that, a new PRO version of inlets (inlets-pro) has been released, which adds support for:

  • TCP (instead of just HTTP, or manually configured HTTPS)
  • Automatic encryption via TLS and an HTTPS websocket
  • inlets-operator support

With inlets-pro you can now get an encrypted tunnel and even issue TLS certificates via LetsEncrypt for your favourite IngressController. Check out the new tutorial: Expose Your IngressController and get TLS from LetsEncrypt

Tutorial

First we'll create a local cluster using K3d or KinD, then create a Deployment for Nginx, expose it as a LoadBalancer, and then access it from the Internet.

Pre-reqs

  • DigitalOcean.com or Packet.com account in which the operator will create hosts with public IPs

  • kubectl access to a local cluster created with KinD, Minikube, Docker Desktop, k3d, or whatever your preference is.

Option A - Install your local cluster with k3d

k3d installs Rancher's lightweight k3s distribution and runs it in a Docker container. The advantage over KinD is that it's faster, smaller, and keeps state between reboots.

Note: You'll also need Docker installed to use k3d.

  • Create a cluster
k3d create --server-arg "--no-deploy=traefik"

INFO[0000] Created cluster network with ID 8babe89daae477b2eb14e08754194865a559c6def84b8c78b0055e21d977b430 
INFO[0000] Created docker volume  k3d-k3s-default-images 
INFO[0000] Creating cluster [k3s-default]               
INFO[0000] Creating server using docker.io/rancher/k3s:v0.9.1... 
INFO[0000] Pulling image docker.io/rancher/k3s:v0.9.1... 
INFO[0007] SUCCESS: created cluster [k3s-default]       
INFO[0007] You can now use the cluster with:

Before going any further, switch into the context of the new Kubernetes cluster:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

Note: these instructions were tested with v1.15.4

Option B - Install your local cluster with KinD

KinD has gained popularity amongst the Kubernetes community since it was featured at KubeCon last year.

Note: You'll also need Docker installed to use KinD.

  • Create a cluster
 kind create cluster
Creating cluster "kind" ...
⠊⠁ Ensuring node image (kindest/node:v1.15.3) 🖼 

Before going any further, switch into the context of the new Kubernetes cluster:

export KUBECONFIG="$(kind get kubeconfig-path --name=kind)"

Create a Cloud Access Token

The operator currently works with the Packet and DigitalOcean APIs to provision a host with a public IP.

Log into DigitalOcean.com, then click "API".

Screenshot: the API page in the DigitalOcean dashboard

Click Generate New Token

Copy the value from the UI and run the following to store the key as a file, such as $HOME/Downloads/do-access-token
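For example, on Linux or macOS (the token value below is a placeholder to replace with your own):

```shell
# Save the API token to a file for the operator to read later
# (replace the placeholder with the token copied from the DigitalOcean UI)
mkdir -p $HOME/Downloads
echo "paste-your-digitalocean-api-token-here" > $HOME/Downloads/do-access-token
```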

Deploy the inlets-operator into your cluster

You can install the operator using Helm 2 or 3, but there's an easier way using the arkade tool. It still uses the Helm chart, but automates everything and provides helpful CLI flags.

Install arkade. You can use it with any Kubernetes cluster, even k3s or a Raspberry Pi.

curl -sSL https://dl.get-arkade.dev | sudo sh

Note: if you're a Windows user, you can use Git Bash

Now find the various options for the operator app:

arkade install inlets-operator --help

Install the operator and specify the path for the DigitalOcean access token:

arkade install inlets-operator \
 --provider digitalocean \
 --region lon1 \
 --token-file $HOME/Downloads/do-access-token

If you've got a license for inlets-pro, then you can pass an additional argument and get support for TCP services in addition to HTTP, and also get end-to-end encryption built-in.

arkade install inlets-operator \
 --provider digitalocean \
 --region lon1 \
 --token-file $HOME/Downloads/do-access-token \
 --license $(cat $HOME/Downloads/inlets-pro-license.txt)

Create a test deployment

If you're an OpenFaaS user, then you could deploy the OpenFaaS gateway now, but let's try Nginx for simplicity:

kubectl run nginx-1 --image=nginx --port=80 --restart=Always
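If you prefer a declarative approach, here's a sketch of an equivalent Deployment manifest (assuming the same nginx-1 name, label, and port), which you could save and apply with `kubectl apply -f`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-1
  template:
    metadata:
      labels:
        run: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```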

You'll see the deployment created, but we hit the classic problem: it cannot be accessed from the internet.

Expose Nginx as a LoadBalancer

Now if you were using a cloud platform such as AWS EKS, GKE or DigitalOcean Kubernetes, you'd have an IP address assigned by their platform. We're using a local KinD cluster so that simply wouldn't work.

Fortunately inlets solves this problem.

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
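Instead of `kubectl expose`, you could also create the Service declaratively. A minimal sketch, assuming the pods carry the `run: nginx-1` label that `kubectl run` applies:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-1
spec:
  type: LoadBalancer
  selector:
    run: nginx-1
  ports:
  - port: 80
    targetPort: 80
```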

Now we see the familiar "Pending" status, but since we've installed the inlets-operator, a VM will be created on DigitalOcean and a tunnel will be established.

kubectl get svc -w

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        2m25s
nginx-1      LoadBalancer   10.104.90.5   <pending>     80:32037/TCP   1s

Keep an eye on the "EXTERNAL-IP" field for your IP.

Access your local cluster service from the Internet

Using the IP in "EXTERNAL-IP" you can now access Nginx:

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        4m34s
nginx-1      LoadBalancer   10.104.90.5   <pending>     80:32037/TCP   2m10s
nginx-1      LoadBalancer   10.104.90.5   206.189.117.254   80:32037/TCP   2m36s
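If you want to script against this, here's a small illustrative helper (not part of the operator) that pulls the EXTERNAL-IP column out of `kubectl get svc` output:

```shell
# Print the EXTERNAL-IP column for a named service, reading
# `kubectl get svc` style output from stdin
get_external_ip() {
  awk -v svc="$1" '$1 == svc { print $4 }'
}

# Usage against a live cluster:
#   kubectl get svc | get_external_ip nginx-1
```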

Here you can see the VM that was provisioned:

Screenshot: the provisioned VM in the DigitalOcean dashboard

Now access the site with the IP.

Screenshot: the Nginx welcome page served over the tunnel via the public IP

The exit-nodes created by the inlets-operator on DigitalOcean cost around 5 USD per month, using the cheapest VPS available with 512MB RAM. There may be cheaper options available.

Other cloud providers are also supported, such as Packet, Scaleway, AWS EC2, GCP, and Civo.

Management and the CRD

The operator also comes with a CRD, or Custom Resource Definition. Run the following to view the tunnels it has created:

kubectl get tunnel -o wide
NAME             SERVICE   HOSTSTATUS   HOSTIP          HOSTID      TUNNEL
nginx-1-tunnel   nginx-1   active       178.62.96.156   181822079   nginx-1-tunnel-client

Every LoadBalancer service will receive an IP, unless you apply an annotation to override it, for instance:

kubectl annotate svc/traefik -n kube-system dev.inlets.manage=false
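Equivalently, you can set the annotation in the Service manifest itself so the operator skips the service from the start. A sketch, using the traefik service as an example (the port shown is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  annotations:
    dev.inlets.manage: "false"
spec:
  type: LoadBalancer
  ports:
  - port: 80
```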

If you'd like to delete your exit-server, then you can do that by logging into your DigitalOcean dashboard, or by removing the service that was exposed for you:

kubectl delete service/nginx-1

The operator will manage the lifecycle of the VMs / cloud hosts on your behalf:

kubectl get tunnel
No resources found in default namespace.

kubectl logs deploy/inlets-operator

2020/02/23 14:35:52 Deleting exit-node for nginx-1: 181822079, ip: 178.62.96.156

Video demo

Short on time? Check out my video demo and walk-through:

Get a LoadBalancer for your RPi cluster

You can install Kubernetes with k3s using my tutorial.

Good news! Since the original tutorial, the instructions for running on k3s, k3d, minikube, kubeadm, and Raspberry Pi are now all exactly the same.

Wrapping up

By using inlets and the new inlets-operator, we can now get a public IP for Kubernetes services behind NAT, firewalls, and private networks.

If you completed the tutorial, let us know with a Tweet to @inletsdev.

At 5 USD per month, your private LoadBalancer is a fraction of the cost of a cloud Load Balancer, which comes in at 15 USD+ per month. I believe the cost comparison is almost irrelevant, though, because it's currently impossible to get a cloud load balancer from AWS or Google Cloud for your local KinD cluster. The inlets-operator changes that.

Need help? Join #inlets on OpenFaaS Slack

See also: