pegasus-io / the-cors-tester

A quick HTTP webserver with its webapp that requests cross origin resources

add service def for k8s deployment #3

Open Jean-Baptiste-Lasselle opened 4 years ago

Jean-Baptiste-Lasselle commented 4 years ago

A Deployment manages Pod creation by means of ReplicaSets. What it boils down to is that the Deployment creates Pods whose spec is taken from the Pod template.

  • service name website1, type ClusterIP
  • replicaset name rswebsite1, desired replicas 3, max 6, min 2 (see the sketch after this list)
  • On the machine where I run kubectl, I add to /etc/hosts the line 127.0.0.1 website2.pokusio.io website1.pokusio.io pokusio.io, so that I can run kubectl port-forward svc/website1 8001:3000 and kubectl port-forward svc/website2 8002:3000 in two parallel shell sessions,
  • and access the services by running curl -iv http://website1.pokusio.io:8001 or curl -I --head http://website2.pokusio.io:8002, or even by opening http://website1.pokusio.io:8001 in a web browser like Mozilla Firefox.
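
A minimal sketch of what that could look like, assuming the app listens on port 3000 (the port targeted by the port-forward commands above); the names website1 and rswebsite1 come from the list, the image name is hypothetical, and the min/max replica bounds would presumably map to a HorizontalPodAutoscaler (not shown):

apiVersion: v1
kind: Service
metadata:
  name: website1
spec:
  type: ClusterIP
  selector:
    app: website1
  ports:
  - port: 3000
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rswebsite1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website1
  template:
    metadata:
      labels:
        app: website1
    spec:
      containers:
      - name: website1
        image: pegasusio/the-cors-tester # hypothetical image name
        ports:
        - containerPort: 3000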

Alright, now if all this MetalLB thing happens inside k3s, what are the differences?

Ok, so now for the very important part: how to do that:

jbl@poste-devops-jbl-16gbram:~/hypocrate$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 50:46:5d:b6:ce:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.34/24 brd 192.168.1.255 scope global dynamic enp3s0
       valid_lft 60993sec preferred_lft 60993sec
    inet6 2a01:cb04:49a:9500:b4cc:669b:89fb:e22c/64 scope global temporary dynamic 
       valid_lft 1771sec preferred_lft 571sec
    inet6 2a01:cb04:49a:9500:5246:5dff:feb6:ceaa/64 scope global mngtmpaddr noprefixroute dynamic 
       valid_lft 1771sec preferred_lft 571sec
    inet6 fe80::5246:5dff:feb6:ceaa/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:50:0b:d6:21 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
jbl@poste-devops-jbl-16gbram:~/hypocrate$ docker network create jblreseau --driver bridge
e3c6596d6ac244399804b426766e6b62ea1fa2f0f141cadc9fd81168d00fe89e
jbl@poste-devops-jbl-16gbram:~/hypocrate$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 50:46:5d:b6:ce:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.34/24 brd 192.168.1.255 scope global dynamic enp3s0
       valid_lft 60987sec preferred_lft 60987sec
    inet6 2a01:cb04:49a:9500:b4cc:669b:89fb:e22c/64 scope global temporary dynamic 
       valid_lft 1765sec preferred_lft 565sec
    inet6 2a01:cb04:49a:9500:5246:5dff:feb6:ceaa/64 scope global mngtmpaddr noprefixroute dynamic 
       valid_lft 1765sec preferred_lft 565sec
    inet6 fe80::5246:5dff:feb6:ceaa/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:50:0b:d6:21 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: br-e3c6596d6ac2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:71:69:74:bc brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-e3c6596d6ac2
       valid_lft forever preferred_lft forever
jbl@poste-devops-jbl-16gbram:~/hypocrate$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
f3dbd58c0ab5        bridge              bridge              local
96af6a0bb692        host                host                local
e3c6596d6ac2        jblreseau           bridge              local
714d2b71ba4f        none                null                local
jbl@poste-devops-jbl-16gbram:~/hypocrate$ 

The Docker bridge network shows up in my VM as a standard NIC.

And from that VM (the qVM), and from the pmachine hosting it: how do we reach the Docker network IP addresses, that is the containers, or the MetalLB-managed IP address range in k3s?

Fine, now:

The Docker server creates and configures the host system's docker0 interface as an Ethernet bridge inside the Linux kernel; it can be used by the Docker containers to communicate with each other and with the outside world.
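
To see that bridge and which interfaces are attached to it, a couple of standard commands (run on the Docker host):

# list kernel bridges and their attached interfaces (bridge-utils package)
brctl show
# the same information with iproute2 only:
ip link show type bridge
ip link show master docker0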

[diagram: pegasusio architecture]

How to connect all Docker network bridges to one Open vSwitch, and the Open vSwitch to a VyOS router network interface: http://wiki.flav.com/wiki/Open_vSwitch_Tutorial

Bridge network interfaces, Docker and host networks

192.168.1.34: a plain VirtualBox VM

In the following examples, we have a host with address 192.168.1.34 on the 192.168.1.0/24 network. We are creating a Docker container that we want to expose as 192.168.1.117/24. The Linux network interface holding the 192.168.1.34 IP address is the enp3s0 device.

Start by creating a new bridge device. In this example, we'll create one called br-pokus:

sudo brctl addbr br-pokus
sudo ip link set br-pokus up

# NB: the commands below assume a root shell; prefix them with sudo otherwise

# Look at the configuration of interface enp3s0 and note the existing ip address (192.168.1.34/24) :
ip addr show enp3s0

# ----
# example output
# 
# 2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
#     link/ether 50:46:5d:b6:ce:aa brd ff:ff:ff:ff:ff:ff
#     inet 192.168.1.34/24 brd 192.168.1.255 scope global dynamic enp3s0
#        valid_lft 57868sec preferred_lft 57868sec
#     inet6 2a01:cb04:49a:9500:b4cc:669b:89fb:e22c/64 scope global temporary dynamic 
#        valid_lft 1771sec preferred_lft 571sec
#     inet6 2a01:cb04:49a:9500:5246:5dff:feb6:ceaa/64 scope global mngtmpaddr noprefixroute dynamic 
#        valid_lft 1771sec preferred_lft 571sec
#     inet6 fe80::5246:5dff:feb6:ceaa/64 scope link 
#        valid_lft forever preferred_lft forever
# 

# Look at your current routes and note the default route device : 
ip route

# --- 
# note in the example output below that the default route uses 
# device [enp3s0] and that [192.168.1.1] is my home router
# ----
# default via 192.168.1.1 dev enp3s0 proto static metric 100 
# 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
# 172.20.0.0/16 dev br-e3c6596d6ac2 proto kernel scope link src 172.20.0.1 linkdown 
# 192.168.1.0/24 dev enp3s0 proto kernel scope link src 192.168.1.15 metric 100 

# So we will add the [enp3s0] device to the network bridge we created : 
brctl addif br-pokus enp3s0

# Configure the [br-pokus] bridge with the address that used to belong to [enp3s0]:
ip addr del 192.168.1.34/24 dev enp3s0
ip addr add 192.168.1.34/24 dev br-pokus

# And finally redefine the default ip route to go through the [br-pokus] bridge instead of [enp3s0]
ip route del default
ip route add default via 192.168.1.1 dev br-pokus

# At this point, verify that you still have network connectivity from the VM to the outside internet
curl http://google.com/

# Now create a docker container, and try to reach its ip address

# 1./ create the container
docker run -d --name jbloueb larsks/simpleweb
# This gives us the normal eth0 interface inside the container, but we're going to ignore that and add a new one.
# 
# 2./ Create a veth interface pair:
ip link add jblweb-int type veth peer name jblweb-ext
# 3./ Add the jblweb-ext end of the pair to the br-pokus bridge : 
brctl addif br-pokus jblweb-ext
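# NB: [docker-pid] used below is not a stock Docker subcommand; it's presumably
# a tiny helper from the original article, equivalent to something like:
docker-pid() { docker inspect --format '{{.State.Pid}}' "$1"; }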
# 4./ And move the jblweb-int end into the network namespace of the container:
ip link set netns $(docker-pid jbloueb) dev jblweb-int
# 5./ Next, we'll use the [nsenter] command (part of the [util-linux] package) to run some commands inside the jbloueb container. 
#     5.A/ Start by bringing up the link inside the container:
nsenter -t $(docker-pid jbloueb) -n ip link set jblweb-int up
#     5.B/ Assign our target ip address to the interface: 
nsenter -t $(docker-pid jbloueb) -n ip addr add 192.168.1.117/24 dev jblweb-int
#     5.C/ And set a new default route inside the container:
nsenter -t $(docker-pid jbloueb) -n ip route del default
nsenter -t $(docker-pid jbloueb) -n ip route add default via 192.168.1.1 dev jblweb-int

# after that, we can check that the container is reachable via : 
curl http://192.168.1.117/hello.html

# this curl can be tested from the qVM, of IP address 192.168.1.34
# and from the pmachine, of IP address 192.168.1.15, on which the VirtualBox VM with IP address 192.168.1.34 runs
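
# (sketch) to roll the host back to its original state, reverse the steps:
ip route del default
ip addr del 192.168.1.34/24 dev br-pokus
brctl delif br-pokus enp3s0
ip addr add 192.168.1.34/24 dev enp3s0
ip route add default via 192.168.1.1 dev enp3s0
ip link set br-pokus down
brctl delbr br-pokus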

We'll see

Jean-Baptiste-Lasselle commented 4 years ago

MetalLB / k3s

https://blog.kubernauts.io/k3s-with-metallb-on-multipass-vms-ac2b37298589

K3S with MetalLB on Multipass VMs

k3s with MetalLB on Multipass VMs

Last update: May 22nd, 2020
The repo has been renamed to Bonsai :-)
https://github.com/arashkaffamanesh/bonsai

Punica granatum, Moyogi style, about 50 years old, from the Bonsai museum in Pescia, Italy. Source: https://en.wikipedia.org/wiki/Bonsai

k3s from Rancher Labs recently surpassed 10k stars on GitHub, thanks to the community, during KubeCon in San Diego, and was GA'ed through the 1.0 release announcement. I'm sure k3s will play a central role in the cloud-native world, not only for edge use cases: it will replace a large number of k8s deployments in the data center and in the cloud, or at least it will surpass Rancher's own RKE implementation in popularity.

This post is about how to extend k3s’ load balancing functionality with MetalLB for load balancing on your local machine or later on on-prem environments, on the edge or on bare-metal clouds.

This post is NOT about k3s; if you'd like to learn about k3s under the hood, please enjoy Darren Shepherd's talk "K3s under the Hood" at KubeCon in San Diego:

And don’t miss this great post “Why K3s Is the Future of Kubernetes at the Edge” by Caroline Tarbett.

k3s cluster on your local machine

k3s comes with the traefik ingress controller and a custom service load balancer implementation for internal load balancing of your microservices on k3s-launched k8s clusters.

You can use k3d or this k3s deployment guide on Ubuntu VMs launched with Canonical’s multipass on your machine to follow this guide.

k3d is the easiest way to get k3s running on your machine: it uses Docker to launch a multi-node k3s cluster within a minute on your local machine, very similar to KIND and other solutions out there.

In this first post, I'm going to introduce a k3s deployment on multipass VMs on Mac / Linux with MetalLB for load balancing, and shed some light on Ingress Controllers and Ingress with and without load balancing capabilities. By the way, this guide should most probably work on Windows too, with some headaches.

In the next post, I’ll introduce the MetalLB integration with a k3d launched k3s cluster on your machine.

About Multipass

With Canonical's Multipass you can run Ubuntu VMs to build a mini VM cloud on your machine, somewhat similar to Vagrant with Virtualbox.

Multipass comes with cloud-init support to launch instances of Ubuntu just like on AWS, Azure, Google, etc., using Hyperkit, KVM, Virtualbox or Hyper-V on your machine.

About Services, Ingress Controller, Ingress Object and Load Balancer and LB Service

Before we go through the easy k3s installation, let's talk about services, ingress controllers, the ingress object, and load balancing, and understand how the MetalLB load balancer implementation, combined with Traefik's ingress and load balancer implementation on k3s, works on your local machine or in a real multi-node bare metal environment.

In Kubernetes, a service defined with the type LoadBalancer acts as an ingress, an entry point, to your service in your cluster, but it needs a load balancer implementation, since Kubernetes doesn't provide an external load balancer implementation itself, for good reasons.

MetalLB is a software defined load balancer implementation for bare metal / edge environments. Let's discuss why we use an Ingress, and not only a LoadBalancer, to provide the ingress functionality to our services running in a k8s / k3s cluster.

An ingress provides a means to get an entry point to the services within a cluster; in other words, an ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. A service itself provides a cluster IP and acts as an internal LoadBalancer.

An Ingress handles load balancing at layer 7. The Ingress Controller acts on ingress objects, which define the routing rules for a particular application, using a host, path or other rules to send traffic to service endpoints, and on to the Pods.

Ingress is a separate declaration that does not depend on the Service type, and with Ingress multiple Services and backends can be managed with a single load balancer (→ enjoy this article about Load Balancing and Reverse Proxying for Kubernetes Services).

A service defined with the type LoadBalancer is exposed with an external IP to the outside of the cluster (in most cases to the internet), this IP is not bound to a physical interface and is usually handed out through a switch / router via DHCP.

An Ingress without a load balancer service in front of it is a Single Point of Failure; that's why an ingress is usually combined with a Load Balancer in front, to provide High Availability along with intelligent capabilities through the Ingress Controller, such as path-based or (sub-)domain-based routing with TLS termination, and other capabilities defined through annotations in the ingress resource.

As already mentioned k3s comes with the Traefik Ingress Controller as the default Ingress Controller, which allows us to define an Ingress object for HTTP(S) traffic to be able to expose multiple services through a single IP.

By creating an ingress for a service, the ingress controller will create a single entry-point to the defined service in the ingress resource on every node in the cluster.
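
To illustrate, a minimal Ingress for the API versions of that era (host and service names are hypothetical) could look like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80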

The ingress controller service itself on k3s is exposed with the type LoadBalancer by k3s' service load balancer implementation; it scans all ingress objects in the cluster and makes sure that the requested hostname or path is routed to the right service in our k8s cluster.

With that said, k3s provides out-of-the-box ingress and in-cluster load balancing through built-in k8s services capabilities, since a service in k8s is an internal load balancer as well.

Nice to know

A k8s service provides internal load balancing capabilities to the endpoints of a service (the containers of a pod).

An ingress provides an entry point from the outside world of the cluster to a service defined in the ingress resource.

A service defined with the type LoadBalancer is exposed with an external IP and doesn't provide any logic or rules for routing the client requests to a service.

A load balancer implementation is up to us or the cloud service provider; in bare metal environments, MetalLB is the (only?) software defined load balancer implementation which we can use.

A MetalLB implementation without an ingress controller support doesn’t make sense in most cases, since MetalLB doesn’t provide any support to route client requests based on (sub-) domain name or path or provide TLS termination on the LB side out of the box.
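
To make that concrete, a Service of type LoadBalancer is declared like this (a minimal sketch with hypothetical names); on bare metal it keeps a pending EXTERNAL-IP until an implementation like MetalLB hands one out:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080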

k3s deployment on Multipass VMs

You need Multipass installed on your machine, head to multipass.run, download the multipass package and install it:

$ wget https://github.com/canonical/multipass/releases/download/v1.0.0/multipass-1.0.0+mac-Darwin.pkg
$ sudo installer -target / -verbose -pkg multipass-1.0.0+mac-Darwin.pkg
# verify the version
$ multipass version
multipass 1.0.0+mac
multipassd 1.0.0+mac

If you’d like to follow this guide, please clone the repo and run only the deploy-bonsai.sh script:

$ git clone https://github.com/arashkaffamanesh/bonsai
$ cd bonsai
$ ./deploy-bonsai.sh

Before you run the script, you might like to know what you're running: the 8-deploy-only-k3s.sh script includes 2 scripts:

1-deploy-multipass-vms.sh
and
2-deploy-k3s-with-portainer.sh

The first included script, 1-deploy-multipass-vms.sh, launches 4 nodes, node{1..4}, and writes the hosts entries in your /etc/hosts file, which is why you need to provide your sudo password. Your /etc/hosts is backed up, and the hosts entries are copied into the hosts files of your multipass VMs as well.

The second included script, ./2-deploy-k3s-with-portainer.sh, deploys the k3s master on node1 and the worker nodes on node{2..4}, copies the k3s.yaml (kube config) to your machine, taints the master node to not be schedulable, labels the worker nodes with the node role, deploys Portainer, and finally prints the nodes and brings up Portainer in a new tab in your browser:
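
The taint and label steps presumably boil down to something like this (a sketch, not the script's exact commands):

$ kubectl taint node node1 node-role.kubernetes.io/master=true:NoSchedule
$ kubectl label node node2 node3 node4 node-role.kubernetes.io/node=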

$ export KUBECONFIG=k3s.yaml
$ kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node4   Ready    node     144m   v1.16.3-k3s.2
node2   Ready    node     149m   v1.16.3-k3s.2
node3   Ready    node     146m   v1.16.3-k3s.2
node1   Ready    master   153m   v1.16.3-k3s.2
$ multipass ls
Name    State     IPv4            Image
node1   Running   192.168.64.19   Ubuntu 18.04 LTS
node2   Running   192.168.64.20   Ubuntu 18.04 LTS
node3   Running   192.168.64.21   Ubuntu 18.04 LTS
node4   Running   192.168.64.22   Ubuntu 18.04 LTS

The whole installation should take about 4 minutes, depending on your internet speed; at the end we'll get something like this (screenshot not included here):

Jean-Baptiste-Lasselle commented 4 years ago

MetalLB

Right, ok, I know what I'm going to do to retry MetalLB in k3s:

The configuration applied by the article referenced above:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.64.23-192.168.64.200
Jean-Baptiste-Lasselle commented 4 years ago

otherwise, try inlets, but with VMs on a local network if possible

Jean-Baptiste-Lasselle commented 4 years ago

One good article

Load balancing

This article is an introduction to MetalLB, a solution to expose services if the Kubernetes cluster is not deployed on a Cloud platform.

Intended audience: Kubernetes administrators.

By Abdellah Seddik TAHARDJEBBAR, Cloud Consultant @ Objectif Libre

Introduction

Kubernetes is the new hot topic in the IT world today. While widely adopted, some problems are still hard to solve, including how to expose services outside the cluster. If your Kubernetes cluster is deployed on a Cloud platform such as OpenStack, AWS or GCP, the cluster can deploy load balancers exposed by the Cloud platform. But not all Kubernetes clusters are deployed on Cloud platforms: Kubernetes can also be deployed on bare metal servers. In that case, services of type LoadBalancer will remain in the “pending” state indefinitely when created. Bare metal cluster operators are left with two tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both options have significant downsides for production use. A new solution called MetalLB has been introduced to help bare metal cluster operators.
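
For reference, the NodePort workaround mentioned above looks roughly like this (hypothetical names); clients then have to target <any-node-ip>:30080 themselves, which is what makes it awkward for production:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    nodePort: 30080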

MetalLB

MetalLB is a Loadbalancer implementation for bare metal Kubernetes clusters, based on standard routing protocols.

Concepts

In a Cloud environment, the creation of the Loadbalancer and the allocation of the external IP address is done by the Cloud platform. In a bare metal cluster, MetalLB is responsible for that allocation. For this a network address pool must be reserved for MetalLB. Once MetalLB has assigned an external IP address to a service, it needs to redirect the traffic from the external IP to the cluster. To do so, MetalLB uses standard protocols such as ARP, NDP, or BGP.

Layer 2 mode (ARP/NDP)

In this mode a service is owned by one node in the cluster. It is implemented by announcing that the layer 2 address (MAC address) matching the external IP is the MAC address of the node. To external devices, the node simply has multiple IP addresses.
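
A quick way to observe this from another machine on the same L2 segment is to ask who answers for the external IP (interface name is illustrative, IP taken from the demo below, tooling assumed to be installed on the client):

$ arping -I eth0 192.168.143.230
$ ip neigh show 192.168.143.230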

Architecture

With the layer 2 mode, MetalLB runs two components:

  • Cluster-wide controller: this component is responsible for receiving allocation requests.
  • Speaker: the speaker must be installed on each node in the cluster; it advertises the layer 2 address.

Demo

First, install MetalLB using the provided manifest:

$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

Now new pods have been created: one controller and three speakers:

$ kubectl get pod -n metallb-system -o wide
controller-7cc9c87cfb-dqz6z 1/1 Running 0 145m 10.233.70.3     node5 <none> <none>
speaker-2pl5m               1/1 Running 0 145m 192.168.121.170 node3 <none> <none>
speaker-5ndrq               1/1 Running 0 145m 192.168.121.224 node4 <none> <none>
speaker-rln5v               1/1 Running 0 145m 192.168.121.72  node5 <none> <none> 

The next step is to configure MetalLB using a ConfigMap. We set the operation mode (layer 2 or BGP) and the external IP address range:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: my-ip-space
        protocol: layer2
        addresses:
          - 192.168.143.230-192.168.143.250

In this configuration we tell MetalLB to hand out addresses from the 192.168.143.230-192.168.143.250 range, using layer 2 mode (protocol: layer2).
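
Assuming the ConfigMap above is saved as metallb-config.yaml, it is applied like any other manifest:

$ kubectl apply -f metallb-config.yaml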

To test our load balancer, we need to create a service of type LoadBalancer:

$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/tutorial-2.yaml

Now we can see that a new LoadBalancer service was created, and MetalLB successfully assigned it an external IP address from the pool that we specified in the configuration:

$  kubectl get svc nginx
NAME     TYPE         CLUSTER-IP   EXTERNAL-IP     PORT(S)      AGE
nginx    LoadBalancer 10.233.30.62 192.168.143.230 80:31937/TCP 6h26m

Now if we try to access the service, the client sends an ARP request to find the MAC address of the external IP address, and one of the speakers responds with the MAC address of its node.

$ kubectl logs -l component=speaker -n metallb-system --since=1m
{"caller":"arp.go:102","interface":"eth2","ip":"192.168.143.230","msg":"got ARP request for service IP, sending response","responseMAC":"52:54:00:a8:63:c5","senderIP":"192.168.143.1","senderMAC":"52:54:00:bd:4a:3e","ts":"2019-04-25T14:21:58.369396026Z"}

{"caller":"arp.go:102","interface":"eth2","ip":"192.168.143.230","msg":"got ARP request for service IP, sending response","responseMAC":"52:54:00:a8:63:c5","senderIP":"192.168.143.1","senderMAC":"52:54:00:bd:4a:3e","ts":"2019-04-25T14:22:29.145677Z"}

Using the layer 2 mode to create a load balancer is very simple, but it is also limited, because a service can be accessed through one and only one node. In a production environment it is best to use the BGP mode.

BGP

With the BGP mode the speakers establish a BGP peering with routers outside of the cluster, and tell those routers how to forward traffic to the service IPs. Using BGP allows for true load balancing across multiple nodes, and fine-grained traffic control thanks to BGP’s policy mechanisms.

Note :

In this demo we will not describe the router configuration. We assume that the router accepts all BGP connections coming from the speakers.

The following architecture is used in this demo (diagram not included here):

Just like with the first mode, install MetalLB using the provided manifest:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

And to configure MetalLB with a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - my-asn: 64500
      peer-asn: 64500
      peer-address: 192.168.121.10
    address-pools:
    - name: my-ip-space
      protocol: bgp
      addresses:
      - 192.168.143.230-192.168.143.250

In addition to the external IP pool, we need to define the AS number that will be used by the speakers, and the IP addresses of the remote peers with their AS numbers.

We can see on the router that new peers have been added to the neighbors table, one per speaker.

R1#show ip bgp summary 
BGP router identifier 192.168.143.2, local AS number 64500
BGP table version is 23, main routing table version 23
1 network entries using 144 bytes of memory
1 path entries using 80 bytes of memory
1/1 BGP path/bestpath attribute entries using 136 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 360 total bytes of memory
BGP activity 5/4 prefixes, 13/12 paths, scan interval 60 secs

Neighbor        V          AS    MsgRcvd MsgSent   TblVer InQ OutQ  Up/Down  State/PfxRcd
192.168.121.72  4        64500       2       4       23    0    0   00:00:24        0
192.168.121.170 4        64500       2       5       23    0    0   00:00:24        0
192.168.121.224 4        64500       2       4       23    0    0   00:00:24        0

The next step is to create a service and let MetalLB do its job:

$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/tutorial-2.yaml

Just after the creation of the service we can see in the speaker logs that it announces the new external IP address to the router:

{"caller":"main.go:229","event":"serviceAnnounced","ip":"192.168.143.230","msg":"service has IP, announcing","pool":"my-ip-space","protocol":"bgp","service":"default/nginx","ts":"2019-04-25T22:14:52.082805682Z"}
{"caller":"main.go:231","event":"endUpdate","msg":"end of service update","service":"default/nginx","ts":"2019-04-25T22:14:52.082823764Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"default/nginx","ts":"2019-04-25T22:14:49.878731257Z"}
{"caller":"main.go:172","event":"endUpdate","msg":"end of service update","service":"default/nginx","ts":"2019-04-25T22:14:49.878992728Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"default/nginx","ts":"2019-04-25T22:14:49.885773857Z"}
{"caller":"bgp_controller.go:201","event":"updatedAdvertisements","ip":"192.168.143.230","msg":"making advertisements using BGP","numAds":1,"pool":"my-ip-space","protocol":"bgp","service":"default/nginx","ts":"2019-04-25T22:14:49.886003805Z"}

On the router we can see that a new network (external IP address) was added with three paths. Each path is linked to one of the nodes:

R1#show ip route bgp   
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area 
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       + - replicated route, % - next hop override

Gateway of last resort is not set

      192.168.143.0/32 is subnetted, 1 subnets
B        192.168.143.230 [200/0] via 192.168.121.224, 00:00:15
                         [200/0] via 192.168.121.170, 00:00:15
                         [200/0] via 192.168.121.72, 00:00:15

Using BGP as a load-balancing mechanism allows you to use standard router hardware. However, it comes with downsides as well. You can find out more about these limitations and how to mitigate them here.

Conclusion

MetalLB allows you to create Kubernetes LoadBalancer services without needing to deploy your cluster on a cloud platform. MetalLB has two modes of operation: a simple but limited L2 mode, which doesn't need any external hardware or configuration, and a BGP mode, which is more robust and production-ready but requires more setup on the network side.

Jean-Baptiste-Lasselle commented 4 years ago
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.120-192.168.1.172 # Change the range here
# create the cluster
k3d get clusters
k3d create cluster --help
# NB: the three worker mappings below all reuse host port 8091, which would
# collide; the OPTIONS loop further down builds distinct host ports instead
k3d create cluster jblCluster --api-port 6550  -p 8081:80@master[0]  -p 8091:80@worker[0]  -p 8091:80@worker[1]  -p 8091:80@worker[2] --masters 1 --workers 9
k3d delete cluster jblCluster 

export OPTIONS=" -p 8081:80@master[0]"
for VARIABLE in 1 2 3 4 5 6 7 8 9
do
    export OPTIONS="${OPTIONS} -p 0.0.0.0:$((8099 - ${VARIABLE})):80@worker[$(( ${VARIABLE} - 1 ))]"
done

echo "${OPTIONS}" 
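# expected expansion: distinct host ports counting down from 8098 for worker[0]
# to 8090 for worker[8] (this matches the docker ps output further below):
# " -p 8081:80@master[0] -p 0.0.0.0:8098:80@worker[0] ... -p 0.0.0.0:8090:80@worker[8]"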

k3d create cluster jblCluster --api-port 6550 --k3s-server-arg "\--tls-san \"192.168.1.28\" \--service-cidr \"192.168.1.0/24\" \--disable servicelb" --masters 1 --workers 9 ${OPTIONS}

# retrieve KUBECONFIG
export KUBECONFIG=$(k3d get kubeconfig jblCluster)
kubectl get all,nodes

# - flannel
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

mkdir -p ./k3s/flannel/

# export DESIRED_FLANNEL_VERSION='2140ac876ef134e0ed5af15c65e414cf26827915'
export DESIRED_FLANNEL_VERSION=v0.12.0
curl -L https://raw.githubusercontent.com/coreos/flannel/${DESIRED_FLANNEL_VERSION}/Documentation/kube-flannel.yml --output ./k3s/flannel/kube-flannel.yml
# https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
# curl -L https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml --output ./k3s/flannel/kube-flannel.yml

kubectl apply -f ./k3s/flannel/kube-flannel.yml

# - allow masters to create pods
kubectl taint nodes --all node-role.kubernetes.io/master-

# - Install metal lb
export DESIRED_METALLB_VERSION='v0.8.1'
mkdir -p ./k3s/metallb/

curl -L https://raw.githubusercontent.com/google/metallb/${DESIRED_METALLB_VERSION}/manifests/metallb.yaml --output ./k3s/metallb/metallb.yaml
kubectl apply -f ./k3s/metallb/metallb.yaml
# Add a configmap to customize/override the MetalLB configuration used by the pods
kubectl apply -f ./k3s/metallb/metallb.configmap.yaml

kubectl get all -n metallb-system
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app: metallb
  name: metallb-system
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
  - SYS_ADMIN
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 7472
    min: 7472
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: controller
  namespace: metallb-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: metallb
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
rules:
- apiGroups:
  - ''
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - ''
  resources:
  - services/status
  verbs:
  - update
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
rules:
- apiGroups:
  - ''
  resources:
  - services
  - endpoints
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resourceNames:
  - speaker
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: metallb
  name: metallb-system:speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: metallb
  name: config-watcher
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: metallb
    component: speaker
  name: speaker
  namespace: metallb-system
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: speaker
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: METALLB_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        image: metallb/speaker:v0.8.1
        imagePullPolicy: IfNotPresent
        name: speaker
        ports:
        - containerPort: 7472
          name: monitoring
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
            - SYS_ADMIN
            drop:
            - ALL
          readOnlyRootFilesystem: true
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metallb
    component: controller
  name: controller
  namespace: metallb-system
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      annotations:
        prometheus.io/port: '7472'
        prometheus.io/scrape: 'true'
      labels:
        app: metallb
        component: controller
    spec:
      containers:
      - args:
        - --port=7472
        - --config=config
        image: metallb/controller:v0.8.1
        imagePullPolicy: IfNotPresent
        name: controller
        ports:
        - containerPort: 7472
          name: monitoring
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
Jean-Baptiste-Lasselle commented 4 years ago

Ok, I got the cluster, with network configuration, and now let's deploy something that has a Service of LoadBalancer type, like the traefik ingress controller:

git clone https://github.com/containous/traefik cheesie/ && cd cheesie/ && git checkout v1.7 && cd ../

kubectl apply -f cheesie/examples/k8s/traefik-deployment.yaml
kubectl apply -f cheesie/examples/k8s/traefik-rbac.yaml 
kubectl apply -f cheesie/examples/k8s/ui.yaml 

# And a simple app deployment to kubernetes there

sed -i "s#extensions/v1beta1#apps/v1#g" cheesie/examples/k8s/cheese-deployments.yaml
kubectl create namespace cheese 

ls cheesie/examples/k8s/cheese-*.yaml > cheese.deploy.list

cat cheese.deploy.list |  while IFS=" " read manifest; do kubectl apply -f "$manifest"; done
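
Assuming the stock traefik cheese example hosts (stilton.minikube, cheddar.minikube, wensleydale.minikube) and the MetalLB-assigned IP 192.168.1.120 visible below, routing through the LoadBalancer could be checked with an explicit Host header, e.g.:

curl -H "Host: stilton.minikube" http://192.168.1.120/
curl -H "Host: cheddar.minikube" http://192.168.1.120/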

Have a look:

jbl@pegasusio:~$ kubectl get all --all-namespaces|grep traefik
kube-system      pod/helm-install-traefik-kmwmw                    0/1     Completed   0          48m
kube-system      pod/svclb-traefik-2jh8f                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-m6hj9                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-lrvkg                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-h262z                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-89qv8                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-rll9l                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-mxxjr                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-lgw2z                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-r2c4b                           2/2     Running     0          48m
kube-system      pod/svclb-traefik-69rhs                           2/2     Running     0          48m
kube-system      pod/traefik-758cd5fc85-vc92g                      1/1     Running     0          48m
kube-system      pod/traefik-ingress-controller-78b4959fdf-l5t9t   1/1     Running     0          9m27s
kube-system   service/traefik-prometheus        ClusterIP      10.43.247.3     <none>          9100/TCP                      48m
kube-system   service/traefik-ingress-service   NodePort       10.43.133.21    <none>          80:30341/TCP,8080:32710/TCP   9m27s
kube-system   service/traefik-web-ui            ClusterIP      10.43.110.53    <none>          80/TCP                        9m14s
kube-system   service/traefik                   LoadBalancer   10.43.125.173   192.168.1.120   80:32467/TCP,443:30042/TCP    48m
kube-system      daemonset.apps/svclb-traefik             10        10        10      10           10          <none>                        48m
kube-system      deployment.apps/traefik                      1/1     1            1           48m
kube-system      deployment.apps/traefik-ingress-controller   1/1     1            1           9m27s
kube-system      replicaset.apps/traefik-758cd5fc85                      1         1         1       48m
kube-system      replicaset.apps/traefik-ingress-controller-78b4959fdf   1         1         1       9m27s
kube-system   job.batch/helm-install-traefik   1/1           20s        48m
jbl@pegasusio:~$ docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                            NAMES
e7b8375b0e87        rancher/k3d-proxy:v3.0.0-rc.6   "/bin/sh -c nginx-pr…"   49 minutes ago      Up 49 minutes       80/tcp, 0.0.0.0:6550->6443/tcp   k3d-jblCluster-masterlb
4fede0a08938        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8090->80/tcp             k3d-jblCluster-worker-8
da789e45801f        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8091->80/tcp             k3d-jblCluster-worker-7
8b5cce81f90d        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8092->80/tcp             k3d-jblCluster-worker-6
d50d8ec7ba70        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8093->80/tcp             k3d-jblCluster-worker-5
f73faf6e6725        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8094->80/tcp             k3d-jblCluster-worker-4
e926b5d67b46        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8095->80/tcp             k3d-jblCluster-worker-3
c08a9bad0011        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8096->80/tcp             k3d-jblCluster-worker-2
2378444a14b9        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8097->80/tcp             k3d-jblCluster-worker-1
fe0f5412620e        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         49 minutes ago      Up 49 minutes       0.0.0.0:8098->80/tcp             k3d-jblCluster-worker-0
7c9763ba8c63        rancher/k3s:v1.18.4-k3s1        "/bin/k3s server --t…"   49 minutes ago      Up 49 minutes       0.0.0.0:8081->80/tcp             k3d-jblCluster-master-0
jbl@pegasusio:~$ ping -c 4 192.168.1.120
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
From 192.168.1.35 icmp_seq=1 Destination Host Unreachable
From 192.168.1.35 icmp_seq=2 Destination Host Unreachable
From 192.168.1.35 icmp_seq=3 Destination Host Unreachable
^C
--- 192.168.1.120 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3077ms
pipe 4
jbl@pegasusio:~$ ping -c 4 192.168.1.35
PING 192.168.1.35 (192.168.1.35) 56(84) bytes of data.
64 bytes from 192.168.1.35: icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from 192.168.1.35: icmp_seq=2 ttl=64 time=0.035 ms
^C
--- 192.168.1.35 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1033ms
rtt min/avg/max/mdev = 0.035/0.039/0.044/0.007 ms
jbl@pegasusio:~$ curl -I http://pegasusio.io:8094/
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 517
Content-Type: text/html
Date: Thu, 16 Jul 2020 11:49:05 GMT
Etag: "5784f6c9-205"
Last-Modified: Tue, 12 Jul 2016 13:55:21 GMT
Server: nginx/1.11.1
Vary: Accept-Encoding

jbl@pegasusio:~$ kubectl get all,ingresses --all-namespaces|grep traefik
kube-system      pod/helm-install-traefik-kmwmw                    0/1     Completed   0          52m
kube-system      pod/svclb-traefik-2jh8f                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-m6hj9                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-lrvkg                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-h262z                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-89qv8                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-rll9l                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-mxxjr                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-lgw2z                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-r2c4b                           2/2     Running     0          51m
kube-system      pod/svclb-traefik-69rhs                           2/2     Running     0          51m
kube-system      pod/traefik-758cd5fc85-vc92g                      1/1     Running     0          51m
kube-system      pod/traefik-ingress-controller-78b4959fdf-l5t9t   1/1     Running     0          13m
kube-system   service/traefik-prometheus        ClusterIP      10.43.247.3     <none>          9100/TCP                      51m
kube-system   service/traefik-ingress-service   NodePort       10.43.133.21    <none>          80:30341/TCP,8080:32710/TCP   13m
kube-system   service/traefik-web-ui            ClusterIP      10.43.110.53    <none>          80/TCP                        13m
kube-system   service/traefik                   LoadBalancer   10.43.125.173   192.168.1.120   80:32467/TCP,443:30042/TCP    51m
kube-system      daemonset.apps/svclb-traefik             10        10        10      10           10          <none>                        51m
kube-system      deployment.apps/traefik                      1/1     1            1           51m
kube-system      deployment.apps/traefik-ingress-controller   1/1     1            1           13m
kube-system      replicaset.apps/traefik-758cd5fc85                      1         1         1       51m
kube-system      replicaset.apps/traefik-ingress-controller-78b4959fdf   1         1         1       13m
kube-system   job.batch/helm-install-traefik   1/1           20s        52m
kube-system   ingress.extensions/traefik-web-ui   <none>   traefik-ui.minikube                                      192.168.1.120   80      13m
jbl@pegasusio:~$ docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                            NAMES
e7b8375b0e87        rancher/k3d-proxy:v3.0.0-rc.6   "/bin/sh -c nginx-pr…"   54 minutes ago      Up 54 minutes       80/tcp, 0.0.0.0:6550->6443/tcp   k3d-jblCluster-masterlb
4fede0a08938        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8090->80/tcp             k3d-jblCluster-worker-8
da789e45801f        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8091->80/tcp             k3d-jblCluster-worker-7
8b5cce81f90d        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8092->80/tcp             k3d-jblCluster-worker-6
d50d8ec7ba70        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8093->80/tcp             k3d-jblCluster-worker-5
f73faf6e6725        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8094->80/tcp             k3d-jblCluster-worker-4
e926b5d67b46        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8095->80/tcp             k3d-jblCluster-worker-3
c08a9bad0011        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8096->80/tcp             k3d-jblCluster-worker-2
2378444a14b9        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8097->80/tcp             k3d-jblCluster-worker-1
fe0f5412620e        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         54 minutes ago      Up 54 minutes       0.0.0.0:8098->80/tcp             k3d-jblCluster-worker-0
7c9763ba8c63        rancher/k3s:v1.18.4-k3s1        "/bin/k3s server --t…"   54 minutes ago      Up 54 minutes       0.0.0.0:8081->80/tcp             k3d-jblCluster-master-0
jbl@pegasusio:~$ hostname
pegasusio
jbl@pegasusio:~$ ping -c 4 pegasusio
PING pegasusio.io (127.0.1.1) 56(84) bytes of data.
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=4 ttl=64 time=0.050 ms

--- pegasusio.io ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3067ms
rtt min/avg/max/mdev = 0.044/0.050/0.055/0.008 ms
jbl@pegasusio:~$ ip addr | grep 168
    inet 192.168.1.35/24 brd 192.168.1.255 scope global dynamic enp0s8
jbl@pegasusio:~$ ping -c 4 pegasusio.io
PING pegasusio.io (127.0.1.1) 56(84) bytes of data.
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from pegasusio.io (127.0.1.1): icmp_seq=4 ttl=64 time=0.046 ms

--- pegasusio.io ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3064ms
rtt min/avg/max/mdev = 0.032/0.044/0.051/0.011 ms
jbl@pegasusio:~$ cat /etc/hosts
127.0.0.1   localhost
127.0.1.1   pegasusio.io    pegasusio

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
jbl@pegasusio:~$ curl  http://pegasusio.io:8095/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ curl  http://pegasusio.io:8094/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ curl  http://pegasusio.io:8092/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ 
jbl@pegasusio:~$ # 192.168.1.35 cheddar.minikube
jbl@pegasusio:~$ sudo vi /etc/hosts
jbl@pegasusio:~$ curl  http://cheddar.minikube:8092/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
jbl@pegasusio:~$ curl  http://cheddar.minikube:8091/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
jbl@pegasusio:~$ curl  http://cheddar.minikube:8092/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
jbl@pegasusio:~$ curl  http://cheddar.minikube:8093/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
jbl@pegasusio:~$ cat /etc/hosts
127.0.0.1   localhost
127.0.1.1   pegasusio.io    pegasusio

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# test deployments

192.168.1.35 cheddar.minikube
jbl@pegasusio:~$ kubectl get all,ingresses --all-namespaces|grep traefik
kube-system      pod/helm-install-traefik-kmwmw                    0/1     Completed   0          79m
kube-system      pod/svclb-traefik-2jh8f                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-m6hj9                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-lrvkg                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-h262z                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-89qv8                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-rll9l                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-mxxjr                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-lgw2z                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-r2c4b                           2/2     Running     0          78m
kube-system      pod/svclb-traefik-69rhs                           2/2     Running     0          78m
kube-system      pod/traefik-758cd5fc85-vc92g                      1/1     Running     0          78m
kube-system      pod/traefik-ingress-controller-78b4959fdf-l5t9t   1/1     Running     0          40m
kube-system   service/traefik-prometheus        ClusterIP      10.43.247.3     <none>          9100/TCP                      78m
kube-system   service/traefik-ingress-service   NodePort       10.43.133.21    <none>          80:30341/TCP,8080:32710/TCP   40m
kube-system   service/traefik-web-ui            ClusterIP      10.43.110.53    <none>          80/TCP                        39m
kube-system   service/traefik                   LoadBalancer   10.43.125.173   192.168.1.120   80:32467/TCP,443:30042/TCP    78m
kube-system      daemonset.apps/svclb-traefik             10        10        10      10           10          <none>                        78m
kube-system      deployment.apps/traefik                      1/1     1            1           78m
kube-system      deployment.apps/traefik-ingress-controller   1/1     1            1           40m
kube-system      replicaset.apps/traefik-758cd5fc85                      1         1         1       78m
kube-system      replicaset.apps/traefik-ingress-controller-78b4959fdf   1         1         1       40m
kube-system   job.batch/helm-install-traefik   1/1           20s        79m
kube-system   ingress.extensions/traefik-web-ui   <none>   traefik-ui.minikube                                      192.168.1.120   80      39m
jbl@pegasusio:~$ dockr ps -a
bash: dockr: command not found
jbl@pegasusio:~$ docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                            NAMES
e7b8375b0e87        rancher/k3d-proxy:v3.0.0-rc.6   "/bin/sh -c nginx-pr…"   About an hour ago   Up About an hour    80/tcp, 0.0.0.0:6550->6443/tcp   k3d-jblCluster-masterlb
4fede0a08938        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8090->80/tcp             k3d-jblCluster-worker-8
da789e45801f        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8091->80/tcp             k3d-jblCluster-worker-7
8b5cce81f90d        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8092->80/tcp             k3d-jblCluster-worker-6
d50d8ec7ba70        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8093->80/tcp             k3d-jblCluster-worker-5
f73faf6e6725        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8094->80/tcp             k3d-jblCluster-worker-4
e926b5d67b46        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8095->80/tcp             k3d-jblCluster-worker-3
c08a9bad0011        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8096->80/tcp             k3d-jblCluster-worker-2
2378444a14b9        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8097->80/tcp             k3d-jblCluster-worker-1
fe0f5412620e        rancher/k3s:v1.18.4-k3s1        "/bin/k3s agent"         About an hour ago   Up About an hour    0.0.0.0:8098->80/tcp             k3d-jblCluster-worker-0
7c9763ba8c63        rancher/k3s:v1.18.4-k3s1        "/bin/k3s server --t…"   About an hour ago   Up About an hour    0.0.0.0:8081->80/tcp             k3d-jblCluster-master-0
# ---
kubectl run gravitee-init-job --image=debian:buster-slim -i --tty --rm
mkdir -p test/compose.ping/

export CLUSTER_DOCKER_NETWORK="k3d-jblCluster"

cat <<EOF >test/compose.ping/docker-compose.yml
version: '3.5'

# network is already created by the k3d create cluster command, but we still
# have to mention it here, to attach to the same docker network
# 
networks:
  ${CLUSTER_DOCKER_NETWORK}:
    name: ${CLUSTER_DOCKER_NETWORK}

services:
  k3s_nettester:
    image: debian:buster-slim
    # command: apt-get update -y && apt-get install -y jq curl dnsutils wget iputils-ping && /bin/bash
    command: /bin/bash
    stdin_open: true
    tty: true
    # ports:
      # only expose https to outside world
      # - "443:443"   # SSL
      # - 8080:8080 #  traefik dashboard
    # volumes:
      # - "$PWD/some/where/to/config/example.toml:/etc/k3s_nettester/config.toml"
    networks:
      ${CLUSTER_DOCKER_NETWORK}:
        aliases:
          - k3s_nettester.pegasusio.io
    extra_hosts:
      - "pegasusio.io:192.168.1.35"
EOF
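
Side note: since this is compose file format 3.5, the pre-existing k3d network could also be declared external instead of re-stating its name; a minimal sketch (same network as above, external: true just tells compose not to create it):

networks:
  ${CLUSTER_DOCKER_NETWORK}:
    external: true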

docker-compose -f test/compose.ping/docker-compose.yml up -d
docker exec -it composeping_k3s_nettester_1 bash -c "apt-get update -y && apt-get install -y jq curl dnsutils wget"
docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 192.168.1.120"
docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 pegasusio.io"
docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 google.com"
docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 192.168.1.15"

docker exec -it composeping_k3s_nettester_1 bash
jbl@pegasusio:~$ docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 192.168.1.120"
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
From 192.168.1.35 icmp_seq=1 Destination Host Unreachable
From 192.168.1.35 icmp_seq=2 Destination Host Unreachable
From 192.168.1.35 icmp_seq=3 Destination Host Unreachable
From 192.168.1.35 icmp_seq=4 Destination Host Unreachable

--- 192.168.1.120 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 71ms
pipe 4
jbl@pegasusio:~$ docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 192.168.1.35"
PING 192.168.1.35 (192.168.1.35) 56(84) bytes of data.
64 bytes from 192.168.1.35: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 192.168.1.35: icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from 192.168.1.35: icmp_seq=3 ttl=64 time=0.069 ms
64 bytes from 192.168.1.35: icmp_seq=4 ttl=64 time=0.052 ms

--- 192.168.1.35 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 75ms
rtt min/avg/max/mdev = 0.052/0.065/0.073/0.007 ms
jbl@pegasusio:~$ docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 google.com"
PING google.com (216.58.209.238) 56(84) bytes of data.
64 bytes from par10s29-in-f238.1e100.net (216.58.209.238): icmp_seq=1 ttl=114 time=2.99 ms
64 bytes from par10s29-in-f238.1e100.net (216.58.209.238): icmp_seq=2 ttl=114 time=2.94 ms
64 bytes from par10s29-in-f238.1e100.net (216.58.209.238): icmp_seq=3 ttl=114 time=2.92 ms
64 bytes from par10s29-in-f238.1e100.net (216.58.209.238): icmp_seq=4 ttl=114 time=2.100 ms

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7ms
rtt min/avg/max/mdev = 2.918/2.959/2.995/0.074 ms
jbl@pegasusio:~$ docker exec -it composeping_k3s_nettester_1 bash -c "ping -c 4 k3d-jblCluster-worker-5"
PING k3d-jblCluster-worker-5 (172.24.0.8) 56(84) bytes of data.
64 bytes from k3d-jblCluster-worker-5.k3d-jblCluster (172.24.0.8): icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from k3d-jblCluster-worker-5.k3d-jblCluster (172.24.0.8): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from k3d-jblCluster-worker-5.k3d-jblCluster (172.24.0.8): icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from k3d-jblCluster-worker-5.k3d-jblCluster (172.24.0.8): icmp_seq=4 ttl=64 time=0.079 ms

--- k3d-jblCluster-worker-5 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 37ms
rtt min/avg/max/mdev = 0.072/0.113/0.228/0.066 ms
jbl@pegasusio:~$ 
Jean-Baptiste-Lasselle commented 4 years ago
jbl@pegasusio:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                         READY   STATUS              RESTARTS   AGE
kube-system   pod/local-path-provisioner-6d59f47c7-ndk49   1/1     Running             0          26s
kube-system   pod/metrics-server-7566d596c8-5cfj4          1/1     Running             0          26s
kube-system   pod/traefik-758cd5fc85-7vz49                 0/1     ContainerCreating   0          8s
kube-system   pod/svclb-traefik-2bvrx                      0/2     ContainerCreating   0          8s
kube-system   pod/svclb-traefik-l24hh                      0/2     ContainerCreating   0          8s
kube-system   pod/svclb-traefik-qp7p9                      0/2     ContainerCreating   0          8s
kube-system   pod/svclb-traefik-qd2pl                      0/2     ContainerCreating   0          8s
kube-system   pod/svclb-traefik-mx5jc                      0/2     ContainerCreating   0          8s
kube-system   pod/svclb-traefik-kwvl7                      0/2     ContainerCreating   0          8s
kube-system   pod/helm-install-traefik-h5l4g               0/1     Completed           0          26s
kube-system   pod/coredns-8655855d6-d9ffr                  1/1     Running             0          26s
kube-system   pod/svclb-traefik-phq4c                      2/2     Running             0          8s
kube-system   pod/svclb-traefik-zxg7n                      2/2     Running             0          8s
kube-system   pod/svclb-traefik-lt5jl                      2/2     Running             0          8s
kube-system   pod/svclb-traefik-846dn                      2/2     Running             0          8s

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>           443/TCP                      40s
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       38s
kube-system   service/metrics-server       ClusterIP      10.43.232.77    <none>           443/TCP                      38s
kube-system   service/traefik-prometheus   ClusterIP      10.43.27.135    <none>           9100/TCP                     8s
kube-system   service/traefik              LoadBalancer   10.43.242.171   192.168.128.10   80:30734/TCP,443:32481/TCP   8s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   10        10        4       10           4           <none>          8s

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           38s
kube-system   deployment.apps/metrics-server           1/1     1            1           38s
kube-system   deployment.apps/traefik                  0/1     1            0           8s
kube-system   deployment.apps/coredns                  1/1     1            1           38s

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-6d59f47c7   1         1         1       26s
kube-system   replicaset.apps/metrics-server-7566d596c8          1         1         1       26s
kube-system   replicaset.apps/traefik-758cd5fc85                 1         1         0       8s
kube-system   replicaset.apps/coredns-8655855d6                  1         1         1       26s

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           20s        37s
jbl@pegasusio:~$ ping -c 4 192.168.128.10
PING 192.168.128.10 (192.168.128.10) 56(84) bytes of data.
64 bytes from 192.168.128.10: icmp_seq=1 ttl=64 time=0.085 ms
64 bytes from 192.168.128.10: icmp_seq=2 ttl=64 time=0.086 ms
64 bytes from 192.168.128.10: icmp_seq=3 ttl=64 time=0.069 ms
64 bytes from 192.168.128.10: icmp_seq=4 ttl=64 time=0.085 ms

--- 192.168.128.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3071ms
rtt min/avg/max/mdev = 0.069/0.081/0.086/0.009 ms
jbl@pegasusio:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                         READY   STATUS      RESTARTS   AGE
kube-system   pod/local-path-provisioner-6d59f47c7-ndk49   1/1     Running     0          100s
kube-system   pod/metrics-server-7566d596c8-5cfj4          1/1     Running     0          100s
kube-system   pod/helm-install-traefik-h5l4g               0/1     Completed   0          100s
kube-system   pod/coredns-8655855d6-d9ffr                  1/1     Running     0          100s
kube-system   pod/svclb-traefik-phq4c                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-zxg7n                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-lt5jl                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-846dn                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-2bvrx                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-kwvl7                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-mx5jc                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-l24hh                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-qd2pl                      2/2     Running     0          82s
kube-system   pod/svclb-traefik-qp7p9                      2/2     Running     0          82s
kube-system   pod/traefik-758cd5fc85-7vz49                 1/1     Running     0          82s

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>           443/TCP                      114s
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       112s
kube-system   service/metrics-server       ClusterIP      10.43.232.77    <none>           443/TCP                      112s
kube-system   service/traefik-prometheus   ClusterIP      10.43.27.135    <none>           9100/TCP                     82s
kube-system   service/traefik              LoadBalancer   10.43.242.171   192.168.128.11   80:30734/TCP,443:32481/TCP   82s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   10        10        10      10           10          <none>          82s

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           112s
kube-system   deployment.apps/metrics-server           1/1     1            1           112s
kube-system   deployment.apps/coredns                  1/1     1            1           112s
kube-system   deployment.apps/traefik                  1/1     1            1           82s

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-6d59f47c7   1         1         1       100s
kube-system   replicaset.apps/metrics-server-7566d596c8          1         1         1       100s
kube-system   replicaset.apps/coredns-8655855d6                  1         1         1       100s
kube-system   replicaset.apps/traefik-758cd5fc85                 1         1         1       82s

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           20s        111s
jbl@pegasusio:~$ ip addr | grep 168
    inet 192.168.1.35/24 brd 192.168.1.255 scope global dynamic enp0s8
    inet 192.168.128.1/20 brd 192.168.143.255 scope global br-bcaffb501abb
jbl@pegasusio:~$ # sudo iptables -P FORWARD ACCEPT
jbl@pegasusio:~$ ping -c 4 192.168.128.10
PING 192.168.128.10 (192.168.128.10) 56(84) bytes of data.
64 bytes from 192.168.128.10: icmp_seq=1 ttl=64 time=0.156 ms
64 bytes from 192.168.128.10: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 192.168.128.10: icmp_seq=3 ttl=64 time=0.083 ms
64 bytes from 192.168.128.10: icmp_seq=4 ttl=64 time=0.081 ms

--- 192.168.128.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3068ms
rtt min/avg/max/mdev = 0.068/0.097/0.156/0.034 ms
jbl@pegasusio:~$ ping -c 4 192.168.128.11
PING 192.168.128.11 (192.168.128.11) 56(84) bytes of data.
64 bytes from 192.168.128.11: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from 192.168.128.11: icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from 192.168.128.11: icmp_seq=3 ttl=64 time=0.058 ms
64 bytes from 192.168.128.11: icmp_seq=4 ttl=64 time=0.078 ms

--- 192.168.128.11 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3076ms
rtt min/avg/max/mdev = 0.058/0.094/0.153/0.035 ms
jbl@pegasusio:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                         READY   STATUS      RESTARTS   AGE
kube-system   pod/local-path-provisioner-6d59f47c7-ndk49   1/1     Running     0          7m18s
kube-system   pod/metrics-server-7566d596c8-5cfj4          1/1     Running     0          7m18s
kube-system   pod/helm-install-traefik-h5l4g               0/1     Completed   0          7m18s
kube-system   pod/coredns-8655855d6-d9ffr                  1/1     Running     0          7m18s
kube-system   pod/svclb-traefik-phq4c                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-zxg7n                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-lt5jl                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-846dn                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-2bvrx                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-kwvl7                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-mx5jc                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-l24hh                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-qd2pl                      2/2     Running     0          7m
kube-system   pod/svclb-traefik-qp7p9                      2/2     Running     0          7m
kube-system   pod/traefik-758cd5fc85-7vz49                 1/1     Running     0          7m

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>           443/TCP                      7m32s
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       7m30s
kube-system   service/metrics-server       ClusterIP      10.43.232.77    <none>           443/TCP                      7m30s
kube-system   service/traefik-prometheus   ClusterIP      10.43.27.135    <none>           9100/TCP                     7m
kube-system   service/traefik              LoadBalancer   10.43.242.171   192.168.128.11   80:30734/TCP,443:32481/TCP   7m

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   10        10        10      10           10          <none>          7m

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           7m30s
kube-system   deployment.apps/metrics-server           1/1     1            1           7m30s
kube-system   deployment.apps/coredns                  1/1     1            1           7m30s
kube-system   deployment.apps/traefik                  1/1     1            1           7m

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-6d59f47c7   1         1         1       7m18s
kube-system   replicaset.apps/metrics-server-7566d596c8          1         1         1       7m18s
kube-system   replicaset.apps/coredns-8655855d6                  1         1         1       7m18s
kube-system   replicaset.apps/traefik-758cd5fc85                 1         1         1       7m

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           20s        7m29s
jbl@pegasusio:~$ ping -c 4 192.168.128.11
PING 192.168.128.11 (192.168.128.11) 56(84) bytes of data.
64 bytes from 192.168.128.11: icmp_seq=1 ttl=64 time=0.173 ms
64 bytes from 192.168.128.11: icmp_seq=2 ttl=64 time=0.078 ms
64 bytes from 192.168.128.11: icmp_seq=3 ttl=64 time=0.092 ms
64 bytes from 192.168.128.11: icmp_seq=4 ttl=64 time=0.070 ms

--- 192.168.128.11 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3062ms
rtt min/avg/max/mdev = 0.070/0.103/0.173/0.041 ms
jbl@pegasusio:~$ ping -c 4 192.168.128.10
PING 192.168.128.10 (192.168.128.10) 56(84) bytes of data.
64 bytes from 192.168.128.10: icmp_seq=1 ttl=64 time=0.144 ms
64 bytes from 192.168.128.10: icmp_seq=2 ttl=64 time=0.056 ms
64 bytes from 192.168.128.10: icmp_seq=3 ttl=64 time=0.116 ms
64 bytes from 192.168.128.10: icmp_seq=4 ttl=64 time=0.084 ms

--- 192.168.128.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3070ms
rtt min/avg/max/mdev = 0.056/0.100/0.144/0.033 ms
jbl@pegasusio:~$ ping -c 4 192.168.128.10
PING 192.168.128.10 (192.168.128.10) 56(84) bytes of data.
64 bytes from 192.168.128.10: icmp_seq=1 ttl=64 time=0.147 ms
64 bytes from 192.168.128.10: icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from 192.168.128.10: icmp_seq=3 ttl=64 time=0.084 ms
^C
--- 192.168.128.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2034ms
rtt min/avg/max/mdev = 0.075/0.102/0.147/0.032 ms
# -- just the traefik cheese apps; traefik itself is already there
git clone https://github.com/containous/traefik cheesie/ && cd cheesie/ && git checkout v1.7 && cd ../

# kubectl apply -f cheesie/examples/k8s/traefik-deployment.yaml
# kubectl apply -f cheesie/examples/k8s/traefik-rbac.yaml 
# kubectl apply -f cheesie/examples/k8s/ui.yaml 

# And a simple app deployment to kubernetes there

sed -i "s#extensions/v1beta1#apps/v1#g" cheesie/examples/k8s/cheese-deployments.yaml

kubectl create namespace cheese 

ls cheesie/examples/k8s/cheese-*.yaml > cheese.deploy.list

cat cheese.deploy.list |  while IFS=" " read manifest; do kubectl apply -f "$manifest"; done
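
By the way, the intermediate list file is not strictly needed; a plain glob loop over the same manifests would do the same job, sketched here:

for manifest in cheesie/examples/k8s/cheese-*.yaml; do
  kubectl apply -f "$manifest"   # apply each cheese manifest in turn
done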
jbl@pegasusio:~$ curl -vvvik https://cheddar.minikube:443/
*   Trying 192.168.128.10...
* TCP_NODELAY set
* Connected to cheddar.minikube (192.168.128.10) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=Colorado; L=Boulder; O=ExampleCorp; OU=IT; CN=*.example.com; emailAddress=admin@example.com
*  start date: Oct 24 21:09:52 2016 GMT
*  expire date: Oct 24 21:09:52 2017 GMT
*  issuer: C=US; ST=Colorado; L=Boulder; O=ExampleCorp; OU=IT; CN=*.example.com; emailAddress=admin@example.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55748e82cea0)
> GET / HTTP/1.1
> Host: cheddar.minikube
> User-Agent: curl/7.52.1
> Accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200 
HTTP/2 200 
< accept-ranges: bytes
accept-ranges: bytes
< content-type: text/html
content-type: text/html
< date: Thu, 16 Jul 2020 17:28:38 GMT
date: Thu, 16 Jul 2020 17:28:38 GMT
< etag: "5784f6e1-205"
etag: "5784f6e1-205"
< last-modified: Tue, 12 Jul 2016 13:55:45 GMT
last-modified: Tue, 12 Jul 2016 13:55:45 GMT
< server: nginx/1.11.1
server: nginx/1.11.1
< vary: Accept-Encoding
vary: Accept-Encoding
< content-length: 517
content-length: 517

< 
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host cheddar.minikube left intact
jbl@pegasusio:~$ curl -vvvi http://cheddar.minikube:80/
*   Trying 192.168.128.10...
* TCP_NODELAY set
* Connected to cheddar.minikube (192.168.128.10) port 80 (#0)
> GET / HTTP/1.1
> Host: cheddar.minikube
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Accept-Ranges: bytes
Accept-Ranges: bytes
< Content-Length: 517
Content-Length: 517
< Content-Type: text/html
Content-Type: text/html
< Date: Thu, 16 Jul 2020 17:28:44 GMT
Date: Thu, 16 Jul 2020 17:28:44 GMT
< Etag: "5784f6e1-205"
Etag: "5784f6e1-205"
< Last-Modified: Tue, 12 Jul 2016 13:55:45 GMT
Last-Modified: Tue, 12 Jul 2016 13:55:45 GMT
< Server: nginx/1.11.1
Server: nginx/1.11.1
< Vary: Accept-Encoding
Vary: Accept-Encoding

< 
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host cheddar.minikube left intact
jbl@pegasusio:~$ curl -i http://192.168.128.10/
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 517
Content-Type: text/html
Date: Thu, 16 Jul 2020 17:23:54 GMT
Etag: "5784f6c9-205"
Last-Modified: Tue, 12 Jul 2016 13:55:21 GMT
Server: nginx/1.11.1
Vary: Accept-Encoding

<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ curl -ik https://192.168.128.10:443/
HTTP/2 200 
accept-ranges: bytes
content-type: text/html
date: Thu, 16 Jul 2020 17:23:57 GMT
etag: "5784f6c9-205"
last-modified: Tue, 12 Jul 2016 13:55:21 GMT
server: nginx/1.11.1
vary: Accept-Encoding
content-length: 517

<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ cat /etc/hosts
127.0.0.1   localhost
127.0.1.1   pegasusio.io    pegasusio

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# test deployments

# 192.168.1.35 cheddar.minikube
192.168.128.10 cheddar.minikube 

jbl@pegasusio:~$ kubectl get all --all-namespaces
NAMESPACE     NAME                                         READY   STATUS      RESTARTS   AGE
kube-system   pod/local-path-provisioner-6d59f47c7-ndk49   1/1     Running     0          56m
kube-system   pod/metrics-server-7566d596c8-5cfj4          1/1     Running     0          56m
kube-system   pod/helm-install-traefik-h5l4g               0/1     Completed   0          56m
kube-system   pod/coredns-8655855d6-d9ffr                  1/1     Running     0          56m
kube-system   pod/svclb-traefik-phq4c                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-zxg7n                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-lt5jl                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-846dn                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-2bvrx                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-kwvl7                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-mx5jc                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-l24hh                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-qd2pl                      2/2     Running     0          56m
kube-system   pod/svclb-traefik-qp7p9                      2/2     Running     0          56m
kube-system   pod/traefik-758cd5fc85-7vz49                 1/1     Running     0          56m
default       pod/wensleydale-59845bc76d-5v57b             1/1     Running     0          3m13s
default       pod/cheddar-ff749fb44-6sm2s                  1/1     Running     0          3m13s
default       pod/stilton-597766648d-rmh8h                 1/1     Running     0          3m13s
default       pod/cheddar-ff749fb44-q7jct                  1/1     Running     0          3m13s
default       pod/wensleydale-59845bc76d-6m5h9             1/1     Running     0          3m13s
default       pod/stilton-597766648d-dwjjw                 1/1     Running     0          3m13s

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>           443/TCP                      56m
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       56m
kube-system   service/metrics-server       ClusterIP      10.43.232.77    <none>           443/TCP                      56m
kube-system   service/traefik-prometheus   ClusterIP      10.43.27.135    <none>           9100/TCP                     56m
kube-system   service/traefik              LoadBalancer   10.43.242.171   192.168.128.11   80:30734/TCP,443:32481/TCP   56m
default       service/stilton              ClusterIP      10.43.255.234   <none>           80/TCP                       3m13s
default       service/cheddar              ClusterIP      10.43.66.195    <none>           80/TCP                       3m12s
default       service/wensleydale          ClusterIP      10.43.156.205   <none>           80/TCP                       3m12s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   10        10        10      10           10          <none>          56m

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           56m
kube-system   deployment.apps/metrics-server           1/1     1            1           56m
kube-system   deployment.apps/coredns                  1/1     1            1           56m
kube-system   deployment.apps/traefik                  1/1     1            1           56m
default       deployment.apps/cheddar                  2/2     2            2           3m13s
default       deployment.apps/wensleydale              2/2     2            2           3m13s
default       deployment.apps/stilton                  2/2     2            2           3m13s

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-6d59f47c7   1         1         1       56m
kube-system   replicaset.apps/metrics-server-7566d596c8          1         1         1       56m
kube-system   replicaset.apps/coredns-8655855d6                  1         1         1       56m
kube-system   replicaset.apps/traefik-758cd5fc85                 1         1         1       56m
default       replicaset.apps/cheddar-ff749fb44                  2         2         2       3m13s
default       replicaset.apps/wensleydale-59845bc76d             2         2         2       3m13s
default       replicaset.apps/stilton-597766648d                 2         2         2       3m13s

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           20s        56m
jbl@pegasusio:~$ 
Jean-Baptiste-Lasselle commented 4 years ago

ok, so now I also tested that after I installed flannel and metallb, the LoadBalancer Service type gets its IP addresses from metallb, and those IP addresses are not pingable like the previously created ones
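
That would be consistent with metallb's layer2 mode as I understand it: a speaker pod only answers ARP for the external IP, and kube-proxy only intercepts the service ports, so ICMP to the address can go unanswered while HTTP on port 80 still works; a quick sanity check (address taken from the traefik service logs below):

ping -c 2 172.18.0.120        # may fail: no interface actually owns the VIP
curl -I http://172.18.0.120/  # should still be answered by traefik behind the VIP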

Jean-Baptiste-Lasselle commented 4 years ago

Default k3s Load Balancer and metallb

Quoting the rancher docs https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer :

K3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available, the load balancer will stay in Pending.

To disable the embedded load balancer, run the server with the --disable servicelb option. This is necessary if you wish to run a different load balancer, such as MetalLB.

Ohhhh so okay, I have to disable the default k3s load balancer so that MetalLB works ^^

And I learned something else, really important: among the docker containers created for my k3d cluster, I had one called masterlb, which simply is that default load balancer (and then I can try again metallb with k3s)
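
To look at that proxy from the docker side (container name taken from the docker ps output earlier in this thread):

docker ps --filter "name=masterlb"   # the load balancer container k3d created
docker port k3d-jblCluster-masterlb  # its published ports, e.g. 0.0.0.0:6550->6443/tcp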

finnnneeeee :D

something else:

http://docs.openvswitch.org/en/latest/faq/configuration/ :

Jean-Baptiste-Lasselle commented 4 years ago
k3d create cluster jblCluster3 --k3s-server-arg "\--disable servicelb"  --k3s-server-arg "\--service-cidr 192.168.1.0/24" --k3s-server-arg "\--cluster-cidr 192.169.12.0/24" --api-port 6553 --masters 1 --workers 9
jbl@pegasusio:~$ k3d create cluster jblCluster2 --k3s-server-arg "\--disable servicelb" --api-port 6555 --masters 1 --workers 9
INFO[0000] Created network 'k3d-jblCluster2'            
INFO[0000] Created volume 'k3d-jblCluster2-images'      
INFO[0001] Creating node 'k3d-jblCluster2-master-0'     
INFO[0001] Creating node 'k3d-jblCluster2-worker-0'     
INFO[0002] Creating node 'k3d-jblCluster2-worker-1'     
INFO[0003] Creating node 'k3d-jblCluster2-worker-2'     
INFO[0003] Creating node 'k3d-jblCluster2-worker-3'     
INFO[0004] Creating node 'k3d-jblCluster2-worker-4'     
INFO[0005] Creating node 'k3d-jblCluster2-worker-5'     
INFO[0006] Creating node 'k3d-jblCluster2-worker-6'     
INFO[0006] Creating node 'k3d-jblCluster2-worker-7'     
INFO[0007] Creating node 'k3d-jblCluster2-worker-8'     
INFO[0008] Creating LoadBalancer 'k3d-jblCluster2-masterlb' 
INFO[0012] Cluster 'jblCluster2' created successfully!  
INFO[0012] You can now use it like this:                
export KUBECONFIG=$(k3d get kubeconfig jblCluster2)
kubectl cluster-info
jbl@pegasusio:~$ rm $KUBECONFIG 
jbl@pegasusio:~$ export KUBECONFIG=$(k3d get kubeconfig jblCluster2)
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.35.85    <none>        9100/TCP                     0s
traefik              LoadBalancer   10.43.5.43     <pending>     80:30986/TCP,443:32267/TCP   0s
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.35.85    <none>        9100/TCP                     2s
traefik              LoadBalancer   10.43.5.43     <pending>     80:30986/TCP,443:32267/TCP   2s
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.35.85    <none>        9100/TCP                     3s
traefik              LoadBalancer   10.43.5.43     <pending>     80:30986/TCP,443:32267/TCP   3s
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.35.85    <none>        9100/TCP                     22s
traefik              LoadBalancer   10.43.5.43     172.18.0.2    80:30986/TCP,443:32267/TCP   22s
jbl@pegasusio:~$ kubectl apply -f ./k3s/flannel/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
jbl@pegasusio:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
jbl@pegasusio:~$ kubectl apply -f ./k3s/metallb/metallb.yaml
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created
jbl@pegasusio:~$ kubectl apply -f ./k3s/metallb/metallb.configmap.yaml
configmap/config created
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.35.85    <none>        9100/TCP                     2m36s
traefik              LoadBalancer   10.43.5.43     172.18.0.4    80:30986/TCP,443:32267/TCP   2m36s
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.35.85    <none>         9100/TCP                     2m58s
traefik              LoadBalancer   10.43.5.43     172.18.0.120   80:30986/TCP,443:32267/TCP   2m58s
jbl@pegasusio:~$ ping -c 4 172.18.0.120
PING 172.18.0.120 (172.18.0.120) 56(84) bytes of data.
From 172.18.0.8: icmp_seq=2 Redirect Host(New nexthop: 172.18.0.120)
From 172.18.0.8: icmp_seq=3 Redirect Host(New nexthop: 172.18.0.120)
From 172.18.0.8: icmp_seq=4 Redirect Host(New nexthop: 172.18.0.120)

--- 172.18.0.120 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3029ms

jbl@pegasusio:~$ ping -c 4 172.18.0.4
PING 172.18.0.4 (172.18.0.4) 56(84) bytes of data.
64 bytes from 172.18.0.4: icmp_seq=1 ttl=64 time=0.083 ms
64 bytes from 172.18.0.4: icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from 172.18.0.4: icmp_seq=3 ttl=64 time=0.079 ms
64 bytes from 172.18.0.4: icmp_seq=4 ttl=64 time=0.055 ms

--- 172.18.0.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3078ms
rtt min/avg/max/mdev = 0.055/0.070/0.083/0.015 ms
jbl@pegasusio:~$ ping -c 4 172.18.0.120
PING 172.18.0.120 (172.18.0.120) 56(84) bytes of data.
From 172.18.0.8: icmp_seq=2 Redirect Host(New nexthop: 172.18.0.120)
From 172.18.0.8: icmp_seq=3 Redirect Host(New nexthop: 172.18.0.120)
From 172.18.0.8: icmp_seq=4 Redirect Host(New nexthop: 172.18.0.120)

--- 172.18.0.120 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3019ms

jbl@pegasusio:~$ curl http://172.18.0.120
404 page not found
jbl@pegasusio:~$ curl http://172.18.0.120/
404 page not found
jbl@pegasusio:~$ kubectl create namespace cheese 
namespace/cheese created
jbl@pegasusio:~$ 
jbl@pegasusio:~$ ls cheesie/examples/k8s/cheese-*.yaml > cheese.deploy.list
jbl@pegasusio:~$ 
jbl@pegasusio:~$ cat cheese.deploy.list |  while IFS=" " read manifest; do kubectl apply -f "$manifest"; done
ingress.extensions/cheese-default created
deployment.apps/stilton created
deployment.apps/cheddar created
deployment.apps/wensleydale created
ingress.extensions/cheese created
service/stilton created
service/cheddar created
service/wensleydale created
jbl@pegasusio:~$ curl http://172.18.0.120/
404 page not found
jbl@pegasusio:~$ curl http://172.18.0.120/
curl: (7) Failed to connect to 172.18.0.120 port 80: No route to host
jbl@pegasusio:~$ curl http://172.18.0.120/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ 
Jean-Baptiste-Lasselle commented 4 years ago
jbl@pegasusio:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:57:b9:af brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:95:b8:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.34/24 brd 192.168.1.255 scope global dynamic enp0s8
       valid_lft 80927sec preferred_lft 80927sec
    inet6 2a01:cb04:49a:9500:a99a:6e78:eed0:6bc3/64 scope global temporary dynamic 
       valid_lft 1765sec preferred_lft 565sec
    inet6 2a01:cb04:49a:9500:a00:27ff:fe95:b835/64 scope global mngtmpaddr noprefixroute dynamic 
       valid_lft 1765sec preferred_lft 565sec
    inet6 fe80::a00:27ff:fe95:b835/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:d9:6e:b1:a1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
74: br-40a10d04658b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ed:c6:e1:6a brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.1/16 brd 172.20.255.255 scope global br-40a10d04658b
       valid_lft forever preferred_lft forever
    inet6 fe80::42:edff:fec6:e16a/64 scope link 
       valid_lft forever preferred_lft forever
jbl@pegasusio:~$ rm $KUBECONFIG 
jbl@pegasusio:~$ export KUBECONFIG=$(k3d get kubeconfig jblCluster3)
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.2.38     <none>        9100/TCP                     25m
traefik              LoadBalancer   10.43.10.192   172.20.0.3    80:31374/TCP,443:32260/TCP   25m
jbl@pegasusio:~$ # So at http://172.18.0.120/ : the previous cluster [jblCluster2] still 
jbl@pegasusio:~$ # advertises its own IP for its traefik loadbalancer service
jbl@pegasusio:~$ curl http://172.18.0.120/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
jbl@pegasusio:~$ curl http://172.20.0.3/
404 page not found
jbl@pegasusio:~$ kubectl get ns
NAME              STATUS   AGE
default           Active   40m
kube-system       Active   40m
kube-public       Active   40m
kube-node-lease   Active   40m
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.2.38     <none>        9100/TCP                     29m
traefik              LoadBalancer   10.43.10.192   172.20.0.4    80:31374/TCP,443:32260/TCP   29m
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.2.38     <none>        9100/TCP                     30m
traefik              LoadBalancer   10.43.10.192   172.20.0.4    80:31374/TCP,443:32260/TCP   30m
jbl@pegasusio:~$ # sed -i "s#.18#.20#g" ./k3s/metallb/metallb.configmap.yaml
jbl@pegasusio:~$ cat ./k3s/metallb/metallb.configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.0.120-172.18.0.172 # Change the range here

jbl@pegasusio:~$ # sed -i "s#172.18#172.20#g" ./k3s/metallb/metallb.configmap.yaml
jbl@pegasusio:~$ sed -i "s#172.18#172.20#g" ./k3s/metallb/metallb.configmap.yaml
jbl@pegasusio:~$ cat ./k3s/metallb/metallb.configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.0.120-172.20.0.172 # Change the range here

jbl@pegasusio:~$ kubectl apply -f ./k3s/metallb/metallb.configmap.yaml
Error from server (NotFound): error when creating "./k3s/metallb/metallb.configmap.yaml": namespaces "metallb-system" not found
jbl@pegasusio:~$ kubectl create ns metallb-system
namespace/metallb-system created
jbl@pegasusio:~$ kubectl apply -f ./k3s/metallb/metallb.configmap.yaml
configmap/config created
jbl@pegasusio:~$ kubectl apply -f ./k3s/metallb/metallb.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/metallb-system configured
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created
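
Note the ordering lesson from the NotFound error above: metallb.yaml itself creates the metallb-system namespace, so applying it before the ConfigMap removes the need to create the namespace by hand; with the same files:

kubectl apply -f ./k3s/metallb/metallb.yaml            # creates namespace/metallb-system, RBAC, speaker, controller
kubectl apply -f ./k3s/metallb/metallb.configmap.yaml  # then the address-pool config lands in the existing namespace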
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.2.38     <none>        9100/TCP                     31m
traefik              LoadBalancer   10.43.10.192   172.20.0.4    80:31374/TCP,443:32260/TCP   31m
jbl@pegasusio:~$ kubectl get svc -n kube-system | grep traefik
traefik-prometheus   ClusterIP      10.43.2.38     <none>         9100/TCP                     31m
traefik              LoadBalancer   10.43.10.192   172.20.0.120   80:31374/TCP,443:32260/TCP   31m
jbl@pegasusio:~$ curl http://172.20.0.120/
404 page not found
jbl@pegasusio:~$ kubectl create namespace cheese 
namespace/cheese created
jbl@pegasusio:~$ 
jbl@pegasusio:~$ ls cheesie/examples/k8s/cheese-*.yaml > cheese.deploy.list
jbl@pegasusio:~$ 
jbl@pegasusio:~$ cat cheese.deploy.list |  while IFS=" " read manifest; do kubectl apply -f "$manifest"; done
ingress.extensions/cheese-default created
deployment.apps/stilton created
deployment.apps/cheddar created
deployment.apps/wensleydale created
ingress.extensions/cheese created
service/stilton created
service/cheddar created
service/wensleydale created
jbl@pegasusio:~$ curl http://172.20.0.120/
404 page not found
jbl@pegasusio:~$ curl http://172.20.0.120/
curl: (7) Failed to connect to 172.20.0.120 port 80: No route to host
jbl@pegasusio:~$ curl http://172.20.0.120/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.png) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 3em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Stilton</h1>
  </body>
</html>
Jean-Baptiste-Lasselle commented 4 years ago

The Load Balancer node filter in k3d

see https://k3d.io/usage/guides/exposing_services/#1-via-ingress : we can expose one of the ports of the k3s load balancer through docker:

k3d cluster create --api-port 6550 -p 8081:80@loadbalancer --agents 2
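
With such a mapping, a request can be steered by Host header alone, without touching /etc/hosts (a sketch assuming the 8081:80@loadbalancer mapping above and the cheese Ingress deployed):

curl -H "Host: cheddar.minikube" http://localhost:8081/   # traefik routes on the Host header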
jbl@pegasusio:~$ curl http://172.18.0.7/
404 page not found
jbl@pegasusio:~$ curl http://localhost:8989/
404 page not found
jbl@pegasusio:~$ 
k3d create cluster jblCluster --api-port 6550  -p 8989:80@loadbalancer --masters 1 --workers 9

export KUBECONFIG=$(k3d get kubeconfig jblCluster)
kubectl get all,nodes

git clone https://github.com/containous/traefik cheesie/ && cd cheesie/ && git checkout v1.7 && cd ../

kubectl apply -f cheesie/examples/k8s/traefik-deployment.yaml
kubectl apply -f cheesie/examples/k8s/traefik-rbac.yaml 
kubectl apply -f cheesie/examples/k8s/ui.yaml 

# And a simple app deployment to kubernetes there

sed -i "s#extensions/v1beta1#apps/v1#g" cheesie/examples/k8s/cheese-deployments.yaml
kubectl create namespace cheese 

ls cheesie/examples/k8s/cheese-*.yaml > cheese.deploy.list

cat cheese.deploy.list |  while IFS=" " read manifest; do kubectl apply -f "$manifest"; done
jbl@pegasusio:~$ curl http://wensleydale.minikube:8989/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 6em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Wensleydale</h1>
  </body>
</html>
jbl@pegasusio:~$ curl http://cheddar.minikube:8989/
<html>
  <head>
    <style>
      html { 
        background: url(./bg.jpg) no-repeat center center fixed; 
        -webkit-background-size: cover;
        -moz-background-size: cover;
        -o-background-size: cover;
        background-size: cover;
      }

      h1 {
        font-family: Arial, Helvetica, sans-serif;
        background: rgba(187, 187, 187, 0.5);
        width: 4em;
        padding: 0.5em 1em;
        margin: 1em;
      }
    </style>
  </head>
  <body>
    <h1>Cheddar</h1>
  </body>
</html>
jbl@pegasusio:~$ ping -c 4 wensleydale.minikube
PING wensleydale.minikube (192.168.1.34) 56(84) bytes of data.
64 bytes from wensleydale.minikube (192.168.1.34): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from wensleydale.minikube (192.168.1.34): icmp_seq=2 ttl=64 time=0.053 ms
64 bytes from wensleydale.minikube (192.168.1.34): icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from wensleydale.minikube (192.168.1.34): icmp_seq=4 ttl=64 time=0.052 ms

--- wensleydale.minikube ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3066ms
rtt min/avg/max/mdev = 0.038/0.053/0.070/0.012 ms
jbl@pegasusio:~$ ip addr|grep 168
    inet 192.168.1.34/24 brd 192.168.1.255 scope global dynamic enp0s8
jbl@pegasusio:~$