Closed Jean-Baptiste-Lasselle closed 4 years ago
You mention that you use the kubectl proxy to access the traefik dashboard. The dashboard deployment should have assigned an IP to your host, which is traefik.pegasusio.io
via Metallb and DHCP. Are you editing your hosts file to assign the IPs, or using something else to make sure the DNS queries resolve?
Edit: I see the ping replies but I wonder why you use the proxy to access traefik :)
Edit2: Everything resolves to the same IP, 192.168.1.21, which is wrong. If you look at the Kubernetes dashboard, you should be able to find the IPs metallb assigned to your applications.
oh hi @Thoorium, I just finished writing the issue; I'll read and answer you now
@Thoorium
oh, and yes, absolutely, I edited my /etc/hosts (note the 192.168.1.21, the same address you see when I try to ping cheeses.pegasusio.io):
$ cat /etc/hosts|grep pegasusio.io
192.168.1.21 dashboard.pegasusio.io pegasusio.io
192.168.1.21 oci-registry.pegasusio.io pegasusio.io
192.168.1.21 portus.pegasusio.io pegasusio.io
192.168.1.21 notary.pegasusio.io pegasusio.io
192.168.1.21 pegasusio.io
192.168.1.21 minikube.pegasusio.io pegasusio.io
192.168.1.21 minikube.pegasusio.io
192.168.1.21 traefik.pegasusio.io
192.168.1.21 gravitee-am.pegasusio.io
192.168.1.21 gravitee-apim.pegasusio.io
192.168.1.21 gravitee-am-ui.pegasusio.io
192.168.1.21 gravitee-apim-ui.pegasusio.io
192.168.1.21 stilton.pegasusio.io
192.168.1.21 cheddar.pegasusio.io
192.168.1.21 wensleydale.pegasusio.io
192.168.1.21 cheeses.pegasusio.io
192.168.1.21 minikube.pegasusio.io
Oh I see now. My setup relies on metallb assigning IPs to the deployments (i.e. traefik), which is kind of the opposite of what you're doing, since you're working with only the host's IP. I can't test this right now, but I think that modifying the Service type in traefik-deployment.yaml from LoadBalancer to ClusterIP should expose traefik via the host's IP.
kind: Service
apiVersion: v1
metadata:
  name: traefik
  namespace: traefik
  annotations: {}
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: http
    #- protocol: TCP
    #  port: 443
    #  name: https
  type: ClusterIP # <-- Here
Modify the file, apply the deployment and look in the dashboard if the assigned IP is the one from your host.
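If ClusterIP alone does not make Traefik reachable from outside the cluster, a NodePort Service is another single-host option worth trying. This is only a sketch under assumptions: it reuses the same selector as the Service above, and 30080 is an arbitrary choice from the default NodePort range (30000-32767).

```yaml
# Sketch: expose Traefik on every node's IP at a fixed high port.
# The nodePort value 30080 is an assumption; it must be free and
# inside the cluster's NodePort range.
kind: Service
apiVersion: v1
metadata:
  name: traefik
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      nodePort: 30080
      name: http
  type: NodePort
```

With this in place, Traefik would answer on http://&lt;host-ip&gt;:30080 rather than on port 80 directly.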
@Thoorium oh my gosh, thank you so much for your help, I'll try that and tell you tomorrow. Thank you so much again! ttyt
@Thoorium Note: I will gladly take your advice on what I should read to get much stronger on Kubernetes networking, once finished with this issue.
Ok, I just tried before sleep, with all three possibilities:
---
kind: Service
apiVersion: v1
metadata:
  name: traefik
  namespace: traefik
  annotations: {}
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: http
    #- protocol: TCP
    #  port: 443
    #  name: https
  # type: LoadBalancer
  # type: ClusterIP
  type: NodePort
- ClusterIP: I was puzzled, but no, that is just about internal access inside the Kubernetes network.
- LoadBalancer: that is what I want, to get my traefik ingress on 80, and the dashboard accessible through this ingress. There I saw that I have a problem: I should get an ExternalIP, but it just remains in the pending state forever:
$ kubectl get service/traefik -n traefik -w
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
traefik   LoadBalancer   10.108.216.198   <pending>     80:31339/TCP   57s
On pegasusio.io, I ran this to launch a web app:
jbl@pegasusio:~$ git clone https://github.com/scotty-c/docker-demo-webapp
Cloning into 'docker-demo-webapp'...
remote: Enumerating objects: 21, done.
remote: Total 21 (delta 0), reused 0 (delta 0), pack-reused 21
Unpacking objects: 100% (21/21), done.
jbl@pegasusio:~$ cd docker-demo-webapp/
jbl@pegasusio:~/docker-demo-webapp$ docker build -t scottyc/webapp .
Sending build context to Docker daemon 82.94kB
Step 1/11 : FROM golang:1.11.2-alpine3.8 as build
1.11.2-alpine3.8: Pulling from library/golang
4fe2ade4980c: Pull complete
2e793f0ebe8a: Pull complete
77995fba1918: Pull complete
cacfaec3bb6b: Pull complete
885a921d7cd2: Pull complete
Digest: sha256:692eff58ac23cafc7cb099793feb00406146d187cd3ba0226809317952a9cf37
Status: Downloaded newer image for golang:1.11.2-alpine3.8
---> 57915f96905a
Step 2/11 : WORKDIR /go/src/github.com/scottyc/webapp
---> Running in 3443eb606f3d
Removing intermediate container 3443eb606f3d
---> f6e3e8718059
Step 3/11 : COPY web.go web.go
---> b6b57b30fae2
Step 4/11 : RUN CGO_ENABLED=0 GOOS=linux go build -o ./bin/webapp github.com/scottyc/webapp
---> Running in bdd0b27495aa
Removing intermediate container bdd0b27495aa
---> 2fc1b3a15be6
Step 5/11 : FROM alpine:3.8
3.8: Pulling from library/alpine
486039affc0a: Pull complete
Digest: sha256:2bb501e6173d9d006e56de5bce2720eb06396803300fe1687b58a7ff32bf4c14
Status: Downloaded newer image for alpine:3.8
---> c8bccc0af957
Step 6/11 : RUN apk add --update vim && rm -rf /var/cache/apk/* && mkdir -p /web/static/
---> Running in 2ec64683445a
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/5) Installing lua5.3-libs (5.3.5-r2)
(2/5) Installing ncurses-terminfo-base (6.1_p20180818-r1)
(3/5) Installing ncurses-terminfo (6.1_p20180818-r1)
(4/5) Installing ncurses-libs (6.1_p20180818-r1)
(5/5) Installing vim (8.1.1365-r0)
Executing busybox-1.28.4-r3.trigger
OK: 39 MiB in 18 packages
Removing intermediate container 2ec64683445a
---> 7ef7f0decf59
Step 7/11 : COPY --from=build /go/src/github.com/scottyc/webapp/bin/webapp /usr/bin
---> bb3acc66b2d1
Step 8/11 : COPY index.html /web/static/index.html
---> 55ea98882e8d
Step 9/11 : WORKDIR /web
---> Running in 862ff572c41e
Removing intermediate container 862ff572c41e
---> 8ddb19e0f0bb
Step 10/11 : EXPOSE 3000
---> Running in b1b5e1493133
Removing intermediate container b1b5e1493133
---> 702f45123981
Step 11/11 : ENTRYPOINT webapp
---> Running in f980cbfba6c7
Removing intermediate container f980cbfba6c7
---> 4ab677b3d78c
Successfully built 4ab677b3d78c
Successfully tagged scottyc/webapp:latest
jbl@pegasusio:~/docker-demo-webapp$ docker run -d --name webapp -p 3000:3000 scottyc/webapp
3683b93a662aa25f324bac87daf388f1d08e879a070ef9ad7b31b4ca0ce6c0d7
jbl@pegasusio:~/docker-demo-webapp$ docker logs -f webapp
Tomorrow I will quickly build a repo with a traefik 1.7 daemonset, which worked for me; my only problem with that deployment is that I could not configure it for HTTPS, but that's another question.
I will also try your entire recipe, out of sheer despair, at least to get a working traefik.toml file that I have tested.
I also eventually want to ask: is it me, or is everybody having a damn hard time trying to use Traefik? (What was your experience, I mean, did you succeed on the first try?)
So thank you so much again for your help, and tty tomorrow
Kubernetes networking by itself can be quite challenging, but thankfully there is a lot of documentation and many how-tos available to help with the challenges. Traefik adds another layer of complexity on top of this and requires knowledge on both levels to get working properly. When I first set up my cluster with Traefik, I was using Traefik v1 and the documentation for Kubernetes was... minimal. Thankfully a few brave souls (cited in my sources) figured out some of the issues, and I was able to piece everything together. At some point I moved to Traefik v2 and, while the documentation was still rough, I was able to get it to work pretty fast. Now the Traefik Kubernetes documentation is a lot better.
Anyway, if I have the time tomorrow, I'll try to get Traefik to work without a LoadBalancer, using the ClusterIP mode.
update: about the LoadBalancer Service (the one you suggested trying with ClusterIP), it stays pending because I use minikube, and minikube apparently does not support the LoadBalancer type. So I'll handle drifting to a more prod-like K8s cluster for my home tests, but I am dying to ask you just one question, if that is okay:
What configuration do I have to apply to your recipe, so that I can deploy HTTPS apps?
(thank you so much for enabling me, so far, to do what I already did using Traefik v1.7)
For example, in my tests, I used a ConfigMap to pass in a custom traefik config file. I also heard about the new dynamic configuration concept; I have a very hard time with it, and could never get it to let me serve HTTPS apps...
Kubernetes without a LoadBalancer to provide IPs is not pretty. This is where metallb comes into play. Have you tried to apply my metallb configuration to your setup?
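For reference, a minimal MetalLB layer 2 configuration in the v0.x ConfigMap format looks like the following. This is only a sketch: the 192.168.1.240-192.168.1.250 address range is an assumption; pick free addresses on your LAN, outside your DHCP pool.

```yaml
# Sketch of a MetalLB (v0.x) layer2 address pool. The address range
# below is an assumption -- use free addresses on your own network.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```

With this applied, LoadBalancer Services should receive an EXTERNAL-IP from that pool instead of staying in pending.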
For HTTPS, I didn't try it but this should expose it via Traefik.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:v2.0
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: admin
              containerPort: 8080
          args:
            - --api.insecure
            - --accesslog
            - --providers.kubernetescrd
            - --entrypoints.web.Address=:80
            - --entrypoints.websecure.Address=:443
---
kind: Service
apiVersion: v1
metadata:
  name: traefik
  namespace: traefik
  annotations: {}
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: http
    - protocol: TCP
      port: 443
      name: https
  type: LoadBalancer
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 8080
      name: dashboard
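To actually serve an app over HTTPS through the websecure entrypoint defined above, a Traefik v2 IngressRoute along these lines should work. This is a sketch under assumptions: the whoami Service, the hostname, and the Secret name are all hypothetical; the Secret must be a kubernetes.io/tls Secret holding the certificate and key.

```yaml
# Sketch: route HTTPS traffic for an assumed "whoami" app through the
# websecure (:443) entrypoint. Hostname, Service, and Secret names are
# assumptions -- replace them with your own.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-tls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.pegasusio.io`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    secretName: whoami-tls-cert  # kubernetes.io/tls Secret with cert + key
```

The TLS termination happens in Traefik itself here; the backend Service can stay plain HTTP.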
@Thoorium thank you so much!
Could not resist before sleep: https://github.com/pokusio/the-traefik-path/releases/tag/0.0.1
When you create a LoadBalancer Service, Kubernetes assigns an Ingress to it automatically, without explicitly mentioning that Ingress among the Kubernetes Ingresses... so a LoadBalancer actually becomes an Ingress, under the hood...? Just really late thoughts.
While Kubernetes can run everywhere, it was built primarily for cloud services. As such, IP provisioning is done via LoadBalancer services, which are external to Kubernetes and managed by the cloud providers. You can read a bit more here: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/. Since you are using a bare metal solution here, you need something to provide IPs to your services, since you don't have an external load balancer to do the task. This is where MetalLB https://metallb.universe.tf/ comes to help. It is a LoadBalancer implementation for bare metal Kubernetes installations.
Using my setup will probably solve all your issues at this point.
What I am most interested in: there is no External IP for a ClusterIP type Service, because it is internal networking, not external (so pure flannel).
For the first 3 points, my setup will be able to do that. MetalLB handles the IP stuff between the router and the nodes. So if a node goes down, everything should get reassigned automatically and be available again via the same IP.
For the last point, I've tested my setup with Kube-router instead of Flannel, but that's pretty much it. I haven't tested any other network setup.
hi @Thoorium... (you speak French :) ? Let's keep writing in English so non-French speakers can get into the discussion?)
Does kube-router provide an External IP that actually routes into the Kubernetes cluster, to a LoadBalancer Service type? (kube-router?) What do you think? The work plan I have in mind:
hi @Thoorium:
To get a kind: Service of type LoadBalancer in Kubernetes, you need an actual load balancer added to Kubernetes: and that load balancer is what "gives" an External IP.
Kubernetes External DNS will only help with domain name resolution in a cluster context, nothing like attribution of External IPs; though related, we understand why.
MetalLB will do perfectly, and if I am right in what I understand of Kubernetes, then I'll say that in your case you do get a value under External IP if you create a kind: Service of type LoadBalancer, am I right? The External IP you get lives in the flannel network.
hi again @Thoorium:
Withdraw Flannel from your recipe, and guess what you are going to get? You will get an External IP from the local network your VMs are using. So it's bridged networking to the underlay network, isn't it?
So now the most interesting setup is (reproducing the AWS / cloud providers setup inside datacenters):
This way, we have network isolation that acts as multiple successive airlocks.
Any machine we choose, in our infrastructure :
Here above :
- two kubernetes clusters, both using Metallb: LoadBalancer type Services get their External IP from the underlay network, connected to the outside world
- where the two networks are connected is where we have the VMs with an OpenVSwitch switch running inside, with both green and glowing pink network interfaces (connected to the OpenVSwitch switch)
- Kubernetes, also with MetalLB, plus a NodeAutoScaler which will create / delete VMs, with either switches or an SDN controller, to scale the Kubernetes cluster capacity up or down, beyond the pod density limit
We don't care on what machines the switches and routers run (containers, VMs, or real hardware): what matters is which network interfaces they are connected to.
The setup in my schematics brings in a little bit of network redundancy, in case of a nuke.
What is funny here, though, is that I will probably provision the OpenStack... inside containers, using RDO.
If you are interested, I would like to test every single assertion in https://www.youtube.com/watch?v=Ytc24Y0YrXE
Example:
- let myClusterA be a Kubernetes cluster, installed using [TODO: give exact reference to provisioning recipe], having 6 nodes: 3 masters and 3 workers, using k3s/k3d
- in myClusterA, I deployed MetalLB as LoadBalancer, directly bridged to my underlay network
- Traefik v1.7 as IngressController in myClusterA
- myFirstApp and myOtherApp deployed in myClusterA, each endowed with an Ingress for traefik, like https://github.com/containous/traefik/blob/v1.7/examples/k8s/cheese-ingress.yaml
I do all this in a network that is behind a router, behind my home internet access router: that way, I can entirely manage my network IP address space without having to reconfigure my home ISP's router.
Behind that router, my local network IP address space will be 192.168.3.0/24, and when I have two openstacks, I will use two networks with 192.167.0.0/16 and 192.168.0.0/16 IP address spaces for each openstack installation. I will connect those two networks to one root network, from which I'll operate the two others: a smaller network where me, the super admin devops, works safely protected from the internet, behind firewalls, 192.169.0.0/24.
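For reference, an Ingress in the style of the linked cheese-ingress.yaml, for the hypothetical myFirstApp, would look roughly like this (Traefik v1.7 era, extensions/v1beta1 API; the hostname and backend Service name are assumptions):

```yaml
# Sketch of a Traefik v1.7-style Ingress for the hypothetical
# myFirstApp, modeled on the cheese-ingress.yaml example linked above.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myfirstapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: myfirstapp.pegasusio.io   # assumed hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: myfirstapp   # assumed ClusterIP Service
          servicePort: 80
```

The backend Service here is an ordinary ClusterIP Service; only Traefik itself needs an externally reachable IP.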
Then answer those questions:
- to which machine does the External IP granted to the Ingress belong? (A master only? Who creates that IP address, and gives it to that machine? How can those IP addresses be any other IP address than the IP of one of the cluster nodes? see flannel)
Then test (automated tests) whether:
- the Ingress for myFirstApp is listening on all cluster nodes (or only masters?) (the app has an Ingress, but its kind: Service is of type ClusterIP, not LoadBalancer)
- the Ingress for myOtherApp is listening on all cluster nodes (or only masters?)
Do the same tests, but this time with Traefik v2 and IngressRoutes, instead of Ingresses.
I appreciate the interest, but this is getting out of the scope of the initial question. I would suggest that you create a new repository and document your findings/advancement there instead ;)
You can copy/paste this comment and I'll answer the questions there at the best of my abilities.
I agree :) : I'll be glad to collaborate with you here : https://github.com/pokusio/k3s-topgun
Hi, first, thank you for sharing your work.
Basically, your repo is one of the most serious I found with a recipe to deploy traefik as ingress controller.
I have tried a lot of different quick recipes, and I am having a very hard time finding just one configuration that works.
I would be so grateful if you could tell me what I did wrong using your recipe:
I git cloned your recipe, and in the root folder of your recipe, I executed exactly this: