Closed tekenny closed 4 years ago
Can you share MetalLB ConfigMap and services descriptions?
I don't know anything about MetalLB, is that needed?
I just followed the instructions in Readme.md and deployed pihole on my Raspberry Pi k3s cluster using the attached YAML
Also noticed the following but can't find that yaml:

rpi4:~/src $ kubectl get configmaps -n pihole
NAME                    DATA   AGE
pihole-custom-dnsmasq   2      2d
There are some additional tasks needed; I recommend the following tutorial:
https://kauri.io/build-your-very-own-self-hosting-platform-with-raspberry-pi-and-kubernetes/
Thanks, it looks like that may have the details I need!
@i5Js
There are some additional tasks needed; I recommend the following tutorial:
https://kauri.io/build-your-very-own-self-hosting-platform-with-raspberry-pi-and-kubernetes/
Site seems to have 404'd, any mirror?
@tekenny
Thanks, it looks like that may have the details I need!
Any details as to what might've fixed your issue?
@jorp it seems the host is down, you should wait.
@jorp having same problem and desperately waiting for kauri.io to come up :)
@jorp For me, the traefik included in k3s was causing the trouble. After removing the traefik deployment I installed pihole and it worked fine, with no more pending TCP issue. More info: https://github.com/colin-mccarthy/k3s-pi-hole
but I have no idea yet what the consequences of removing traefik are :)
@taikoThe You need to install metallb if you don't want to install traefik
@i5Js I see, but won't I have the same problem as with traefik installed? Would you mind clarifying why I need either of them?
The pihole deployment will create a Kubernetes service for you. A Kubernetes service is just an abstract way to expose an application. Kubernetes by default has no way to create a 'physical' connection to the world outside of the cluster. This is where metallb or traefik etc. comes into play. MetalLB, for example, will see that a service of type LoadBalancer is created and will create a virtual loadbalancer for you, which exposes the service on a given IP address. MetalLB will handle the routing from the outside to the pod inside the cluster.
There is another way, and I think this is what colin mccarthy is doing. You can expose a port directly on the host machine. But as Pi-hole is using very low ports you will most likely run into trouble, as those ports are usually already used by other services (80, 443, 53 TCP/UDP). Exposing them on a separate IP is the better solution.
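To illustrate the first option, here is a minimal sketch of what such a service could look like (the name, selector, and port are hypothetical; the pihole chart generates the real services for you):

```yaml
# Hypothetical service of type LoadBalancer. A bare-metal load balancer
# like MetalLB watches for this type and assigns an external IP from its
# configured pool; without it, the service stays <pending> on bare metal.
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns-tcp        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: pihole               # hypothetical label
  ports:
    - name: dns
      port: 53
      protocol: TCP
```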
but that was actually my problem. k3s comes with traefik installed, occupying 80 and 443, and then pihole wasn't working, with the TCP pods stuck pending
Then I would recommend installing metallb on the cluster and letting metallb expose the ports on a different IP. I think metallb and traefik should be able to work side by side. I'm not sure if there is a way to tell traefik to ignore the pihole services and let metallb handle those. I have no experience with traefik.
I tried to install metallb with helm alongside, which itself was not a problem.
helm install metallb stable/metallb --namespace kube-system \
  --set configInline.address-pools[0].name=default \
  --set configInline.address-pools[0].protocol=layer2 \
  --set configInline.address-pools[0].addresses[0]=192.168.0.240-192.168.0.250
I then tried to use an IP that is not already assigned in the cluster. The problem was that metallb and traefik both tried to set up a loadbalancer at the same time, so MetalLB and traefik continuously fight over the ownership of the services, deleting each other's IP allocations.
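One way to observe this conflict (a sketch; the service name and namespace here are assumptions, adjust them to your install) is to watch the EXTERNAL-IP column and check the service events:

```shell
# Watch the service; with two controllers fighting, EXTERNAL-IP flips back and forth
kubectl get svc -n pihole -w

# The events show which controller last claimed or released the IP
kubectl describe svc pihole-tcp -n pihole
```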
If you’re going to use metallb, you should uninstall traefik, since both are load balancers
I know it would work if I uninstalled traefik, but I'm trying to use both: since traefik is a Layer 7 LB and metallb Layer 4, they serve different purposes. Maybe there is a way to assign an IP range to MetalLB and another to traefik.
I will try this tutorial https://www.disasterproject.com/kubernetes-with-external-dns/ and report back
Interesting.. I’m using Nginx as ingress & proxy load balancer.
OK, it seems to work; however, I had to do the following:
New installation:
pi@k3s-master:~ $ export K3S_KUBECONFIG_MODE="644"
pi@k3s-master:~ $ export INSTALL_K3S_EXEC=" --disable servicelb"
pi@k3s-master:~ $ curl -sfL https://get.k3s.io | sh -
-> The problem is in fact servicelb, not traefik, as I understood it. Like I mentioned, traefik does Layer 7 loadbalancing. So, to use metallb instead of servicelb, do the following:
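If k3s is already installed, a fresh install may not be necessary: re-running the installer with the flag set should reconfigure it, since the k3s install script is re-runnable. A sketch, not verified on every k3s version:

```shell
# Re-run the installer with servicelb disabled (the script updates an existing install)
export INSTALL_K3S_EXEC="--disable servicelb"
curl -sfL https://get.k3s.io | sh -

# Verify the svclb pods are gone before installing metallb
kubectl get pods -n kube-system | grep svclb || echo "no svclb pods"
```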
I configured a file metallb.yml:

configInline:
  address-pools:
    - name: k3s-service-ips
      protocol: layer2
      addresses:
        - 192.168.111.10-192.168.111.20
I used a range which does not interfere with my DHCP scope.
then I installed the metallb helm chart:
helm install -f metallb.yml --namespace=kube-system metallb stable/metallb
Then I created the StorageClass nfs-client (there are many tutorials on this topic).
I created a pihole.yml:

persistentVolumeClaim:
  enabled: true
  storageClass: nfs-client
ingress:
  enabled: true
serviceTCP:
  loadBalancerIP: '192.168.111.14'
  type: LoadBalancer
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
serviceUDP:
  loadBalancerIP: '192.168.111.14'
  type: LoadBalancer
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
adminPassword: 123456789
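With both files in place, the charts can then be installed with these values. A sketch of the commands (the pihole chart repo URL and release/namespace names are assumptions; check the chart's README):

```shell
# Install metallb with the pool configured above
helm install -f metallb.yml --namespace=kube-system metallb stable/metallb

# Install pihole with the values above (repo URL is an assumption)
helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
helm repo update
helm install pihole mojo2600/pihole -f pihole.yml --namespace pihole --create-namespace
```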
Note the metallb annotations... this is important: allow-shared-ip lets the TCP and UDP services share the same IP.
Then you're good to go :+1:
However, I have to admit that right now I cannot tell how to use both an IP controlled by metallb (Layer 2) and traefik (Layer 7) together...
This is probably a config error on my part but would appreciate any help.
All three pods for svclb-pihole-tcp are stuck in pending status.
Describe states:

Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.
netstat --listen on my 3 nodes indicates nothing on ports 53, 80, or 443.
What might I be missing or messed up? Thanks!
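A sketch of how to find which pods are already holding the host ports (the svclb pods from k3s's bundled servicelb request host ports directly, and svclb-traefik typically occupies 80/443 on a default install; names are assumptions):

```shell
# List the servicelb daemonsets in kube-system; look for svclb-traefik
kubectl get daemonsets -n kube-system

# Show which pods declare a hostPort; lines with a number after the name hold a port
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | awk 'NF>1'
```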