MoJo2600 / pihole-kubernetes

PiHole on kubernetes

Can't reliably get DNS resolution with multi-node deployment #243

Closed · kenlasko closed this 7 months ago

kenlasko commented 1 year ago

I have a 3-node K3S setup using Traefik for ingress, MetalLB for load balancing and Longhorn for shared storage. I installed Pihole via the latest MoJo2600 Helm chart with the following additional settings:

persistentVolumeClaim:
  enabled: true
  accessModes:
    - ReadWriteMany

ingress:
  enabled: true
  hosts:
    - pihole.mydomain.com
  annotations:
    kubernetes.io/ingress.class: traefik

adminPassword: asf982328374jofdshjkl23479r

serviceWeb:
  type: ClusterIP

serviceDns:
  loadBalancerIP: 192.168.1.100
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer
  externalTrafficPolicy: Local

serviceTCP:
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"

DNS resolution only works when the MetalLB speaker pod on the same node as the Pihole pod answers the request. If I set externalTrafficPolicy to Cluster, resolution works reliably, but then I no longer see the originating client IP address/hostname in Pihole. I posted about this in the Discussions section along with someone else. As far as I know, you can't control which MetalLB speaker will answer for the load-balanced IP, which would mean that running Pihole on a multi-node Kubernetes cluster can't work reliably. At least that's my current understanding. Am I doing something fundamentally wrong? I'm pretty new to Kubernetes.
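
For reference, here is a minimal sketch of the single value that flips between the two behaviours (same chart settings as above; the comments just summarize the trade-off as I understand it):

serviceDns:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.100
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  # Local: Pihole sees the real client IP, but only a node that is actually
  # running the Pihole pod can serve traffic sent to 192.168.1.100.
  # Cluster: any node can accept the traffic and forward it to the pod,
  # but the source IP is rewritten, so every query shows up as a node IP.
  externalTrafficPolicy: Local   # or: Cluster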

MoJo2600 commented 1 year ago

Hm... I'm not 100% sure, but I think the 'Local' setting is correct if you want to see the real client IP address. I'm less sure about the 'DNS resolution only works when the MetalLB speaker pod on the same node as the Pihole pod answers the request' part. My understanding was that, with this setup, you would always receive an answer from the MetalLB speaker on the node Pihole is running on. I don't understand why the speakers on the other nodes respond to the query at all.

kenlasko commented 1 year ago

Yes, it seems that Local is the correct way to go, unless you don't care about seeing which client is querying what.

Here's how I believe things work: when you ping an IP address, your machine sends an ARP broadcast asking "who owns this IP address?", and the owner of that address responds with its MAC address. With MetalLB in Layer 2 mode, one node is elected to answer that request with the MAC address of its own interface. If that node isn't currently running the Pihole pod, it can't service the traffic, because externalTrafficPolicy=Local prevents the request from being forwarded to another node.
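
For anyone reading along, a MetalLB Layer 2 setup for this IP would look roughly like the sketch below (this assumes the CRD-based configuration of MetalLB v0.13+; the resource names are only illustrative). The key point is that a single elected speaker answers ARP for the whole pool:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pihole-pool          # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.100/32       # the shared Pihole service IP
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: pihole-l2            # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
    - pihole-pool            # announced via ARP by one elected speaker node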

Right now, I can't use Pihole. My load-balanced IP is 192.168.1.100. The MAC address in the ARP table currently points to the host at 192.168.1.12, while Pihole is running on the host at 192.168.1.11.

[screenshot of the ARP table entry for 192.168.1.100]

If I delete the ARP entry for .100 with arp -d 192.168.1.100 and re-query until I happen to get the right MAC address, then Pihole works fine. It also always works if I set externalTrafficPolicy=Cluster, because traffic can then be forwarded to other nodes, but then you can't see the originating IP/hostname in Pihole.

I must be missing something. Is there a way to force MetalLB to always respond to ARP requests from the node that's currently running Pihole? Is there something else fundamental that I'm missing? I'm pretty new to Kubernetes, but have lots of experience with Docker/Docker-Compose.
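
One possible workaround might be to pin Pihole to a single node and restrict the Layer 2 announcement to that same node. The sketch below is untested and makes two assumptions: that the chart passes a nodeSelector through to the Pihole deployment, and that MetalLB uses the CRD-based config (v0.13+), where L2Advertisement supports nodeSelectors. The node name is hypothetical:

# Helm values for the Pihole chart (assumes nodeSelector is passed through)
nodeSelector:
  kubernetes.io/hostname: node-1       # hypothetical node name

---
# MetalLB: only let that same node answer ARP for the Pihole pool
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: pihole-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - pihole-pool                      # the pool containing 192.168.1.100
  nodeSelectors:
    - matchLabels:
        kubernetes.io/hostname: node-1

Pinning the pod to one node gives up some of the point of a multi-node cluster, but it at least makes the ARP owner predictable.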

i5Js commented 1 year ago

I'm on the same page. I got Pihole working with Traefik, but I lose the original IP/hostname. I have spent a lot of time trying to find a way to do it, but I think I'm done. I'll go back to nginx as my reverse proxy; it works fine with my setup. If somebody has an idea, I'll be more than happy to deploy Traefik again and give it another chance.

chrisbalmer commented 9 months ago

A couple of options can be found in this related discussion: https://github.com/MoJo2600/pihole-kubernetes/discussions/233#discussioncomment-7783486