MoJo2600 / pihole-kubernetes

PiHole on kubernetes

pihole-FTL: no process found #223

Closed: necrogami closed this issue 2 years ago

necrogami commented 2 years ago

So I recently tried installing this via the Helm chart:

helm install my-pihole mojo2600/pihole --values ../k8/pihole.values.yml --version 2.5.8

This is my values file:

dnsmasq:
  customDnsEntries:
    - address=/system/town/192.168.18.1

persistentVolumeClaim:
  enabled: true

serviceWeb:
  loadBalancerIP: 192.168.18.20
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer

serviceDns:
  loadBalancerIP: 192.168.18.20
  annotations:
    metallb.universe.tf/allow-shared-ip: pihole-svc
  type: LoadBalancer
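
Both services are pinned to the same external address, which MetalLB permits because they share the allow-shared-ip annotation. A quick sanity check that MetalLB actually assigned the IP to both (a sketch; the exact service names depend on the chart, so I just filter on the release name):

kubectl get svc -n default | grep my-pihole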

When I run kubectl logs my-pihole -f, it keeps printing the following on repeat. Any idea why?

Starting pihole-FTL (no-daemon) as pihole
Stopping pihole-FTL
pihole-FTL: no process found
Starting pihole-FTL (no-daemon) as pihole
Stopping pihole-FTL
pihole-FTL: no process found
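
For anyone debugging the same thing, FTL's own log inside the container should show why it keeps exiting. A sketch (the pod name is the one from the describe output in the next comment, and the log path is an assumption for this image version):

kubectl exec my-pihole-c79597f87-v6xns -- tail -n 50 /var/log/pihole-FTL.log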
necrogami commented 2 years ago
kubectl describe pod my-pihole
Name:         my-pihole-c79597f87-v6xns
Namespace:    default
Priority:     0
Node:         k3-worker-5.local/192.168.17.6
Start Time:   Sat, 02 Apr 2022 21:58:58 -0400
Labels:       app=pihole
              pod-template-hash=c79597f87
              release=my-pihole
Annotations:  checksum.config.adlists: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
              checksum.config.blacklist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
              checksum.config.dnsmasqConfig: a61b02e4c8222486f9cb94a5b598881fb4ccdeb201e3b47b7aab25d257df3ad
              checksum.config.regex: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
              checksum.config.staticDhcpConfig: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
              checksum.config.whitelist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
Status:       Running
IP:           10.42.5.9
IPs:
  IP:           10.42.5.9
Controlled By:  ReplicaSet/my-pihole-c79597f87
Containers:
  pihole:
    Container ID:   containerd://f2c333c73546dd60375f98c402ce43a6f2c996bf167b74881ff1cf44b7d4f07d
    Image:          pihole/pihole:2022.02.1
    Image ID:       docker.io/pihole/pihole@sha256:60a9127372b0f7bb4b5eb09bc95e2735eb7b237999acf4bb079eb14b0f14632e
    Ports:          80/TCP, 53/TCP, 53/UDP, 443/TCP, 67/UDP
    Host Ports:     0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP
    State:          Running
      Started:      Sat, 02 Apr 2022 21:58:59 -0400
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=10
    Readiness:      http-get http://:http/admin.index.php delay=60s timeout=5s period=10s #success=1 #failure=3
    Environment:
      WEB_PORT:      80
      VIRTUAL_HOST:  pi.hole
      WEBPASSWORD:   <set to the key 'password' in secret 'my-pihole-password'>  Optional: false
      PIHOLE_DNS_:   8.8.8.8;8.8.4.4
    Mounts:
      /etc/addn-hosts from custom-dnsmasq (rw,path="addn-hosts")
      /etc/dnsmasq.d/02-custom.conf from custom-dnsmasq (rw,path="02-custom.conf")
      /etc/dnsmasq.d/05-pihole-custom-cname.conf from custom-dnsmasq (rw,path="05-pihole-custom-cname.conf")
      /etc/pihole from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5mfhr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-pihole
    ReadOnly:   false
  custom-dnsmasq:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-pihole-custom-dnsmasq
    Optional:  false
  kube-api-access-5mfhr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m57s  default-scheduler  Successfully assigned default/my-pihole-c79597f87-v6xns to k3-worker-5.local
  Normal  Pulled     5m56s  kubelet            Container image "pihole/pihole:2022.02.1" already present on machine
  Normal  Created    5m56s  kubelet            Created container pihole
  Normal  Started    5m56s  kubelet            Started container pihole
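
Worth noting from the output above: Restart Count is 0 and the pod reports Ready, so the kubelet isn't killing anything; the start/stop loop is happening inside the container's own service supervision. One way to confirm the web side still serves despite the log noise (a sketch; assumes curl is available inside the image):

kubectl exec my-pihole-c79597f87-v6xns -- curl -s -o /dev/null -w '%{http_code}\n' http://localhost/admin/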
adalric commented 2 years ago

What was the fix for this?

Tchoupinax commented 2 years ago

> What was the fix for this?

I had the same issue and solved it by using a more recent image than the default. The issue was known and has since been fixed in an update. :)
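
For anyone else hitting this: one way to pin a newer image is through the chart's image values, assuming the chart follows the usual image.tag convention (the tag below is only illustrative; pick a current release):

helm upgrade my-pihole mojo2600/pihole --values ../k8/pihole.values.yml --set image.tag=2022.04.3

or equivalently in the values file:

image:
  tag: "2022.04.3"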