Closed. eXodus1440 closed this issue 2 months ago.
On further troubleshooting, Pi-hole triggers a restart (`SIGTERM`) when adding or deleting records via the web interface as well, once per add/delete event.

The issue now seems more related to external-dns being unable to track the records it has created in Pi-hole from `DNSEndpoint` resources; as a result, it deletes and re-adds the entries every sync interval (`1m0s` by default), once per `DNSEndpoint` resource.

Currently using a phantom `Ingress` resource as a workaround rather than a `DNSEndpoint`; example below:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
              protocol: TCP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-ingress-http
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`nginx-test.example.io`)
      kind: Rule
      services:
        - name: nginx
          port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: nginx-test.example.io
```
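A possible alternative to the phantom `Ingress` would be to annotate the `Service` directly. This is only a sketch, untested: it assumes `--source=service` is enabled (as in the external-dns config in this thread) and uses the `external-dns.alpha.kubernetes.io/hostname` and `target` annotations; the target address is hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx-test.example.io
    # A ClusterIP Service normally exposes no external target, so pin one
    # explicitly (hypothetical address for this sketch):
    external-dns.alpha.kubernetes.io/target: 172.16.0.15
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: ClusterIP
```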
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Do you use the registry? Can you share how you start external-dns? Can you check whether the kubelet tries to terminate external-dns? Something is sending a SIGTERM, and from what you have shared I don't see that this is an external-dns issue.
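For reference, the external-dns Deployment in the original report (quoted further down this thread) runs without a registry; the relevant args are:

```yaml
# Excerpt from the reporter's external-dns Deployment args
args:
  - --registry=noop        # no TXT ownership records; external-dns cannot tell which records it owns
  - --policy=upsert-only   # per the external-dns docs, this policy should never plan deletions
```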
I was experiencing the same problem, but I assumed it was because Pi-hole got overwhelmed and the FTL service crashed. How can I provide more information?
This is how I added it:
https://github.com/BoKKeR/flux-cluster/commit/1fc1c863f6c2d7df1c4b94c75c5f6ce47d37ae40
I have posted a workaround in this thread.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened: Declaring `DNSEndpoint` resources while using `--provider=pihole` causes Pi-hole to continually restart (`SIGTERM`).

pi-hole logs from `kubectl logs pihole-8669bb59d7-g4t6z --follow`:

pi-hole gdb debug with `kubectl exec -it pihole-8669bb59d7-g4t6z -- /bin/sh`:

external-dns logs from `kubectl logs external-dns-5fdf764c95-bnlfh --follow`:

When following logs from both the external-dns and pi-hole containers in two separate windows, `Stopping pihole-FTL` occurs for each external-dns `delete` or `add` event. A full cycle occurs every 60 seconds, which is in line with external-dns' default `--interval=1m0s`.
What you expected to happen: A/CNAME record creation without continually triggering a restart (`SIGTERM`) of the pihole-FTL process.

How to reproduce it (as minimally and precisely as possible):
pi-hole config
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: pihole-configmap
data:
  TZ: "Europe/London"
  ADMIN_EMAIL: "admin@example.io"
  PIHOLE_DNS_: "8.8.8.8;8.8.4.4"
  VIRTUAL_HOST: pihole.example.io
  DNSMASQ_LISTENING: all
---
apiVersion: v1
kind: Secret
metadata:
  name: pihole-secret
type: Opaque
data:
  WEBPASSWORD: U3VwZXJTZWNyZXRQYXNzd29yZA== #SuperSecretPassword
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
        - name: pihole
          image: docker.io/pihole/pihole:2023.02.2
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE #Allowing gdb to debug the pihole-FTL process
          envFrom:
            - configMapRef:
                name: pihole-configmap
            - secretRef:
                name: pihole-secret
          ports:
            - name: pihole-dns-udp
              containerPort: 53
              protocol: UDP
            - name: pihole-dns-tcp
              containerPort: 53
              protocol: TCP
            - name: pihole-web
              containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
  annotations:
    external-dns.alpha.kubernetes.io/hostname: pihole.example.io
spec:
  selector:
    app: pihole
  ports:
    - name: pihole-web
      port: 80
      targetPort: 80
      protocol: TCP
    - name: pihole-dns-tcp
      port: 53
      targetPort: 53
      protocol: TCP
    - name: pihole-dns-udp
      port: 53
      targetPort: 53
      protocol: UDP
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: 172.16.0.18
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pihole.example.io
spec:
  secretName: pihole.example.io
  dnsNames:
    - pihole.example.io
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: pihole-ingress-https
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`pihole.example.io`)
      kind: Rule
      services:
        - name: pihole
          port: 80
  tls:
    secretName: pihole.example.io
```

external-dns config
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list","watch"]
  - apiGroups: ["externaldns.k8s.io"]
    resources: ["dnsendpoints"]
    verbs: ["get","watch","list"]
  - apiGroups: ["externaldns.k8s.io"]
    resources: ["dnsendpoints/status"]
    verbs: ["get","update","patch","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.2
          env:
            - name: EXTERNAL_DNS_PIHOLE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pihole-secret
                  key: WEBPASSWORD
          args:
            - --source=service
            - --source=ingress
            - --source=crd
            - --registry=noop
            - --policy=upsert-only
            - --provider=pihole
            - --pihole-server=http://pihole.default.svc.cluster.local
            - --log-level=debug
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes token files
```

external-dns crd-manifest
dnsendpoint resource
```yaml
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example-a-record
spec:
  endpoints:
    - dnsName: a.example.io
      recordTTL: 180
      recordType: A
      targets:
        - 172.16.0.15
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example-cname-record
spec:
  endpoints:
    - dnsName: cname.example.io
      recordTTL: 180
      recordType: CNAME
      targets:
        - a.example.io
```

Anything else we need to know?:
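When debugging this, it can help to query Pi-hole's custom-DNS admin API by hand and watch the record list change between external-dns sync intervals. The helper below is a sketch: the `api.php?customdns` endpoint and its `action`/`domain`/`ip`/`auth` parameters are assumptions based on Pi-hole v5's admin API (which appears to be what the pihole provider drives), so adjust to your deployment.

```python
from urllib.parse import urlencode

# Assumed to match --pihole-server from the external-dns args above
PIHOLE_URL = "http://pihole.default.svc.cluster.local"

def custom_dns_url(action: str, domain: str = None, ip: str = None, token: str = "") -> str:
    """Build a URL for Pi-hole v5's custom-DNS API (action: get/add/delete).

    `token` is the API token from the Pi-hole admin settings page
    (an assumption; some setups use the hashed WEBPASSWORD instead).
    """
    params = {"customdns": "", "action": action, "auth": token}
    if domain:
        params["domain"] = domain
    if ip:
        params["ip"] = ip
    return f"{PIHOLE_URL}/admin/api.php?{urlencode(params)}"

# Example: build the URL that lists current custom DNS records;
# fetch it with curl or urllib between sync intervals to see the churn.
print(custom_dns_url("get", token="<api-token>"))
```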
Environment:
- `kubectl exec -it external-dns-5fdf764c95-bnlfh -- external-dns --version` returns a blank line, but the container is pulled from registry.k8s.io with the `v0.13.2` tag
- Also using MetalLB version `v0.13.9` as the `LoadBalancer` provider