kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

CloudFlare provider continuously recreates apex records #4720

Open BrianHicks opened 2 months ago

BrianHicks commented 2 months ago

What happened: I have external-dns configured to source from ingresses. This works wonderfully, and all the records get created. However, it updates records for the domain apexes on every run, saying that no hosted zone matches.

What you expected to happen: I expected external-dns not to modify records that already exist and are in the desired state, without my having to filter out apex-level A records.

How to reproduce it (as minimally and precisely as possible):

The minimal reproduction I can find is adding an annotation pointing to a bare domain, then running external-dns with the following configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - args:
        - --source=ingress
        - --provider=cloudflare
        - --cloudflare-proxied
        - --cloudflare-dns-records-per-page=5000
        - --log-level=trace
        env:
        - name: CF_API_TOKEN
          valueFrom:
            secretKeyRef:
              key: apiToken
              name: cloudflare-api-key-ae738f46
        image: registry.k8s.io/external-dns/external-dns:v0.14.2
        name: external-dns
      serviceAccount: external-dns
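
For concreteness, an Ingress pointing at the bare domain, as described above, might look roughly like the following. This is a hypothetical sketch; the actual manifest wasn't shared, and the name, ingress class, and backend are made up — only the apex host/annotation matters here.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apex-example                # hypothetical name
  annotations:
    # bare/apex domain picked up by external-dns (--source=ingress)
    external-dns.alpha.kubernetes.io/hostname: example.com
spec:
  ingressClassName: traefik         # assumed class name for a Traefik install
  rules:
  - host: example.com               # apex host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: some-service      # hypothetical backend Service
            port:
              number: 80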

Each update cycle looks like this (changing my domain to be example.com, for clarity):

time="2024-09-03T18:51:11Z" level=debug msg="no zoneIDFilter configured, looking at all zones"                                                                                                                     time="2024-09-03T18:51:11Z" level=debug msg="Skipping record example.com because no hosted zone matching record DNS Name was detected"
time="2024-09-03T18:51:11Z" level=info msg="Changing record." action=UPDATE record=example.com ttl=1 type=A zone=a424b0c212d2d7999b56932ce53a77fb
time="2024-09-03T18:51:12Z" level=info msg="Changing record." action=UPDATE record=example.com ttl=1 type=A zone=a424b0c212d2d7999b56932ce53a77fb
time="2024-09-03T18:51:12Z" level=info msg="Changing record." action=UPDATE record=example.com ttl=1 type=A zone=a424b0c212d2d7999b56932ce53a77fb
time="2024-09-03T18:51:12Z" level=info msg="Changing record." action=UPDATE record=example.com ttl=1 type=TXT zone=a424b0c212d2d7999b56932ce53a77fb

(Worth noting that I do expect to have three IPs for each name; that's fine.)
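For reference, the "no zoneIDFilter configured" line suggests the zone lookup can be pinned explicitly via the filter flags. A sketch of what that would look like in the container args, using the zone ID from the logs above (I haven't verified whether this changes the apex behavior):

args:
- --source=ingress
- --provider=cloudflare
- --cloudflare-proxied
- --zone-id-filter=a424b0c212d2d7999b56932ce53a77fb   # zone ID taken from the logs above
- --domain-filter=example.com                         # and/or restrict by domain name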

Anything else we need to know?: Nope, I don't think so!

Environment:

darren-recentive commented 2 months ago

I've been around this repository long enough that I've seen this crop up from time to time :) If you're using ingress-nginx there's a known bug; this might help: https://github.com/kubernetes-sigs/external-dns/issues/3799#issuecomment-1804400073

If you haven't already, I'd set a Deployment strategy that avoids duplicate Pods, or switch to a ReplicaSet.

BrianHicks commented 2 months ago

Ah, no, I’m using Traefik. Good to know, though!

BrianHicks commented 2 months ago

oh, that second part of your comment didn't come in the email. Where would you recommend setting that configuration?

darren-recentive commented 2 months ago

> oh, that second part of your comment didn't come in the email. Where would you recommend setting that configuration?

Set the Deployment strategy to Recreate to avoid duplicate Pods; see this repository's Helm chart configuration for an example.
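That is, the field sits directly under the Deployment's spec (the manifest in the original report already sets it):

spec:
  strategy:
    type: Recreate   # replaces the old Pod before starting a new one, so two copies never run at once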

BrianHicks commented 2 months ago

Oh, I see. It's already set to that. This doesn't seem to be a Kubernetes object thing, but rather the behavior of external-dns within the pod.

darren-recentive commented 2 months ago

> Oh, I see. It's already set to that. This doesn't seem to be a Kubernetes object thing, but rather the behavior of external-dns within the pod.

That's one less thing to worry about. I had presumed you were using ingress-nginx, but you're using Traefik - is it this Helm chart, perhaps: https://github.com/traefik/traefik-helm-chart?

Some findings that may be helpful:

Relevant in that both issue authors are using Traefik as their ingress controller and are seeing similar symptoms of recreated A records.

I'm not using Traefik or external-dns for managing A records, so I can't reproduce; hope this helps, though! :)