aravindhkudiyarasan closed this issue 4 weeks ago.
Same issue.
Any update on this bug? We are using external-dns, and in the newer version it is constantly deleting and re-adding the A records of our Ingress objects.
We're experiencing a similar issue after updating from 0.13.4 => 0.14.0, but only with a subset of records. A common thread among the domains external-dns misbehaves on is that they all have NS records that are not managed by external-dns.
Log excerpt with example values:
time="2023-11-20T15:58:29Z" level=info msg="Change zone: domain-one-com batch #0"
time="2023-11-20T15:58:29Z" level=info msg="Del records: domain-one.com. A [70.203.58.243] 300"
time="2023-11-20T15:58:29Z" level=info msg="Del records: domain-one.com. TXT [\"heritage=external-dns,external-dns/owner=dns-frontend-prod-428b8056,external-dns/resource=ingress/traefik-ingresses/traefik-apps-domain-one.com\"] 300"
time="2023-11-20T15:58:29Z" level=info msg="Add records: domain-one.com. A [70.203.58.243] 300"
time="2023-11-20T15:58:29Z" level=info msg="Add records: domain-one.com. TXT [\"heritage=external-dns,external-dns/owner=dns-frontend-prod-428b8056,external-dns/resource=ingress/traefik-ingresses/traefik-apps-domain-one.com\"] 300"
time="2023-11-20T15:58:31Z" level=info msg="Change zone: domain-two-com batch #0"
time="2023-11-20T15:58:31Z" level=info msg="Del records: domain-two.com. A [70.203.58.243] 300"
time="2023-11-20T15:58:31Z" level=info msg="Del records: domain-two.com. TXT [\"heritage=external-dns,external-dns/owner=dns-frontend-prod-428b8056,external-dns/resource=ingress/traefik-ingresses/traefik-apps-domain-two\"] 300"
time="2023-11-20T15:58:31Z" level=info msg="Add records: domain-two.com. A [70.203.58.243] 300"
time="2023-11-20T15:58:31Z" level=info msg="Add records: domain-two.com. TXT [\"heritage=external-dns,external-dns/owner=dns-frontend-prod-428b8056,external-dns/resource=ingress/traefik-ingresses/traefik-apps-domain-two\"] 300"
time="2023-11-20T15:58:32Z" level=info msg="Change zone: domain-three-com batch #0"
time="2023-11-20T15:58:32Z" level=info msg="Del records: domain-three.com. A [70.203.58.243] 300"
time="2023-11-20T15:58:32Z" level=info msg="Del records: domain-three.com. TXT [\"heritage=external-dns,external-dns/owner=dns-frontend-prod-428b8056,external-dns/resource=ingress/traefik-ingresses/traefik-apps-domain-three\"] 300"
time="2023-11-20T15:58:32Z" level=info msg="Add records: domain-three.com. A [70.203.58.243] 300"
time="2023-11-20T15:58:32Z" level=info msg="Add records: domain-three.com. TXT [\"heritage=external-dns,external-dns/owner=dns-frontend-prod-428b8056,external-dns/resource=ingress/traefik-ingresses/traefik-apps-domain-three\"] 300"
I had a similar issue with this, and adding the flag --txt-cache-interval=1h fixed it for me. Give it a try and see?
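For anyone deploying through the external-dns Helm chart (as in the environment details at the end of this thread), here is a minimal sketch of how that flag can be passed, assuming the kubernetes-sigs chart's extraArgs value:

# values.yaml sketch (kubernetes-sigs external-dns Helm chart assumed)
provider: aws                      # Route 53, per the environment section below
extraArgs:
  - --txt-cache-interval=1h        # refresh the TXT registry cache only once per hour

The interval is a trade-off: external-dns works from a cached view of the ownership (TXT) records between refreshes, in exchange for far fewer registry lookups per reconcile loop.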
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
This is still an issue.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened:
external-dns is constantly deleting and re-adding the A records of the Ingress objects, even though no changes have been made. Only one instance of external-dns is running on EKS and managing the DNS records.
What you expected to happen:
No changes are made to the existing DNS records.
How to reproduce it (as minimally and precisely as possible):
Add the following annotations to an EKS Service of type LoadBalancer.
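The annotations themselves are not reproduced above, so as a purely illustrative sketch, a Service of the kind described might look like the following (the name and the hostname value are hypothetical; external-dns.alpha.kubernetes.io/hostname is the standard annotation external-dns reads to decide which record to create):

apiVersion: v1
kind: Service
metadata:
  name: example-app                                   # hypothetical name
  annotations:
    # hypothetical value; external-dns creates a DNS record for this hostname
    # pointing at the provisioned load balancer
    external-dns.alpha.kubernetes.io/hostname: app.domain-one.com
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080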
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): registry.k8s.io/external-dns/external-dns:v0.13.6 (Helm chart 1.13.1)
- DNS provider: Route 53