Closed: gennady-voronkov closed this issue 2 years ago.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
You most likely have the TTL annotation (external-dns.alpha.kubernetes.io/ttl) set on your service, which the Infoblox provider does not support. Can you check if it's there?
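A quick way to check is kubectl get service <name> -o yaml and looking for that annotation. If you prefer to check programmatically, here is a small client-go sketch (the default namespace and the service name my-service below are placeholders):

```go
// Sketch: check whether a Service carries the external-dns TTL annotation.
// The kubeconfig path, namespace ("default") and service name ("my-service")
// are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := client.CoreV1().Services("default").Get(context.TODO(), "my-service", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if ttl, ok := svc.Annotations["external-dns.alpha.kubernetes.io/ttl"]; ok {
		fmt.Println("TTL annotation present:", ttl)
	} else {
		fmt.Println("TTL annotation not set")
	}
}
```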
/remove-lifecycle stale
Same problem here. No external-dns.alpha.kubernetes.io/ttl annotation present.
This was happening to me with the DigitalOcean provider. Removing the external-dns.alpha.kubernetes.io/ttl annotation seems to have an effect on it. Probably the diff check is not considering the TTL field? If the contributors could point us in the right direction on how to fix it, we may be able to get a PR going.
Seems like this is the same issue as #1421 and #1959.
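For illustration only (this is not the actual external-dns plan code): if the endpoints generated from the source and the records the provider reports back never agree on a compared field such as TTL, every sync cycle looks like a change, which with providers that update via delete+create shows up as constant flapping.

```go
// Hypothetical illustration (not the actual external-dns plan code): a field
// that never converges between the desired endpoint and what the provider
// reports back makes every sync look like a change.
package main

import "fmt"

type endpoint struct {
	DNSName string
	Target  string
	TTL     int64 // 0 = "no explicit TTL"
}

// needsUpdate mimics a naive diff: any compared field mismatch triggers a change.
func needsUpdate(desired, current endpoint) bool {
	return desired.Target != current.Target || desired.TTL != current.TTL
}

func main() {
	// The source produces an endpoint without an explicit TTL...
	desired := endpoint{DNSName: "app.example.org", Target: "10.0.0.1", TTL: 0}
	// ...but the provider always reports the record with the zone default TTL.
	current := endpoint{DNSName: "app.example.org", Target: "10.0.0.1", TTL: 300}

	for cycle := 1; cycle <= 3; cycle++ {
		if needsUpdate(desired, current) {
			// With providers that implement updates as delete+create, this
			// shows up as the record being removed and recreated every interval.
			fmt.Printf("cycle %d: update %s (ttl %d -> %d)\n",
				cycle, desired.DNSName, current.TTL, desired.TTL)
			current.TTL = 300 // the provider normalizes the TTL again
		}
	}
}
```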
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Haven't yet been able to dig much deeper, but I'm hearing reports of an issue that matches what is reported here with versions 0.8.0 and 0.10.0. No ttl annotation present here either.
/remove-lifecycle stale
@gennady-voronkov: I managed to reproduce the log message you mentioned about the removal of the duplicate, and I found that it appears only when there is another source producing an endpoint with the same "key", which is made up of the DNS name, SetIdentifier and Targets.
ExternalDNS collects the endpoints from Kubernetes in two steps, and both of them are used together in main.go.
Now the question is how I managed to get two sources with the same "key". I used Red Hat OpenShift, which mirrors any Ingress resource (only the default ingress class, actually) to an OpenShift Route resource. So, using both sources (openshift-route and ingress) I saw two endpoints with the same "key":
time="2022-03-31T19:40:12Z" level=debug msg="Endpoints generated from OpenShift Route: openshift-operator-lifecycle-manager/demo-f7fkj: [demo.dronskm.io 0 IN CNAME router-default.apps-crc.testing []]"
...
time="2022-03-31T19:40:12Z" level=debug msg="Endpoints generated from ingress: openshift-operator-lifecycle-manager/demo: [demo.dronskm.io 0 IN CNAME router-default.apps-crc.testing []]"
time="2022-03-31T19:40:12Z" level=debug msg="Removing duplicate endpoint demo.dronskm.io 0 IN CNAME router-default.apps-crc.testing []"
As you can see, the "keys" (everything between the square brackets) for both endpoints are the same even though they come from different sources.
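To make the "key" idea concrete, here is a rough, simplified sketch of de-duplication keyed on DNS name, SetIdentifier and Targets (illustrative only, not the real source code):

```go
// Simplified sketch of de-duplicating endpoints by a key built from
// DNS name, set identifier and targets (illustrative, not the real code).
package main

import (
	"fmt"
	"strings"
)

type endpoint struct {
	DNSName       string
	SetIdentifier string
	Targets       []string
}

func key(e endpoint) string {
	return e.DNSName + "/" + e.SetIdentifier + "/" + strings.Join(e.Targets, ",")
}

func dedup(endpoints []endpoint) []endpoint {
	seen := map[string]bool{}
	var out []endpoint
	for _, e := range endpoints {
		k := key(e)
		if seen[k] {
			// Corresponds to the "Removing duplicate endpoint ..." debug line above.
			fmt.Println("Removing duplicate endpoint", e.DNSName)
			continue
		}
		seen[k] = true
		out = append(out, e)
	}
	return out
}

func main() {
	// One endpoint from the ingress source and one from the openshift-route
	// source, both pointing at the same router hostname, as in the logs above.
	eps := []endpoint{
		{DNSName: "demo.dronskm.io", Targets: []string{"router-default.apps-crc.testing"}},
		{DNSName: "demo.dronskm.io", Targets: []string{"router-default.apps-crc.testing"}},
	}
	fmt.Println("endpoints kept:", len(dedup(eps)))
}
```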
After descoping the ingress from the mirroring, I ended up with a single endpoint coming from the ingress, and the log message disappeared:
...
time="2022-03-31T19:41:12Z" level=debug msg="Endpoints generated from OpenShift Route: openshift-console/console: [console-openshift-console.apps-crc.testing 0 IN CNAME router-default.apps-crc.testing []]"
time="2022-03-31T19:41:12Z" level=debug msg="Endpoints generated from ingress: openshift-operator-lifecycle-manager/demo: [demo.dronskm.io 0 IN CNAME router-default.apps-crc.testing []]"
time="2022-03-31T19:41:12Z" level=debug msg="ignoring record downloads-openshift-console.apps-crc.testing that does not match domain filter"
So, ExternalDNS made the right decision to remove a duplicate. Note also that the CNAME DNS record wasn't removed in Infoblox; it has remained the same since the first time I ran ExternalDNS on my cluster. The log message is about the removal of the duplicate, not about the removal of the DNS record.
You should check whether you have a similar situation of multiple endpoints with the same key. I think it can happen quite easily: the same hostname annotation on different services/ingresses, the same host on different ingresses, OpenShift mirroring, etc.
I am experiencing the same problem: records are constantly being deleted and recreated.
Container Arguments:
- --source=service
- --source=ingress
- --domain-filter=app.acme.local
- --provider=infoblox
- --infoblox-grid-host=XX.XX.XX.XX
- --infoblox-wapi-port=443
- --infoblox-wapi-version=2.3.1
- --log-level=debug
- --txt-owner-id=clu06
- --no-infoblox-ssl-verify
- --registry=txt
- --events
- --policy=sync
I am able to reproduce this issue by running version 0.8.0.
I am running several clusters and have tagged each cluster with a unique txt-owner-id.
Here are the logs for version 0.8.0 (sample output):
time="2022-04-XXT13:03:48Z" level=info msg="Deleting A record named 'app-one.app.acme.local' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:49Z" level=info msg="Deleting A record named 'app-two.app.acme.local' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:49Z" level=info msg="Deleting A record named 'app-three.app.acme.local' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:50Z" level=info msg="Deleting TXT record named 'app-one.app.acme.local' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:50Z" level=info msg="Deleting TXT record named 'app-two.app.acme.local' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:50Z" level=info msg="Deleting TXT record named 'app-three.app.acme.local' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:50Z" level=info msg="Creating A record named 'app-one.app.acme.local' to 'XX.XX.XX.XX' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:51Z" level=info msg="Creating A record named 'app-two.app.acme.local' to 'XX.XX.XX.XX' for Infoblox DNS zone 'app.acme.local'."
time="2022-04-XXT13:03:51Z" level=info msg="Creating A record named 'app-three.app.acme.local' to 'XX.XX.XX.XX' for Infoblox DNS zone 'app.acme.local'."
Here are the logs for version 0.7.4:
time="2022-04-19T08:47:10Z" level=debug msg="Skipping endpoint 5fcfd54d7f-lnl2c.app.acme.local 0 IN A 172.29.10.7 [] because owner id does not match, found: \"\", required: \"clu06\""
time="2022-04-19T08:47:10Z" level=debug msg="Skipping endpoint 55kh5.app.acme.local 0 IN A 172.29.10.11 [] because owner id does not match, found: \"\", required: \"clu06\""
time="2022-04-19T08:47:10Z" level=debug msg="Skipping endpoint iodnfoi44.app.acme.local 0 IN A 172.29.10.130 [] because owner id does not match, found: \"\", required: \"clu06\""
Version 0.7.4 fetches records and occasionally removes a duplicate endpoint but mostly skips records, whereas version 0.8.0 constantly flaps, deleting and recreating records.
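For context, the 0.7.4 "Skipping endpoint ... because owner id does not match" lines appear to come from ownership filtering based on the owner id that the TXT registry records for each endpoint. A rough, simplified sketch of that behaviour (illustrative only, field names are made up):

```go
// Illustrative sketch of owner-id filtering: records whose recorded owner does
// not match the configured --txt-owner-id are skipped and never modified by
// this ExternalDNS instance (simplified, field names are made up).
package main

import "fmt"

type record struct {
	DNSName string
	Owner   string // owner id recovered from the matching TXT record, "" if none
}

func ownedRecords(records []record, ownerID string) []record {
	var owned []record
	for _, r := range records {
		if r.Owner != ownerID {
			fmt.Printf("Skipping endpoint %s because owner id does not match, found: %q, required: %q\n",
				r.DNSName, r.Owner, ownerID)
			continue
		}
		owned = append(owned, r)
	}
	return owned
}

func main() {
	recs := []record{
		{DNSName: "app-one.app.acme.local", Owner: "clu06"},
		{DNSName: "55kh5.app.acme.local", Owner: ""}, // created outside this instance
	}
	fmt.Println("records managed here:", len(ownedRecords(recs, "clu06")))
}
```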
Duplicate issue:
EDIT: same behaviour in v0.11.0
Any updates here? Experiencing the same issue.
We are able to see the same issue in our internal testing. @skudriavtsev is looking into this and a possible fix.
@skudriavtsev raised a PR for this, https://github.com/kubernetes-sigs/external-dns/pull/2755, which is under review.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened: ExternalDNS is constantly deleting and then recreating records. Provider: Infoblox.
What you expected to happen: a record should be added once; if the ingress resource is deleted from k8s, it should then be deleted from DNS.
How to reproduce it (as minimally and precisely as possible): I used the following args: interval: "1m", logLevel: debug, logFormat: text, policy: upsert-only, registry: "txt", txtPrefix: "ing", txtSuffix: "", txtOwnerId: "kcc-ing"
Anything else we need to know?:
Environment:
- External-DNS version (use external-dns --version): 0.8.0; it is reproducible on earlier versions as well
- DNS provider: infoblox
- Others: logs:
time="2021-08-05T08:42:31Z" level=debug msg="Endpoints generated from ingress: test/demo: [demo.test..com 0 IN A 10.10.10.10 [] demo.test..com 0 IN A 10.10.10.10 []]"
time="2021-08-05T08:42:31Z" level=debug msg="Removing duplicate endpoint demo.test.***.com 0 IN A 10.10.10.10 []"