kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

TXT registry with apex records #692

Closed carlpett closed 4 years ago

carlpett commented 6 years ago

We're having an issue with external-dns not creating the apex record for our zone because other TXT records already exist on it. This is a pretty common case: many applications and protocols work by defining a TXT record on the zone root (SPF, site ownership checks, etc.). Currently, external-dns bails out with these logs:

time="2018-08-27T09:27:04Z" level=debug msg="Skipping endpoint our-domain.tld 0 IN A 1.2.3.4 because owner id does not match, found: \"\", required: \"default\""
time="2018-08-27T09:27:04Z" level=debug msg="Skipping endpoint our-domain.tld 300 IN TXT v=spf1 include:spf.protection.outlook.com -all because owner id does not match, found: \"\", required: \"default\""
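For context, external-dns decides whether it owns an endpoint by looking for a registry TXT record whose value follows the documented `heritage=external-dns,external-dns/owner=...` format. The following is a minimal Python sketch of that matching logic (the real implementation is Go inside external-dns, and `parse_owner` is a name invented here for illustration); it shows why an unrelated apex TXT record such as SPF yields `found: ""` in the log above:

```python
def parse_owner(txt_value: str) -> str:
    """Return the owner id from a registry TXT value, or "" when the
    value is not an external-dns registry record (e.g. an SPF record).
    An empty owner is what produces the `found: ""` skip in the logs."""
    for part in txt_value.strip('"').split(","):
        key, _, value = part.partition("=")
        if key == "external-dns/owner":
            return value
    return ""

print(parse_owner('"heritage=external-dns,external-dns/owner=default"'))  # default
print(parse_owner('"v=spf1 include:spf.protection.outlook.com -all"'))    # (empty)
```

Since the apex already holds non-registry TXT values, the owner lookup comes back empty and external-dns refuses to touch the A record.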

Adding the required ownership record manually also does not seem to work (though it occurs to me I may just have gotten unlucky in testing, and it would sometimes have worked if the returned record order is random?).

Additionally, setting --txt-prefix does not appear to have an effect on apex records? It still seems to read from the apex TXT record, rather than myprefix.our-domain.tld.
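If --txt-prefix behaved here as documented, the registry record would live at a prefixed name instead of colliding with the apex TXT set. A rough sketch of the expected name mapping, assuming the prefix is simply prepended to the endpoint name (which is what the flag's documentation describes; `registry_name` is an illustrative helper, not external-dns code):

```python
def registry_name(endpoint_name: str, txt_prefix: str = "") -> str:
    """Name at which the ownership TXT record is expected to be stored
    when --txt-prefix is set (simplified model of documented behavior)."""
    return f"{txt_prefix}{endpoint_name}"

# With a prefix, the registry record should no longer share a name with
# the apex SPF / site-verification TXT records:
print(registry_name("our-domain.tld"))               # our-domain.tld
print(registry_name("our-domain.tld", "myprefix."))  # myprefix.our-domain.tld
```

The report above suggests the apex is not being mapped through this prefix, which would explain why the SPF record still collides.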

jwatte commented 6 years ago

FWIW, I'm trying to find a work-around for this too. I want my website example.com and www.example.com to both be handled by a server running in my kube cluster. I use external-dns to manage DNS for the cluster. (The cluster in turn runs in AWS.) But my email domain example.com is of course protected by an SPF TXT record. As a work-around, I can manually jam the right A record into DNS, but as soon as something needs updating, this will be fragile and fall apart.

The-Loeki commented 6 years ago

duplicate of #449

Nuxij commented 6 years ago

I see this issue the other way. I am such a huge fan of external-dns specifically because it uses A records, allowing easy updates for the apex!

Now I want to add an SPF to my existing external-dns TXT record, but it doesn't like to share. I have some ideas:

Edit: I wasn't aware of the --txt-prefix option. Shame it doesn't work? That should basically solve this!

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

jwatte commented 5 years ago

/lifecycle active

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

frittentheke commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

frittentheke commented 4 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

frittentheke commented 4 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

itskingori commented 4 years ago

/remove-lifecycle stale

seanmalloy commented 4 years ago

/kind bug
/triage duplicate

As previously mentioned, this might be a duplicate of #449. We might want to consider closing this one in the future.

itskingori commented 4 years ago

/close

k8s-ci-robot commented 4 years ago

@itskingori: You can't close an active issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/external-dns/issues/692#issuecomment-673979558):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

seanmalloy commented 4 years ago

/close

k8s-ci-robot commented 4 years ago

@seanmalloy: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/external-dns/issues/692#issuecomment-674067595):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.