kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

Allow multiple A records for the same domain from different external-dns instances. #1441

Open fore5fire opened 4 years ago

fore5fire commented 4 years ago

What would you like to be added: Allowing multiple A records to be set from different sources that are managed by different external-dns instances.

For some background, I'm trying to create A records from services of type LoadBalancer in different clusters, but it seems that currently (v0.6.0) the only way to specify multiple IP addresses for a single DNS name is to include them all as targets in a single DNSEndpoint, which is not an option when services are running in different clusters using different instances of external-dns. When I attempt to do this, only one of the records is created and then the logs report level=info msg="All records are already up to date" across all instances.

Why is this needed: Allowing multiple A records per domain enables failover across clusters with minimal configuration, and is especially useful where inter-region load balancers aren't available, such as with DigitalOcean or on-prem. The IP addresses of the load balancers or ingresses are only visible from their respective clusters, so they cannot all be consolidated into a single DNSEndpoint resource without custom automation that would need resource-inspection permissions across clusters.
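For reference, the single-resource workaround looks roughly like the sketch below: a DNSEndpoint (consumed via --source=crd) that lists every cluster's load balancer IP as a target. The hostname and IPs are placeholders. It only works if one cluster knows all of the addresses, which is exactly the limitation described above.

    apiVersion: externaldns.k8s.io/v1alpha1
    kind: DNSEndpoint
    metadata:
      name: multi-cluster-app
    spec:
      endpoints:
      - dnsName: app.example.com
        recordType: A
        recordTTL: 60
        targets:          # every cluster's IP must be known to this one resource
        - 203.0.113.10    # load balancer in cluster A
        - 203.0.113.20    # load balancer in cluster B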

caviliar commented 4 years ago

I have the same use case, but currently cannot find a workaround to get this behaviour working.

With external-dns in debug mode, we get a message from the second external-dns instance that it cannot add the A record because it is not the owner.

If something like the TXT record's value were keyed by each external-dns instance's txt-owner-id, then each instance could maintain and store the records associated with its own cluster, so that multiple external-dns instances from multiple clusters could all maintain records for x.foo.bar.
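For illustration, the TXT registry today writes a single ownership record per DNS name, with a value along these lines (hostname and owner id are placeholders):

    x.foo.bar  TXT  "heritage=external-dns,external-dns/owner=cluster-a,external-dns/resource=service/default/my-svc"

An owner-keyed scheme as suggested above would need something like one ownership record per txt-owner-id; this is purely hypothetical and not current behaviour:

    cluster-a.x.foo.bar  TXT  "heritage=external-dns,external-dns/owner=cluster-a,..."
    cluster-b.x.foo.bar  TXT  "heritage=external-dns,external-dns/owner=cluster-b,..."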

jonpulsifer commented 4 years ago

I would also like to play with this at work, and at home.

This issue has been around for a while, @njuettner @Raffo do you have any thoughts?

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

seanmalloy commented 4 years ago

/remove-lifecycle stale

savitha-ping commented 4 years ago

Is this being considered? It is very useful to have in multi-region deployments (i.e. multiple Kubernetes clusters) when using service discovery protocols such as JGroups DNS_PING. Would appreciate adding this feature! :)

kespinola commented 4 years ago

I am also interested in this feature to help safely rollover traffic to a new cluster.

I would like external-dns to run in both the current and incoming clusters and attach to their respective gateways (we are using Istio). The incoming cluster and current cluster should contribute to the same record in Route 53 but with independent weights. For example, start by responding to 10% of DNS queries with the IP of the incoming Istio ingress load balancer and the rest with the current load balancer. This requires the DNS provider to support weighted entries, which Route 53 does, but I'm not sure about others.

I am happy to help make this contribution if it is desired by the maintainers. I'd also love to hear other methods for achieving the same incremental rollout of services from one cluster to another.
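A rough sketch of what that weighted setup could look like, assuming the Route 53 routing-policy annotations from the external-dns AWS tutorial (the hostname, weight, and set identifier below are placeholders; each cluster would use its own values):

    apiVersion: v1
    kind: Service
    metadata:
      name: istio-ingressgateway
      annotations:
        external-dns.alpha.kubernetes.io/hostname: app.example.com
        # each cluster uses a distinct set identifier for its Route 53 record set
        external-dns.alpha.kubernetes.io/set-identifier: incoming-cluster
        # weighted routing: answer ~10% of queries with this cluster's load balancer
        external-dns.alpha.kubernetes.io/aws-weight: "10"
    spec:
      type: LoadBalancer
      ports:
      - port: 443
        targetPort: 8443
      selector:
        app: istio-ingressgateway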

rsaffi commented 3 years ago

One more use-case for this right here! :hand: And same reason: safe rollout of the service on different clusters (different providers, even), so multiple external-dns instances.

mamiu commented 3 years ago

We're looking for the same thing (multiple A records per domain) but for another purpose: we'd like to use round-robin DNS in our cluster. Ports 80 and 443 of every node are exposed to the public and can be used as an entry point for all routes (handled by ingress-nginx, as described here).

Or is this already a feature that can be enabled via configuration?

povils commented 3 years ago

Same here: we have many short-lived clusters, and external-dns seems like it would be a good fit for automating the DNS records for our API gateways.

CRASH-Tech commented 3 years ago

Also need this feature

buzzsurfr commented 3 years ago

I'm looking at contributing to this issue (since I'm also interested in it), but wanted to discuss the experience before working on it.

I'm specifically focusing on the aws-sd provider (but will also test the txt provider). When I created a new service in cluster-0 called nginx, the Cloud Map Service used this for the Description field:

heritage=external-dns,external-dns/owner=cluster-0,external-dns/resource=service/default/nginx

Would it make sense to have an annotation on the k8s Service resource specifying it as a "shared" resource? That way, if both k8s clusters agree that the resource is shared, they will use a different behavior model and not overwrite each other's records (Cloud Map Service Instances).

For each record (Service Instance), I was thinking of adding Custom attributes for heritage, owner, and resource, and each external-dns instance would be responsible for updating the records if it's the owner.

There are a few operational checks that would need to exist around the Cloud Map Service resource (e.g. not deleting the service if other external-dns instances still have records in it).
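A rough sketch of the annotation idea (the "shared" annotation below is hypothetical and does not exist in external-dns today; the hostname is a placeholder):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
      annotations:
        external-dns.alpha.kubernetes.io/hostname: nginx.example.com
        # hypothetical: both clusters mark the resource as shared, so each
        # instance only manages its own record and leaves the others alone
        external-dns.alpha.kubernetes.io/shared: "true"
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: nginx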

Any thoughts/opinions?

sagor999 commented 3 years ago

@buzzsurfr would be really cool if you could implement this feature! Some thoughts:

Looking forward to checking out the merge request, as I am curious how it will be implemented.

k8s-triage-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

mamiu commented 3 years ago

/remove-lifecycle stale

CRASH-Tech commented 3 years ago

We could add the assigned IP to the TXT record; then each external-dns instance would know which record is its own.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rifelpet commented 2 years ago

/remove-lifecycle stale

Rock981119 commented 2 years ago

I have the same requirement; is this feature under development?

When annotating a NodePort Service, external-dns can add multiple A records at the same time. However, if Ingresses with the same domain name are published separately through different IngressClasses, only the A record from the first Ingress is updated.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

jonpulsifer commented 2 years ago

/remove-lifecycle stale

evandam commented 2 years ago

Just adding that it would be great to see a solution for this. I have a similar use case where we have blue/green EKS clusters, both running ExternalDNS, and they would otherwise try to overwrite each other's Route 53 records.

lucasmellos commented 2 years ago

I'm having the same issue as well. It's important for multi-cluster architectures.

Eramirez33 commented 2 years ago

Hi everyone,

The workaround I applied on multiple clusters for the same domain was to pass the arg --txt-owner-id:

For cluster A:

        args:
        - --source=service
        - --source=ingress
        - --domain-filter=midomain.ai # (optional) limit to the midomain.ai zone; change to match the zone created above.
        - --provider=cloudflare
        - --cloudflare-proxied
        - --log-level=debug
        - --txt-owner-id=cluster-a # change the owner id here

For cluster B:

        args:
        - --source=service
        - --source=ingress
        - --domain-filter=midomain.ai # (optional) limit to the midomain.ai zone; change to match the zone created above.
        - --provider=cloudflare
        - --cloudflare-proxied
        - --log-level=debug
        - --txt-owner-id=cluster-b # change the owner id here

With this I was able to run external-dns for my midomain.ai domain in multiple clusters: since the --txt-owner-id is different for each cluster, the ownership error no longer occurs.

For example, I use it with Cloudflare, and the TXT entries it creates for each cluster look like this:

Cluster A:

"heritage=external-dns,external-dns/owner=cluster-a,external-dns/resource=ingress/argocd/ingress-argocd"

Cluster B:

"heritage=external-dns,external-dns/owner=cluster-b,external-dns/resource=ingress/staging/stg-staging"

I hope this is helpful.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

steache commented 1 year ago

/remove-lifecycle stale

parkedwards commented 1 year ago

Is there any known workaround for this? I attempted @Eramirez33's approach, but I'm still getting this conflict even though both external-dns instances have unique owner IDs:

Skipping endpoint...because owner id does not match...

gabrieloliveers commented 1 year ago

@Eramirez33 You saved my life, bro!!!

jbg commented 1 year ago

--txt-owner-id as suggested by @Eramirez33 works for the case of different external-dns instances managing different DNS names in the same zone. It doesn't work for different external-dns instances managing multiple A records for the same DNS name in the same zone, which was the original subject of this issue.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

nitrocode commented 1 year ago

/remove-lifecycle stale

airhorns commented 1 year ago

I encountered another use case for this (I think): DNS round-robin between two different ingress controllers. We're trying to switch our ingress controller to a new stack, and we'd like to slowly move traffic from one to the other. We have two Ingresses in the same cluster with the same hostname but different ingress classes, and we expected external-dns to create two A records, one for each. It doesn't right now; it seems the first one wins. But if we could set up two external-dns instances, we could work around this.
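For concreteness, the setup looks roughly like this (the hostname and class names are placeholders); today only one of these ends up with an A record:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-old
    spec:
      ingressClassName: nginx-old        # existing controller
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-new
    spec:
      ingressClassName: nginx-new        # new controller stack
      rules:
      - host: app.example.com            # same hostname; we expected a second A record
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80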

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rifelpet commented 9 months ago

/remove-lifecycle stale

vhurtevent commented 9 months ago

Hello, here is another use case: multiple clusters, each running external-dns to provide DNS records, and Thanos for centralized metrics. Thanos can use DNS service discovery with a single DNS name to point at the Thanos sidecar in each cluster as a target.

In our situation, only the first cluster was able to create the A record for the Thanos sidecar; the others complain about not being the record's owner.

As I understand it, external-dns can only work with a single source of truth, which is its own Kubernetes cluster; the TXT ownership records act only as locks.

I am curious how you manage this limitation. In our situation, as clusters are spawned using Terraform, it would be possible to manage the Thanos A records with Terraform, but that lacks automatic updates when a service's load balancer address changes. Maybe with Kyverno we could manage a centralized DNSEndpoint resource with multiple targets.
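For context, Thanos Query discovers the sidecars through a single DNS name, so that one record needs to return an address in every cluster. A minimal sketch, assuming the dns+ service-discovery prefix and a placeholder hostname:

    # Thanos Query args (sketch): the single DNS name below must resolve to the
    # sidecar address in every cluster for all of them to be discovered.
    args:
    - query
    - --store=dns+thanos-sidecar.example.com:10901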

mimmus commented 8 months ago

Interested in this feature to switch an Ingress from one cluster to another.

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

nitrocode commented 5 months ago

/remove-lifecycle stale

pabclsn commented 5 months ago

This feature would be really appreciated for DNS Load Balancing :)

k0da commented 5 months ago


We solved a similar case (the multi-cluster Thanos setup described by @vhurtevent above) this way:

brutog commented 5 months ago

I would find this feature incredibly useful as well. Maybe I have 2 clusters and I want them load balanced.

Maybe I have an EKS and a GKE cluster, and I want traffic routed 70/30.

Maybe I have many clusters across many clouds and would like a single provider (e.g. Route 53) to do multi-cluster DNS for all of them using geolocation, so clients get the closest cluster.

panditha commented 4 months ago

We are also looking for this feature to load balance traffic across multiple clusters.

evandam commented 4 months ago

I believe all of this can be done with the proper annotations, depending on the DNS provider. For Route 53, for example: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#routing-policies
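For the geolocation case mentioned above, a minimal sketch assuming the routing-policy annotations from that tutorial (the hostname, country code, and set identifier are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: app
      annotations:
        external-dns.alpha.kubernetes.io/hostname: app.example.com
        # each cluster uses its own set identifier for its Route 53 record set
        external-dns.alpha.kubernetes.io/set-identifier: eks-us
        # geolocation routing: answer queries from US clients with this cluster
        external-dns.alpha.kubernetes.io/aws-geolocation-country-code: US
    spec:
      type: LoadBalancer
      ports:
      - port: 443
      selector:
        app: my-app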

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

Xe commented 2 weeks ago

Hi, this affects me too. I'd like to have the same domain (xeiaso.net) served from multiple clusters under the same DNS name in the same DNS zone.

@evandam

I believe all of this can be done with proper annotations

I have tried annotations; it failed.

Xe commented 2 weeks ago

/remove-lifecycle stale