kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

Manage multiple zones with single ExternalDNS deployment using CRDs #1961

Open tsutsu opened 3 years ago

tsutsu commented 3 years ago

What would you like to be added:

I propose that ExternalDNS be extended with a distinct operational mode in which a single cluster-wide deployment reads its provider and zone configuration from dedicated CRDs rather than from per-deployment command-line flags.

A DNSZoneBinding resource could contain a spec with fields like the ones sketched below.
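
For illustration only, a DNSZoneBinding might look something like this. Every field name here is hypothetical; nothing like this exists in ExternalDNS today:

apiVersion: externaldns.k8s.io/v1alpha1  # hypothetical API group/version for this kind
kind: DNSZoneBinding
metadata:
  name: prod-zone
spec:
  # which provider and zone this binding manages (illustrative field names)
  provider: aws
  zone: prod.example.com
  # per-zone credentials, referenced from the binding instead of the deployment
  credentialsSecretRef:
    name: route53-prod-credentials
    namespace: external-dns
  # restrict which namespaces are allowed to create records in this zone
  namespaceSelector:
    matchLabels:
      team: foo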

Why is this needed:

Right now, a separate deployment of ExternalDNS is needed for each provider+zone configuration.

For example, suppose I have a deployed project foo with two namespaces, foo-prod and foo-staging, where foo-prod contains an ingress with hostname foo.prod.example.com and foo-staging has a similar ingress with foo.staging.example.com. If prod.example.com and staging.example.com are separate zones (with distinct providers, or under distinct accounts in the same provider), then I need to deploy ExternalDNS twice, once for each namespace.

Obviously, as well, if there are multiple tenants in a k8s cluster, each of them must run their own ExternalDNS deployment(s). With a lot of tenants, the overhead of this can add up!

I would much prefer that ExternalDNS adopt (or offer as an option) a model similar to cert-manager's: a single cluster-wide controller deployment that is then "virtualized" by controller-configuration resources (Issuer and ClusterIssuer resources, in cert-manager's case), which tell it what configuration to use when working with the resources that reference them.
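
For comparison, cert-manager's controller-configuration resource looks roughly like this (a trimmed-down ClusterIssuer using the real cert-manager.io/v1 API; Certificate resources then reference it by name via issuerRef):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com  # placeholder address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1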

Conveniently, since ExternalDNS already watches Service/Ingress/Endpoint resources for changes, it already has all the mechanisms in place needed to watch these controller-configuration resources for changes as well.

rumstead commented 3 years ago

+1, being able to reduce the number of external-dns deployments would be awesome. Additionally, being able to specify different keys per zone (as you mentioned in the CRD spec) would be a must-have, so that if a zone was overwhelming the DNS server, its key could be revoked.

sgreene570 commented 3 years ago

This is a very interesting proposal @tsutsu!

@Raffo @seanmalloy do you have any specific thoughts on this?

apex-omontgomery commented 3 years ago

This would be very helpful.

In our clusters we want to independently control the following variables:

This is very helpful for the following cases:

Our approach to this is a little wonky, as it requires duplication in most cases, but it allows users to opt in. We expose a series of annotations that a consumer must use to indicate how they want their records created. Imagine something like:

annotations:
  traffic.company.com/dns.google.ingress: region1.company.com

These annotations are then matched by the --annotation-filter argument.
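
For context, --annotation-filter is a real ExternalDNS flag that applies label-selector semantics to annotations; wiring it to the illustrative annotation above would look roughly like this (container args only):

args:
  - --source=service
  - --source=ingress
  - --provider=google
  # only act on resources whose annotation matches this selector
  - --annotation-filter=traffic.company.com/dns.google.ingress=region1.company.com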

The biggest drawback of this method is that we often duplicate values, or consumers are unaware of these requirements.

apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: ns1
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    external-dns.alpha.kubernetes.io/hostname: service1.region1.company.com
    traffic.internal.apexclearing.com/dns.google.service: region1.company.com
  labels:
    app: cloudsql-proxy
spec:
  type: LoadBalancer
  selector:
    app: cloudsql-proxy
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP

sgreene570 commented 3 years ago

> Our approach to this is a little wonky, as it requires duplication in most cases, but it allows users to opt in. We expose a series of annotations that a consumer must use to indicate how they want their records created. Imagine something like:

Thanks for your input @wimo7083. Curious, how many instances of ExternalDNS are you running in parallel for your setup?

apex-omontgomery commented 3 years ago

In most clusters we run between 2 and 6 instances. We used multiple clusters initially as a way to get around our lack of multi-tenancy, and as we add better controls we can consolidate clusters.

One of the problems we've found when adding a new dimension (e.g. going from $sub-service-$service.$region.$company.com to $sub-service.$service.$region.$company.com) is that the dimensions that would reduce the blast radius the most also add the most maintenance overhead.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

seanmalloy commented 2 years ago

/remove-lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

mboutet commented 2 years ago

/remove-lifecycle rotten

Kaelten commented 2 years ago

Ran across this as I'm trying to figure out the best way to set up a dual configuration between Cloudflare and Route53.

The idea being that I want name.production.company.net on Route53 and www.product.com on Cloudflare, with Cloudflare proxying to the other domain cleanly.

henninge commented 2 years ago

Hi @Kaelten! As this issue explains, you'd currently have to create two instances of ExternalDNS for your situation, each with the respective provider configuration and domain filter set (see the sketch below).
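
A rough sketch of that two-instance workaround, using the zone names from the comment above (container args only; the flags are real ExternalDNS flags, the values illustrative):

# instance 1: manages the Route53 zone
args:
  - --provider=aws
  - --domain-filter=production.company.net
  - --txt-owner-id=external-dns-aws

# instance 2: manages the Cloudflare zone
args:
  - --provider=cloudflare
  - --domain-filter=product.com
  - --txt-owner-id=external-dns-cloudflare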

Although I am not sure what you mean by "proxy to the other domain". If this is just a CNAME, you might be fine with a static configuration in Cloudflare (e.g. via Terraform) and just an ExternalDNS instance for Route53. If you are talking about an actual reverse HTTP proxy, your setup question is beyond the scope of ExternalDNS (and this ticket).

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

DerEnderKeks commented 2 years ago

/remove-lifecycle stale

sagikazarmark commented 1 year ago

Running multiple instances is an acceptable workaround for some cases, but it would be nice if we could get away with a single instance supporting multiple configurations.

In addition to the above, adding support for namespace separation of different configurations would also be nice (e.g. making sure that only authorized namespaces can use a specific zone in a multi-tenant cluster).

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

DerEnderKeks commented 1 year ago

/remove-lifecycle stale

stevehipwell commented 1 year ago

This kind of multi-tenancy pattern would be really useful and would, IMHO, align well with the use of the Gateway API.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

DerEnderKeks commented 1 year ago

/remove-lifecycle stale

sabinayakc commented 1 year ago

Would love to see this feature.

I have two hosted zones: public and private. Being able to apply changes to both hosted zones with a single deployment would be great.
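
For the AWS case specifically, the current workaround is still two deployments, split with the --aws-zone-type filter (a real ExternalDNS flag; args abbreviated):

# deployment 1: only touch public hosted zones
- --provider=aws
- --aws-zone-type=public

# deployment 2: only touch private hosted zones
- --provider=aws
- --aws-zone-type=private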

KTamas commented 1 year ago

Not a competition, but we have way more hosted zones and having a separate external-dns instance for each is a pain. Not a major pain, but still a pain.

anthonysomerset commented 1 year ago

For me, I can see the use when operating a split-DNS setup with an internal zone and an external zone; we have a few edge cases where both zones need the same data.

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

rumstead commented 7 months ago

/remove-lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

DerEnderKeks commented 4 months ago

/remove-lifecycle stale

nikolaiderzhak commented 4 months ago

Another use case is when multiple managed zones live in different Azure subscriptions or AWS accounts, so you need to assume roles, etc. cert-manager handles this pretty well.
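
On AWS this is currently handled per deployment via role assumption, which is exactly why cross-account setups multiply instances. ExternalDNS exposes this as the --aws-assume-role flag (a real flag; the ARN below is illustrative):

- --provider=aws
# assume a role in the account that owns the hosted zone
- --aws-assume-role=arn:aws:iam::123456789012:role/external-dns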

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

DerEnderKeks commented 1 month ago

/remove-lifecycle stale