kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0
7.58k stars 2.54k forks

Moving providers out of tree #4347

Open Raffo opened 5 months ago

Raffo commented 5 months ago

Over the course of many years, we have added a lot of providers, some of which are no longer maintained or developed. This creates the expectation among users that we can fix everything or make those providers well supported. With the introduction of the webhook mechanism we stopped adding new providers, which has given other companies and DNS providers the possibility to integrate with ExternalDNS without having to add new code in tree, essentially also decoupling them from this project's release cycle. The in-tree providers also bring other challenges, such as the very frequent dependency updates required by their libraries. I propose that we start moving all the providers out of tree, with the exception of the stable providers, for now. I'd love to start with the unmaintained ones, then move group by group until we reach the AWS, Azure and Google ones, which are the most maintained; for those we should understand the impact on the usability of the project, or even on cloud provider offerings, before moving them out.

Raffo commented 5 months ago

Notifying @seanmalloy @vinny-sabatini @alejandrojnm @saileshgiri @Sh4d1 @packi @assureddt @hughhuangzh @Hyzhou @michaeljguarino @tinyzimmer who are listed as maintainers/contributors of existing providers.

hughhuangzh commented 5 months ago

@Raffo what's the standard for deciding which providers are stable and which are unmaintained?

Raffo commented 5 months ago

@hughhuangzh what do you mean by "Standard"? Can you clarify your question?

hughhuangzh commented 5 months ago

@Raffo I mean: how do we judge whether a provider is stable?

szuecs commented 5 months ago

@hughhuangzh https://github.com/kubernetes-sigs/external-dns?tab=readme-ov-file#status-of-providers

hans-m-song commented 5 months ago

Hi, given the webhooks will be the primary mechanism moving forward, could there be some consideration for https://github.com/kubernetes-sigs/external-dns/discussions/4230? Would be much appreciated.

Raffo commented 5 months ago

@hans-m-song thanks for the nudge to link to the discussion; I definitely didn't see it there, the discussions/issues dual source isn't the easiest to deal with 😅. Let me answer you there.

EDIT: asked to convert the discussion there to an issue with a proposal.

costinm commented 4 months ago

Any chance the 'out of tree' providers can be set up as additional containers at install time (by adding a helm install option that takes the image)? That would avoid all the mess of setting up certificates and keys and of maintaining/upgrading two different apps, in particular if any change is made in the API.

Also, it may be worth defining how stable the webhook API is. There are broadly adopted standards for how to encode DNS zones and entries; it may be worth using them in both the webhook API and DNSEndpoint. The naming in the API is not very aligned with DNS terms and is closer to Kubernetes conventions.
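The sidecar idea above can be sketched as Helm values: the provider container runs in the same pod as external-dns, so the two talk over the pod's loopback interface and no certificates or cross-service networking are needed. All field names and images below are hypothetical; the real chart's webhook support may differ:

```yaml
# Hypothetical values.yaml sketch for the external-dns Helm chart.
provider: webhook
extraContainers:
  - name: my-dns-provider          # hypothetical sidecar name
    image: example.com/my-dns-provider:v0.1.0   # hypothetical image
    ports:
      - containerPort: 8888        # webhook listen port (assumed default)
```

With this shape, upgrading the provider means bumping one image tag in the same release, which addresses the "maintaining/upgrading 2 different apps" concern.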

Raffo commented 4 months ago

Any chance the 'out of tree' providers can be set as additional containers at install

Where do you think the code for those should live? I considered several approaches, but if we keep all the source in this repo, even if we stop linking it into the same binary as external-dns, we are not solving anything, just making life more complicated for users.

Also it may be worth defining how stable is the webhook API

This is of course a requirement before moving anything out of tree. At the time of writing, we still consider the webhook "experimental", although we've seen quite a bit of adoption, successful as far as I can tell. Note that we collect zero analytics on installations, so we don't really have tons of data on our users and we have to work with the info that we have.

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

Mark this issue as fresh with /remove-lifecycle stale
Close this issue with /close
Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

frittentheke commented 1 month ago

/remove-lifecycle stale

costinm commented 1 month ago

I have been playing around with this. I have a small fork with no providers and mostly no sources (costinm/dns-sync) and a forked GCP out-of-tree provider (costinm/dns-sync-gcp). The idea is to have a stable interface and to host the provider (and source) as separate services, with some NetworkPolicy or mesh policy handling access and security.

It seems to be working relatively well so far.

The main benefit is that the 'sync' part has minimal dependencies.

I noticed there is already a remote source in the tree, but since source and provider share the same base 'get all endpoints' operation, it is relatively easy to abstract a source using the same JSON interface. In my experiment I'm looking to do 2-way sync, i.e. create in-cluster objects (ServiceEntry) based on the DNS entries, as well as keep ownership in cluster resources instead of an external database or TXT records. It's mostly a proof of concept of Istio DNS syncing, using the providers from external-dns.

mloiseleur commented 2 weeks ago

I took the time to look at which providers are unmaintained.

Here is a first list:

@Raffo @szuecs Wdyt of adding a deprecation warning on those providers for the next release? And then removing them after the release? On the list, any provider to add or remove?

Raffo commented 2 weeks ago

@mloiseleur I'm on board with that. I'm happy to approve a PR with the deprecation list.

mrueg commented 1 week ago

The upstream project for RancherDNS/RDNS got archived, so this is also likely a provider that can be removed.

pschichtel commented 1 week ago

This project might be of interest to people migrating: https://github.com/yxwuxuanl/cert-manager-lego-webhook. Sadly it's unmaintained, but it can serve as a template for using lego.