kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Create an external name service upon cluster creation #6477

Closed · voor closed this issue 1 year ago

voor commented 2 years ago

User Story

Creating this:

apiVersion: v1
kind: Service
metadata:
  labels:
    cluster: bob
    cluster.x-k8s.io/cluster-name: bob
  name: bob-controlplane-endpoint
  annotations:
    external-dns.alpha.kubernetes.io/hostname: bob.kubernetes.example.com 
spec:
  externalName: bob-cluster-apiserver-12345678.us-east-1.elb.amazonaws.com # contents of `spec.controlPlaneEndpoint.host`
  selector: # note: Kubernetes ignores selectors on type: ExternalName Services
    cluster: bob
    cluster.x-k8s.io/cluster-name: bob
  type: ExternalName

This Service should be created whenever a new cluster is created, and mutated as the underlying provider changes that field.
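
As a rough illustration of the idea (not an existing CAPI component), a controller built with controller-runtime could keep such a Service in sync with `spec.controlPlaneEndpoint.host`; the `EndpointServiceReconciler` name and wiring below are assumptions made for this sketch:

package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// EndpointServiceReconciler (hypothetical) mirrors spec.controlPlaneEndpoint.host
// into an ExternalName Service named <cluster>-controlplane-endpoint.
type EndpointServiceReconciler struct {
	client.Client
}

func (r *EndpointServiceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	cluster := &clusterv1.Cluster{}
	if err := r.Get(ctx, req.NamespacedName, cluster); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// The infrastructure provider may not have set the endpoint yet.
	if cluster.Spec.ControlPlaneEndpoint.Host == "" {
		return ctrl.Result{}, nil
	}

	svc := &corev1.Service{ObjectMeta: metav1.ObjectMeta{
		Name:      fmt.Sprintf("%s-controlplane-endpoint", cluster.Name),
		Namespace: cluster.Namespace,
	}}
	// CreateOrUpdate covers both cases from the user story: it creates the
	// Service on cluster creation and re-syncs externalName whenever the
	// provider mutates the endpoint.
	_, err := controllerutil.CreateOrUpdate(ctx, r.Client, svc, func() error {
		svc.Labels = map[string]string{"cluster.x-k8s.io/cluster-name": cluster.Name}
		svc.Spec.Type = corev1.ServiceTypeExternalName
		svc.Spec.ExternalName = cluster.Spec.ControlPlaneEndpoint.Host
		// Owner reference so the Service is garbage-collected with its Cluster.
		return controllerutil.SetControllerReference(cluster, svc, r.Scheme())
	})
	return ctrl.Result{}, err
}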

Detailed Description

Based heavily on https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/2902, but a leaner slice that focuses purely on enabling external-dns to work better with Cluster API, moving the feature up to the cluster-api level so it works across providers.

/kind feature

fabriziopandini commented 2 years ago

I'm not sure I fully understand the request. A couple of questions to better understand the use case: Should this behavior be opt-in, or is it expected to be the default behavior for all clusters, whether or not they use external-dns? Also, the external name seems provider-specific; how can this be generalized to other providers?

voor commented 2 years ago

This should definitely be opt-in, and probably behind a feature gate in Cluster API, since the behavior might not be desirable in every cluster.

That being said, it does create a neat little area for Cluster API, since it could actually leverage this Service inside the management cluster as a means to abstract out the server addresses inside the kubeconfig file.

My example uses something provider-specific; it has been a while since I've used other providers, and I'm not sure whether they create anything DNS-related or are purely IP-based, though that wouldn't actually matter.

Here's an example using an IP Address instead of a DNS name:

apiVersion: v1
kind: Service
metadata:
  labels:
    cluster: bob
    cluster.x-k8s.io/cluster-name: bob
  name: bob-controlplane-endpoint
  annotations:
    external-dns.alpha.kubernetes.io/hostname: bob.kubernetes.example.com 
spec:
  externalName: 192.168.1.110 # contents of `spec.controlPlaneEndpoint.host`; note that Kubernetes expects a DNS name here, and IP-like values are not reliably resolved
  selector:
    cluster: bob
    cluster.x-k8s.io/cluster-name: bob
  type: ExternalName

fabriziopandini commented 2 years ago

@yastij ^^

voor commented 2 years ago

This originally stemmed from the motivation that certain IP addresses or DNS names might be extremely difficult to recreate if lost. I'm realizing that if Cluster API actually used this Service name in the kubeconfig and other files, instead of the other way around, you could still achieve this level of resiliency. The downside is that the UX of just grabbing the kubeconfig secret would be a little wonky, since something like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRU[removed for brevity]
    server: https://bob.namespace.svc.cluster.local:6443 # or just bob based on https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
  name: bob

would no longer work outside the cluster without some manipulation.

Edit: Changed cluster name to align with our friend bob example above.

fabriziopandini commented 2 years ago

If I got this right, the intent is to create a layer of indirection between CAPI and the infrastructure load balancer; we should consider the pros and cons.

Given the above, I'm wondering if we are trying to solve the problem at the wrong layer; if what we need is a "reliable" IP address or DNS name, I think a cleaner solution could be achieved if the infrastructure provider created that IP/DNS entry as part of the Cluster infrastructure, so it gets used both in the workload cluster and in the management cluster.
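
To make the comparison concrete, here is a rough provider-side sketch of that alternative; `reconcileEndpoint` and `ensureDNSRecord` are hypothetical stand-ins for an infrastructure provider's reconcile logic and DNS integration (e.g. Route 53 on AWS), not existing provider code:

package infraprovider

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

// reconcileEndpoint upserts a stable DNS name pointing at whatever load
// balancer the provider currently manages, and reports only the stable
// name as the control plane endpoint.
func reconcileEndpoint(stableName, lbHostname string) (clusterv1.APIEndpoint, error) {
	if err := ensureDNSRecord(stableName, lbHostname); err != nil {
		return clusterv1.APIEndpoint{}, err
	}
	// The management cluster and every generated kubeconfig see only
	// stableName, so replacing the load balancer does not invalidate them.
	return clusterv1.APIEndpoint{Host: stableName, Port: 6443}, nil
}

// ensureDNSRecord is a placeholder for the provider's DNS API call
// (for example, upserting a CNAME from name to target).
func ensureDNSRecord(name, target string) error {
	return nil // the real implementation would call the cloud DNS service
}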

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

fabriziopandini commented 1 year ago

/triage accepted
Waiting for more feedback on the direction this issue should take.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api/issues/6477#issuecomment-1368553533):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.