Closed 1 year ago
I'm not sure I fully understand the request. A couple of questions to better understand the use case: Should this behavior be opt-in, or is it expected to be the default for all clusters, whether or not they use external-dns? The external name seems provider-specific; how can this be generalized to other providers?
This should definitely be opt-in and probably behind a feature gate in Cluster API, since the behavior might not be preferable.
That being said, it does create a neat little area for Cluster API, since it could actually leverage this Service inside the management cluster as a means to abstract out the server addresses inside the kubeconfig file.
My example is using something provider-specific; it has been a while since I've used other providers and I'm not sure if they're creating anything DNS-related or if they're purely IP-based, which wouldn't actually matter.
Here's an example using an IP Address instead of a DNS name:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    cluster: bob
    cluster.x-k8s.io/cluster-name: bob
  name: bob-controlplane-endpoint
  annotations:
    external-dns.alpha.kubernetes.io/hostname: bob.kubernetes.example.com
spec:
  externalName: 192.168.1.110 # contents of `spec.controlPlaneEndpoint.host`
  selector:
    cluster: bob
    cluster.x-k8s.io/cluster-name: bob
  type: ExternalName
```
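One caveat on the IP variant: the Kubernetes documentation describes `ExternalName` Services as being published as CNAME records and discourages putting an IP literal in `externalName` (and an `ExternalName` Service ignores any `selector`). For a raw IP, the documented selector-less Service plus a manually managed Endpoints object may fit better; a hedged sketch reusing the bob names from above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bob-controlplane-endpoint # assumed name, matching the example above
spec:
  ports:
  - port: 6443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bob-controlplane-endpoint # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.110 # contents of `spec.controlPlaneEndpoint.host`
  ports:
  - port: 6443
```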
@yastij ^^
This originally stemmed from the motivation that certain IP addresses or DNS names might be extremely difficult to recreate if lost. I'm realizing that if Cluster API actually used this Service name in the kubeconfig or other files, instead of the other way around, you could still achieve this level of resiliency. The downside is that the UX of just grabbing the kubeconfig secret would be a little wonky, since something like this:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRU[removed for brevity]
    server: https://bob.namespace.svc.cluster.local:6443 # or just bob based on https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
  name: bob
```
would now no longer work outside the cluster without some manipulation.
Edit: Changed cluster name to align with our friend bob example above.
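That manipulation could be as simple as rewriting the `server` field back to the real control-plane endpoint before using the file externally. A rough sketch, with the file name and addresses assumed from the bob example above (not real CAPI output):

```shell
# Simulate a kubeconfig that points at the in-cluster Service name
# (trimmed to the relevant fields; "bob.kubeconfig" is an assumed name).
cat > bob.kubeconfig <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://bob.namespace.svc.cluster.local:6443
  name: bob
EOF

# Outside the management cluster, swap the Service DNS name for the
# real endpoint from `spec.controlPlaneEndpoint.host`.
sed -i 's|https://bob.namespace.svc.cluster.local:6443|https://192.168.1.110:6443|' bob.kubeconfig
```

`kubectl config set-cluster bob --server=https://192.168.1.110:6443 --kubeconfig=bob.kubeconfig` would achieve the same thing without sed.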
If I got this right, the intent is to create a layer of indirection between CAPI and the infrastructure load balancer; we should consider the pros and cons.
Given the above points, I'm wondering if we are trying to solve the problem in the wrong layer; if what we need is a "reliable" IP address or DNS name, I think a cleaner solution could be achieved if the infrastructure provider creates such an IP/DNS entry as part of the Cluster infrastructure, so it gets used both in the workload cluster and in the management cluster.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/triage accepted
waiting for more feedback on the direction this issue should take
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
User Story
Creating this:
Whenever a new cluster is created, and mutating it as the underlying provider changes that field as well.
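Presumably "this" refers to a Service like the earlier example in the thread; a condensed sketch (names are from the bob example, not prescribed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bob-controlplane-endpoint
  annotations:
    external-dns.alpha.kubernetes.io/hostname: bob.kubernetes.example.com
spec:
  type: ExternalName
  externalName: 192.168.1.110 # mirrors the Cluster's `spec.controlPlaneEndpoint.host`
```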
Detailed Description
Based heavily on https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/2902 but a leaner slice that focuses purely on enabling external-dns to work better with Cluster API, and moving it up to the cluster-api level to work across providers.
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/kind feature