kubernetes-sigs / cluster-api-provider-openstack

Cluster API implementation for OpenStack
https://cluster-api-openstack.sigs.k8s.io/
Apache License 2.0

Allow changing the DNS Nameservers after initial creation #1603

Closed cwrau closed 4 weeks ago

cwrau commented 1 year ago

/kind feature

Describe the solution you'd like

Currently, when you need to change the DNS servers of a cluster, or rather, of all clusters, you have to manually go into OpenStack, remove the old DNS servers, add the new DNS servers, disable the capo validation webhook, wait for reconciliation, re-enable the webhook, and roll all machines.

We'd like for capo to do these things.

Anything else you would like to add:
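For reference, the manual workaround today looks roughly like the sketch below; the subnet, webhook configuration, MachineDeployment and KubeadmControlPlane names are placeholders, not values from any particular cluster:

```sh
# Remove the old DNS servers and add the new ones directly in OpenStack
openstack subnet set --no-dns-nameservers my-cluster-subnet
openstack subnet set --dns-nameserver 10.0.0.53 --dns-nameserver 10.0.0.54 my-cluster-subnet

# Disable the capo validation webhook (it rejects spec changes), update the cluster
# spec, wait for reconciliation, then re-create the webhook configuration
# (e.g. by re-applying the provider manifests)
kubectl delete validatingwebhookconfiguration capo-validating-webhook-configuration

# Roll all machines so they pick up the new resolvers
clusterctl alpha rollout restart machinedeployment/my-cluster-md-0
clusterctl alpha rollout restart kubeadmcontrolplane/my-cluster-control-plane
```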

mdbooth commented 1 year ago

This sounds like an interesting and desirable feature, but probably hard to implement. I may be wrong, but my guess is that none of our regular contributors are likely to implement this any time soon (read: ever).

However, if you or somebody you know would like to work on it I'd be happy to discuss how it could be implemented.

jichenjc commented 1 year ago

Actually we have struggled with this before when doing OCP enablement.. this sounds like a general question about the process of updating the DNS of an already-deployed server (which had its DNS pre-set during deployment)..? Anyway, I agree it's a very special use case; if someone wants to contribute it, that would be a plus, but I don't think it's high priority anytime soon.

mdbooth commented 1 year ago

From the POV of CAPO I think we'd want to set this on any cluster-created subnet. Do we directly use it anywhere else?

The issue with updating a subnet which already exists is that we don't currently do this or anything like it: cloud resources are immutable after creation. There are use cases where limited mutability might be desirable, though, and this feels like one. Still a major change, and as @jichenjc said, not currently high enough on anybody's radar to be likely to happen.
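In Neutron terms the update itself would be small; roughly the sketch below, where the subnet name and resolver addresses are just examples:

```sh
# Read back the current resolvers on the cluster-created subnet
openstack subnet show my-cluster-subnet -c dns_nameservers

# Overwrite them with the desired set
openstack subnet set --no-dns-nameservers \
  --dns-nameserver 10.0.0.53 --dns-nameserver 10.0.0.54 \
  my-cluster-subnet
```

The hard part is not the API call but deciding when CAPO is allowed to make it.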

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

nikParasyr commented 6 months ago

/remove-lifecycle stale

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

EmilienM commented 1 month ago

Almost a year without real work done here; I wonder if this will really happen or if we could close this one.

nikParasyr commented 1 month ago

So we actually hit this issue a couple of months ago. We had to (maybe the order is not 100% correct):

  1. remove the old DNS servers at the OpenStack level, add the new DNS servers at the OpenStack level
  2. disable the capo validation webhook
  3. update the cluster definitions with the new dns_nameservers
  4. wait for reconciliation, re-enable the webhook
  5. re-roll the machines (we did it with `clusterctl alpha rollout restart -n test machinedeployment/` or `kubeadmcontrolplane/`)

I think having capo do the full process above is quite complicated. But if capo could update and reconcile the dns_nameservers attribute on the subnet, that would (in my understanding) remove steps 1, 2 and 4 of the process above and make it much easier:

  1. update dns_nameservers on cluster definition
  2. roll out new machines

I think this is also a much easier change on the capo side (but not 100% sure).
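To make that concrete, here is a sketch of the simplified flow, assuming the OpenStackCluster spec exposes a dnsNameservers field and using placeholder resource names:

```sh
# 1. Update dns_nameservers on the cluster definition
kubectl patch openstackcluster my-cluster --type merge \
  -p '{"spec":{"dnsNameservers":["10.0.0.53","10.0.0.54"]}}'

# 2. Roll out new machines
clusterctl alpha rollout restart machinedeployment/my-cluster-md-0
clusterctl alpha rollout restart kubeadmcontrolplane/my-cluster-control-plane
```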

mdbooth commented 1 month ago

I remain in favour of this feature. If anybody has time to implement it, I will review it. If not, we should probably allow it to die.

k8s-triage-robot commented 4 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 4 weeks ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1603#issuecomment-2194682797):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.