kubernetes-sigs / cluster-api-provider-openstack

Cluster API implementation for OpenStack
https://cluster-api-openstack.sigs.k8s.io/
Apache License 2.0

[Feature] Support for anti-affinity/affinity rules for the created machines #1378

Closed: geetikabatra closed this issue 7 months ago

geetikabatra commented 2 years ago

/kind feature

OpenStack supports defining anti-affinity/affinity rules for VMs. This feature would add support for the user to specify affinity/anti-affinity grouping for the VMs that CAPO creates.

Use case: The user creates 3 Machine objects and wants all 3 VMs to run on different hosts to improve resiliency against host failures. This can easily be realized by creating an anti-affinity rule for the 3 VMs.

Describe the solution you'd like [A clear and concise description of what you want to happen.]

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

mnaser commented 2 years ago

just for the record, it's possible to kinda do this by using serverGroupID -- however, it would be nice to make server groups a natively managed feature by CAPO.
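For reference, a minimal sketch of that workaround, assuming the v1alpha6 API (field names may differ in other API versions) and a Nova server group created out of band; the resource name and flavor/image values are placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: md-0-workers
spec:
  template:
    spec:
      flavor: m1.large          # example flavor name
      image: ubuntu-2204-kube   # example Glance image name
      # ID of a server group created beforehand, e.g. with
      #   openstack server group create --policy soft-anti-affinity md-0-workers
      serverGroupID: <server-group-uuid>
```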

seanschneeweiss commented 2 years ago

Some progress on this was made in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/118 but this was never finished.

jichenjc commented 2 years ago

> just for the record, it's possible to kinda do this by using serverGroupID -- however, it would be nice to make server groups a natively managed feature by CAPO.

maybe first allow setting this, then make CAPO support it natively (e.g. create the server group on the fly)?

mnaser commented 2 years ago

I think we don't lose anything by actually creating server groups with soft anti-affinity by default, while allowing the user to change the affinity policy if needed.
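As a sketch of what that could look like in the API (`managedServerGroup` here is purely hypothetical, nothing like it exists in CAPO today): CAPO would create and own one server group per template, default the policy to soft-anti-affinity, and let the user override it:

```yaml
# Hypothetical API sketch only -- managedServerGroup is not an existing CAPO field.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: control-plane
spec:
  template:
    spec:
      managedServerGroup:
        # Valid Nova policies: affinity, anti-affinity, soft-affinity, soft-anti-affinity.
        # Omitting the field would give the proposed default, soft-anti-affinity.
        policy: anti-affinity
```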

huxcrux commented 2 years ago

I think there might be multiple scenarios where server groups make sense. I agree that soft anti-affinity would be a good default.

Generally, I assume one group for the control plane and one for all workers would be sufficient for most clusters; however, I could see a few scenarios where a cluster could use a separate server group per machine deployment.

Adding a serverGroupName field would be a good idea: if a group with that name exists, just use it; otherwise create a new group with that name. It would also be nice to somehow record whether the group was created by CAPO, so that we can clean up all the resources we created when removing a cluster.
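A sketch of that idea (`serverGroupName` is hypothetical and does not exist in CAPO): CAPO would look the group up by name, create it only if it is missing, and record somewhere, for example in the cluster status, whether it created the group so it can delete it together with the cluster:

```yaml
# Hypothetical API sketch only -- serverGroupName is not an existing CAPO field.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: workers
spec:
  template:
    spec:
      # Reuse the group if a server group with this name already exists;
      # otherwise CAPO would create it (and remember that it did, so the
      # group can be cleaned up when the cluster is deleted).
      serverGroupName: capi-workers
```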

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

huxcrux commented 1 year ago

/remove-lifecycle stale

nikParasyr commented 1 year ago

I would very much like to see this in CAPO.

Currently we are looking into the CAPI Runtime SDK to automate some manual steps that we have when creating a cluster. Most of them we can automate through it, but ServerGroupID is a bit awkward: we could use lifecycle hooks to create the server groups, but we don't know the ID beforehand, and mutating webhooks need to be very fast, so using them to create the server groups is not a good option.

I would like something like:

  1. Be able to define a ServerGroupName. CAPO would not create the group, but I could use webhooks to create it without the problems above. This would be much easier to implement in CAPO, but would only be useful for people who use the Runtime SDK.
  2. Have something like ServerGroup where you can define the policy and rules and CAPO creates the server group. This would be much easier for users, since CAPO would also take care of creating the server groups, etc. But I think it will be tricky to get right for cases where a user wants to use the same server group across the control plane and node groups, or even across different clusters in the same OpenStack project.
  3. Have a separate controller and CRD OpenStackServerGroup and reference it from OpenStackMachine (a sketch follows below). I think this would be very nice for users, since CAPO would create the server groups, and I think it would remove the problems above. But it is much more work. It is also aligned with the idea of adding more controllers, as mentioned in #1286.
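A rough sketch of option 3; both the `OpenStackServerGroup` kind and the `serverGroupRef` field are hypothetical:

```yaml
# Hypothetical CRD and reference -- a sketch of option 3 above, not an existing API.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackServerGroup
metadata:
  name: workers-anti-affinity
spec:
  policy: soft-anti-affinity
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
  name: workers
spec:
  template:
    spec:
      # Resolved by the (hypothetical) OpenStackServerGroup controller, which
      # would create the Nova server group and publish its ID in the status.
      serverGroupRef:
        name: workers-anti-affinity
```

Sharing one group across the control plane and several machine deployments, or across clusters in the same project, would then be a matter of referencing the same object.
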
EmilienM commented 1 year ago

Is this the same feature as requested in https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1256? If yes, I'll close this one to avoid duplication.

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/1378#issuecomment-2067728430):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.