kubernetes / cloud-provider-openstack

[occm] The nodeSelector key is not empty in the helm chart in release v1.29.0 #2550

Closed: babykart closed this issue 2 weeks ago

babykart commented 6 months ago

This is a BUG REPORT:

/kind bug

What happened: Since release v1.29.0, the nodeSelector key in the helm chart's values.yaml is no longer empty, so deploying the DaemonSet on worker nodes now requires adding the label node-role.kubernetes.io/control-plane: "" to those workers.

What you expected to happen: The nodeSelector key in the helm chart's values.yaml should be empty:

nodeSelector: {}

How to reproduce it: Install occm v1.29.0 with the helm chart.

Environment:

Let me know if you need a PR. Regards.

dulek commented 6 months ago

This got introduced in #2346. @wwentland, would you care to comment here?

As a workaround, would just setting nodeSelector: {} in your values.yaml help here? And why do you think having no nodeSelector at all is preferable?

babykart commented 6 months ago

@dulek The workaround doesn't help if I need to add my own nodeSelector. Since there is already a default value, helm will merge the two maps, so in this case I would be forced to put the default control-plane label on my nodes in addition to my own.

For example, if I want to use my.corp.selector/front: "true", the nodeSelector generated by helm would be:

<...>
    spec:
      nodeSelector:
        my.corp.selector/front: "true"
        node-role.kubernetes.io/control-plane: ""
<...>
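For context, that merged output comes from supplying only the custom key and letting Helm coalesce it with the chart defaults: maps in a user's values.yaml are merged into the defaults rather than replacing them. A minimal sketch of such an override file (using the example label from above):

# user-supplied values.yaml override; the chart's default
# node-role.kubernetes.io/control-plane: "" key is still merged in by Helm
nodeSelector:
  my.corp.selector/front: "true"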
dulek commented 5 months ago

@babykart:

I was able to do what you need with this values.yaml:

nodeSelector:
  node-role.kubernetes.io/control-plane: null
  my.corp.selector/front: "true"

This gets my helm install --dry-run to render this:

    spec:
      nodeSelector:
        my.corp.selector/front: "true"

Based on the Helm docs (overriding a default key with null removes it from the merged values).
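As a quick sanity check, the rendered nodeSelector can also be inspected without installing anything. A sketch, assuming the chart comes from the project's Helm repository (the repo URL, release name, and chart name below are assumptions, adjust to your setup):

# render the chart locally with the override file and show the resulting nodeSelector
helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
helm template occm cpo/openstack-cloud-controller-manager -f values.yaml | grep -A3 'nodeSelector:'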

Can I close this issue now?

babykart commented 5 months ago

@dulek thx, I didn't know about this Helm feature. But should we then consider this to be the intended behavior from version 1.29 onwards?

dulek commented 5 months ago

I think so. I believe @wwentland's motivation for the change was to follow what the AWS provider does, and that makes sense to me: https://github.com/kubernetes/cloud-provider-aws/blob/master/charts/aws-cloud-controller-manager/values.yaml#L14-L16. Your use case is still valid, but since we've figured out how to override the default, I think we should keep the current 1.29 behavior.
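For reference, the default the chart ships since v1.29.0 amounts to pinning the DaemonSet to control plane nodes, i.e. roughly this in the chart's values.yaml (a sketch inferred from the AWS chart linked above and from the rendered output earlier in this thread, not copied verbatim from the chart):

# approximate chart default since v1.29.0: schedule only on control plane nodes
nodeSelector:
  node-role.kubernetes.io/control-plane: ""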

babykart commented 5 months ago

I admit I don't fully understand what the AWS-specific implementation has to do with this; I only deploy Kubernetes clusters in on-premise environments. If the helm chart is going to keep this implementation-specific default, wouldn't it make more sense to add a dedicated block of documentation to the README.md?

dulek commented 5 months ago

> I admit I don't fully understand what the AWS-specific implementation has to do with this; I only deploy Kubernetes clusters in on-premise environments.

But in the end, AWS and OpenStack K8s clusters shouldn't be too different. The idea is that cloud-provider-openstack, being part of the control plane, lands on the control plane nodes. This should basically be true for any platform and any cloud provider.

> If the helm chart is going to keep this implementation-specific default, wouldn't it make more sense to add a dedicated block of documentation to the README.md?

Sure thing, docs are always welcome! Can you prepare the PR? I'll be happy to review and approve it.

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 2 weeks ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/2550#issuecomment-2307571207):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.