kubernetes / cloud-provider-openstack

Use node-role label "worker" for pool members when available #1270

Closed: dr4Ke closed this issue 3 years ago

dr4Ke commented 3 years ago

To my knowledge, this should be done in openstack-cloud-controller-manager and octavia-ingress-controller.

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

In a v1.19 Kubernetes cluster, all nodes are added as load balancer pool members except those carrying the node-role.kubernetes.io/master label.

What you expected to happen:

When the node-role.kubernetes.io/worker label is available, it could be used so that only worker nodes are added to the pool.

How to reproduce it:

I'm using Rancher v2.5 to deploy a v1.19 Kubernetes cluster. I don't know how other installation methods apply node-role labels.

Anything else we need to know?:

I would do something like:
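
A hypothetical sketch of that idea, not the original snippet and not the controller's actual code: prefer nodes carrying the worker label when any exist, otherwise fall back to excluding nodes carrying the master label.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
)

// filterPoolMembers is a hypothetical helper: if at least one node carries
// node-role.kubernetes.io/worker, only those nodes become pool members;
// otherwise the current behavior of excluding node-role.kubernetes.io/master
// nodes is kept.
func filterPoolMembers(nodes []*v1.Node) []*v1.Node {
	const (
		workerLabel = "node-role.kubernetes.io/worker"
		masterLabel = "node-role.kubernetes.io/master"
	)

	var workers []*v1.Node
	for _, node := range nodes {
		if _, ok := node.Labels[workerLabel]; ok {
			workers = append(workers, node)
		}
	}
	// Only filter on the worker label when it is actually in use, so
	// clusters that only label masters are unaffected.
	if len(workers) > 0 {
		return workers
	}

	var members []*v1.Node
	for _, node := range nodes {
		if _, ok := node.Labels[masterLabel]; !ok {
			members = append(members, node)
		}
	}
	return members
}
```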

I guess this way it will not affect anyone relying on the master role.

Environment:

lingxiankong commented 3 years ago

For openstack-cloud-controller-manager, the nodes (load balancer members) are passed in from the official cloud-controller-manager when external load balancers are created. If you want to customize the load balancer members, you can consider using the ServiceNodeExclusion feature gate.

For octavia-ingress-controller, what we could do is copy the way cloud-controller-manager gets the nodes.
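
For reference, the exclusion mentioned above is driven by a well-known node label rather than anything OpenStack-specific. Below is a rough sketch of the check, assuming the node.kubernetes.io/exclude-from-external-load-balancers label that the ServiceNodeExclusion feature gate honors in recent releases (illustrative only, not the upstream service controller code):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
)

// excludedFromLoadBalancers reports whether a node carries the exclusion
// label, in which case cloud-controller-manager leaves it out of the external
// load balancer member list. Illustrative sketch, not upstream code.
func excludedFromLoadBalancers(node *v1.Node) bool {
	// Operators set this label on nodes they want skipped, for example:
	//   kubectl label node <name> node.kubernetes.io/exclude-from-external-load-balancers=""
	_, excluded := node.Labels["node.kubernetes.io/exclude-from-external-load-balancers"]
	return excluded
}
```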

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

jichenjc commented 3 years ago

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/1270#issuecomment-869300151):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.