For openstack-cloud-controller-manager, the nodes (load balancer members) are passed in from the official cloud-controller-manager when it creates external load balancers. If you want to customize the load balancer members, you can consider using the ServiceNodeExclusion feature gate.
For octavia-ingress-controller, what we could do is copy the way that cloud-controller-manager gets the nodes.
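For illustration, here is a minimal sketch of that kind of node selection using plain client-go. The function names are made up for this example, and node.kubernetes.io/exclude-from-external-load-balancers is, as far as I know, the label the ServiceNodeExclusion gate acts on; this is not the actual cloud-controller-manager code.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Label honored when the ServiceNodeExclusion feature gate is enabled:
// nodes carrying it are left out of external load balancers.
const excludeFromLBLabel = "node.kubernetes.io/exclude-from-external-load-balancers"

// lbCandidateNodes lists the cluster nodes and drops any node that carries
// the exclusion label, mimicking how cloud-controller-manager builds the set
// of load balancer members. Hypothetical helper, not real controller code.
func lbCandidateNodes(ctx context.Context, client kubernetes.Interface) ([]string, error) {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var members []string
	for _, node := range nodes.Items {
		if _, excluded := node.Labels[excludeFromLBLabel]; excluded {
			continue // excluded from external load balancers
		}
		members = append(members, node.Name)
	}
	return members, nil
}

func main() {
	// In-cluster config is just one way to build a client; a kubeconfig works too.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	members, err := lbCandidateNodes(context.Background(), client)
	if err != nil {
		panic(err)
	}
	fmt.Println("load balancer member candidates:", members)
}
```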
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
To my knowledge, this should be done in openstack-cloud-controller-manager and octavia-ingress-controller.
Is this a BUG REPORT or FEATURE REQUEST?:
What happened:
In a v1.19 Kubernetes cluster, all nodes are considered load balancer members except those which have the node-role.kubernetes.io/master label.
What you expected to happen:
The node-role.kubernetes.io/worker label could be used to add only workers to the pool.
How to reproduce it:
I'm using Rancher v2.5 to deploy a v1.19 Kubernetes cluster. I don't really know how other installation methods use these labels.
Anything else we need to know?:
I would do something like:
- check for node-role.kubernetes.io/worker label existence in the cluster, then use only these nodes
- otherwise, keep the current behavior of excluding nodes with the node-role.kubernetes.io/master label

I guess this way it will not affect anyone relying on the master role. A rough sketch of that selection logic is below.
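A minimal sketch of what I have in mind, assuming plain k8s.io/api types; the function and variable names are made up for illustration, this is not actual controller code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
	workerRoleLabel = "node-role.kubernetes.io/worker"
	masterRoleLabel = "node-role.kubernetes.io/master"
)

// selectMemberNodes sketches the proposal: if any node carries the worker
// role label, use only those nodes as members; otherwise keep the current
// behavior and simply exclude nodes carrying the master role label.
func selectMemberNodes(nodes []v1.Node) []v1.Node {
	var workers []v1.Node
	for _, n := range nodes {
		if _, ok := n.Labels[workerRoleLabel]; ok {
			workers = append(workers, n)
		}
	}
	if len(workers) > 0 {
		return workers // worker role exists in the cluster: members are only the workers
	}
	var members []v1.Node
	for _, n := range nodes {
		if _, ok := n.Labels[masterRoleLabel]; ok {
			continue // current behavior: masters are excluded
		}
		members = append(members, n)
	}
	return members
}

func main() {
	node := func(name string, labels map[string]string) v1.Node {
		return v1.Node{ObjectMeta: metav1.ObjectMeta{Name: name, Labels: labels}}
	}
	cluster := []v1.Node{
		node("master-0", map[string]string{masterRoleLabel: "true"}),
		node("worker-0", map[string]string{workerRoleLabel: "true"}),
		node("worker-1", map[string]string{workerRoleLabel: "true"}),
	}
	for _, n := range selectMemberNodes(cluster) {
		fmt.Println("member:", n.Name) // prints worker-0 and worker-1
	}
}
```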
Environment: