kubernetes / cloud-provider-openstack

[occm] How to correctly set the cluster-name for the occm only. #1386

Closed: danmikita closed this issue 3 years ago

danmikita commented 3 years ago

Is this a BUG REPORT or FEATURE REQUEST?: /kind feature

What happened: The name of the created LBaaS load balancer is taken from the `--cluster-name` startup flag of the OCCM DaemonSet.

What you expected to happen: I expected the cluster name to come from the kube-controller-manager `--cluster-name=` flag.

How to reproduce it: Change the `--cluster-name=` flag in the kube-controller-manager startup arguments and observe that the OCCM DaemonSet still defaults to `kubernetes` as the cluster name.

Anything else we need to know?: My current workaround is to set `--cluster-name` directly in the OCCM DaemonSet manifest, as sketched below.
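
A minimal sketch of that workaround, loosely following the upstream DaemonSet layout; everything besides the `--cluster-name` arg is illustrative (image tag, cloud-config path, cluster name), and volumes, env, and RBAC are elided:

```yaml
# Sketch of the workaround: pass --cluster-name to OCCM directly.
# Only the relevant fields are shown; image tag and paths are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: openstack-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      containers:
        - name: openstack-cloud-controller-manager
          image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:latest
          args:
            - /bin/openstack-cloud-controller-manager
            - --cloud-config=/etc/config/cloud.conf
            - --cloud-provider=openstack
            - --cluster-name=my-cluster   # defaults to "kubernetes" if omitted
            - --use-service-account-credentials=true
```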

lingxiankong commented 3 years ago

OCCM uses the cluster name configured in kube-controller-manager. Could you provide the detailed steps of your operation?

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community). /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community). /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`.

Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community). /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/1386#issuecomment-867996987):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

vs49688 commented 3 years ago

I've just hit this too. I have three clusters (built with k0s) in the same tenant, and each cluster has a different `--cluster-name=` value on its kube-controller-manager.

I'd expect the load balancers to be created with unique names, but they're all created using the default `kubernetes`, and the three clusters now continuously fight over ownership of the LBs.

lingxiankong commented 3 years ago

The cloud controller manager is a separate binary with its own parameters; you have to specify a different cluster name for each CCM.

The kube-controller-manager is no longer responsible for Services of type LoadBalancer; that logic now lives in the cloud controller manager.
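
Concretely, for the three-clusters-in-one-tenant case above, each cluster's OCCM DaemonSet gets its own `--cluster-name`. A sketch of just the container args (fragments, not complete manifests; the cluster names and cloud-config path are illustrative):

```yaml
# Illustrative OCCM container args per cluster. OCCM embeds the cluster
# name in the names of the load balancers it creates, so distinct values
# stop the clusters from claiming each other's LBs.
---
# Cluster A's DaemonSet
args:
  - /bin/openstack-cloud-controller-manager
  - --cloud-config=/etc/config/cloud.conf
  - --cloud-provider=openstack
  - --cluster-name=cluster-a
---
# Cluster B's DaemonSet
args:
  - /bin/openstack-cloud-controller-manager
  - --cloud-config=/etc/config/cloud.conf
  - --cloud-provider=openstack
  - --cluster-name=cluster-b
```

For context, OCCM-created load balancers are typically named along the lines of `kube_service_<cluster-name>_<namespace>_<service-name>`, which is why leaving every cluster on the default `kubernetes` makes them collide.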