openshift / secondary-scheduler-operator

Red Hat Certified optional operator for secondary schedulers
Apache License 2.0

Does secondary-scheduler-operator support deploying the secondary scheduler pod only on the master node? #86

Closed · yanxiucai closed this issue 8 months ago

yanxiucai commented 1 year ago

Hi experts,

Does secondary-scheduler-operator support deploying the secondary scheduler pod only on the master node? If so, could you please provide a sample CR like https://github.com/openshift/secondary-scheduler-operator/tree/master#sample-cr ?
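
To make the ask concrete, here is a purely hypothetical sketch of the kind of CR I have in mind; the placement fields below are made up and do not exist in the current SecondaryScheduler API, and the base fields are only approximated from the linked Sample CR:

```yaml
# Hypothetical sketch only: nodeSelector/tolerations are NOT part of the
# current SecondaryScheduler API. Base fields approximate the linked Sample CR.
apiVersion: operator.openshift.io/v1
kind: SecondaryScheduler
metadata:
  name: cluster
  namespace: openshift-secondary-scheduler-operator
spec:
  schedulerConfig: secondary-scheduler-config
  schedulerImage: "k8s.gcr.io/scheduler-plugins/kube-scheduler:v0.23.10"
  nodeSelector:                                # made-up field
    node-role.kubernetes.io/master: ""
  tolerations:                                 # made-up field
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```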

Thanks! Yanxiu Cai

ingvagabund commented 1 year ago

Based on https://github.com/openshift/secondary-scheduler-operator/blob/master/bindata/assets/secondary-scheduler/deployment.yaml there's no restriction on where a secondary scheduler runs. Also, there's currently no way to specify any additional scheduling constraints.
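
For comparison, pinning a plain Kubernetes Deployment to control-plane nodes takes a nodeSelector plus a toleration for the master taint, roughly like this sketch (names and image are placeholders, not the operator's actual asset):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secondary-scheduler          # placeholder name
  namespace: openshift-secondary-scheduler-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secondary-scheduler
  template:
    metadata:
      labels:
        app: secondary-scheduler
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # schedule only onto master nodes
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule                 # tolerate the master taint
      containers:
        - name: kube-scheduler
          image: k8s.gcr.io/scheduler-plugins/kube-scheduler:v0.23.10  # placeholder image
```

The operator would have to render fields like these into its managed deployment for such a placement to stick.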

yanxiucai commented 1 year ago

Thanks @ingvagabund! Actually, I believe it is necessary to add this support: running the secondary scheduler pod on a master node would significantly improve scheduling performance.

ingvagabund commented 1 year ago

There are multiple aspects to consider:

- Unless there's a strong customer case, we do not provide new features by default.
- The current secondary scheduler operator makes it easier to deploy a secondary scheduler (in some default cases) without manually following the upstream documentation: https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/ (see the sketch after this list).
- We currently do not provide the same feature set as the default scheduler does, since the nature of custom schedulers is unknown in advance.
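
To illustrate that upstream flow: a workload opts in to a secondary scheduler purely by name, e.g. (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                      # placeholder
spec:
  schedulerName: secondary-scheduler  # must match the secondary scheduler's configured name
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

Where the scheduler pod itself lands is a separate concern from which pods it schedules.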

ingvagabund commented 1 year ago

@yanxiucai do you have a specific use case in mind where running the secondary scheduler on the master nodes would improve your performance? Are there any additional features, besides scheduling the scheduler on a master node, that would help your use case?

yanxiucai commented 1 year ago

@ingvagabund Yes, we ran performance tests with the secondary scheduler on a master node and on a worker node.
When the secondary scheduler runs on a worker node, the overall time to schedule a Pod with a PersistentVolumeClaim is 40% longer than when it runs on a master node. Please note that everything was kept the same during the tests except for the node the secondary scheduler runs on.

openshift-bot commented 1 year ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting `/lifecycle frozen`.

If this issue is safe to close now, please do so with `/close`.

/lifecycle stale

openshift-bot commented 10 months ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting `/lifecycle frozen`.

If this issue is safe to close now, please do so with `/close`.

/lifecycle stale

openshift-bot commented 9 months ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting `/lifecycle frozen`.

If this issue is safe to close now, please do so with `/close`.

/lifecycle rotten
/remove-lifecycle stale

openshift-bot commented 8 months ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting `/reopen`. Mark the issue as fresh by commenting `/remove-lifecycle rotten`. Exclude this issue from closing again by commenting `/lifecycle frozen`.

/close

openshift-ci[bot] commented 8 months ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/secondary-scheduler-operator/issues/86#issuecomment-1874701850):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`.
> Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
> Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.