kbudakovskiy closed this issue 5 years ago
It's because the --leader-elect flag has been deprecated: it is only honored when --config is not present. In other words, if you use --config, the scheduler only respects the leaderElect value from the Config object. And I checked the code: if no leaderElection fields are specified, leaderElect defaults to true. This explains why you hit this issue.
So for now, you can work around it by updating scheduler-extender-config.yaml to:
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/run/kubernetes/scheduler.kubeconfig"
leaderElection:
  leaderElect: false
algorithmSource:
  policy:
    file:
      path: "/root/config/scheduler-extender-policy.json"
      # YAML not supported yet
      # https://github.com/kubernetes/kubernetes/issues/75852
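In case it is not obvious from the comment above: the policy file that path points at has to be JSON for now. A minimal sketch with a single extender might look like the following; the urlPrefix and verbs here are placeholders for your own extender's endpoints, so adjust them to your setup:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://localhost:8888",
      "filterVerb": "filter",
      "prioritizeVerb": "prioritize",
      "weight": 1,
      "enableHttps": false
    }
  ]
}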
Give it a try and let me know if it works.
Many thanks, now that problem is gone, but the pods are still Pending. It looks like the custom scheduler doesn't see the pods in its scheduling queue.
I added
SchedulerName string `json:"schedulerName"`
And everything works well.
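For anyone else who lands here, a minimal sketch of what that amounts to, assuming the custom scheduler is called my-custom-scheduler (an example name, not from this thread): in the v1alpha1 config the field is the top-level schedulerName, and the same name has to appear in the spec of the pods the custom scheduler should pick up.

# scheduler-extender-config.yaml: give the scheduler its own name (other fields as above)
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: my-custom-scheduler   # example name
leaderElection:
  leaderElect: false

# pod spec of the workload that should be handled by the custom scheduler
spec:
  schedulerName: my-custom-scheduler   # must match the name above
  containers:
  - name: app
    image: nginx   # example container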
Thank you
Hi, thanks for your work! I'm trying to create a custom scheduler based on your example: https://developer.ibm.com/articles/creating-a-custom-kube-scheduler/ I've got a main (default) scheduler and I want an extra custom scheduler for particular pods. I use a prebuilt scheduler container with a kubeconfig for authorization:
This is the policy:
And:
I also modified the REST API and added some markers, for example:
This is my deployment:
But when it is deployed, the pods stay Pending:
And the REST log is empty.
If I remove
- --config=/root/scheduler-extender-config.yaml
then everything works well, but with the default scheduling policy. Why, as soon as I start using a policy via scheduler-extender-config.yaml, does the custom scheduler want to become a leader and stop working?