Open · KunWuLuan opened this issue 1 year ago
It's a good idea. I feel like there can be two ways.
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
Hi @KunWuLuan, are there any ideas on this issue? Currently, we are encountering the problem that after PostFilter checks the entire PodGroup, it cannot set the nominated node in batches.
@shadowdsp Yes, currently the preemption plugin only sets the nominated node for the pod that triggers the preemption. We have a custom plugin that solves the problem, but at the moment it is tailored to our specific situation.
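For context, here is a minimal sketch of why only the triggering pod gets a nomination: the framework's PostFilter extension point runs for one unschedulable pod at a time, and its result carries a single NominatedNodeName. This is not the upstream code; `preemptionSketch` and `selectNodeForPreemption` are hypothetical stand-ins for the real plugin and its victim-selection logic.

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

type preemptionSketch struct{}

// PostFilter is invoked for the single pod that failed scheduling, so the
// PostFilterResult can nominate a node only for that triggering pod; the
// other members of its PodGroup are left without a nomination.
func (pl *preemptionSketch) PostFilter(
	ctx context.Context,
	state *framework.CycleState,
	pod *v1.Pod,
	m framework.NodeToStatusMap,
) (*framework.PostFilterResult, *framework.Status) {
	nodeName := selectNodeForPreemption(pod, m) // hypothetical helper
	if nodeName == "" {
		return nil, framework.NewStatus(framework.Unschedulable)
	}
	return framework.NewPostFilterResultWithNominatedNode(nodeName),
		framework.NewStatus(framework.Success)
}

// selectNodeForPreemption stands in for the real victim-selection logic.
func selectNodeForPreemption(pod *v1.Pod, m framework.NodeToStatusMap) string {
	return "" // placeholder
}
```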
I noticed that when using Coscheduling, preempting for only one pod at a time is not enough, since the whole PodGroup needs room at once. So I am considering implementing a plugin that supports preempting for more than one pod at a time.
I have implemented a prototype of batch preemption. The batch preemption plugin (referred to below as BP) interacts with other plugins. BP provides an interface with two functions for other plugins:
1. query the list of pods ready for preemption, and
2. perform the preemption once the current pod schedules or preempts successfully.

If a pod finds victims on a node, the plugin records the result and does nothing. If some other plugin later decides to perform the preemption, after a successful scheduling or a successful preemption, it can call the interface, and BP will preempt all the victims it has recorded. The prototype already accounts for the cases where a pod in the record is itself preempted by other pods, or where a pod in the record is rescheduled. That part is a bit complicated; if necessary, I will describe it later.
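A rough rendering of the interface described above, with invented names (`BatchPreemptor`, `PendingVictims`, `CommitPreemption`, and the `bp` struct are mine, not from the prototype):

```go
package batchpreemption

import (
	"sync"

	v1 "k8s.io/api/core/v1"
)

// BatchPreemptor is a hypothetical shape for the two-function interface
// BP exposes to other plugins.
type BatchPreemptor interface {
	// PendingVictims lists the recorded victims, keyed by node name,
	// that are ready to be preempted.
	PendingVictims() map[string][]*v1.Pod

	// CommitPreemption evicts every recorded victim. Other plugins call
	// this after the current pod schedules or preempts successfully.
	CommitPreemption() error
}

// bp records victims instead of evicting them immediately: when a pod
// finds victims on a node, the result is stored and nothing else happens.
type bp struct {
	mu      sync.Mutex
	victims map[string][]*v1.Pod // node name -> recorded victims
}

func (p *bp) recordVictims(node string, pods []*v1.Pod) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.victims == nil {
		p.victims = map[string][]*v1.Pod{}
	}
	p.victims[node] = append(p.victims[node], pods...)
}
```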
The point is that I believe the user interaction in the prototype is unreasonable. In the prototype, BP's behavior is hard-coded. I think we may need a CRD so that users can decide when BP performs the preemption. Do you have any ideas?
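One possible shape for such a CRD, purely illustrative; the type and field names (`BatchPreemptionPolicy`, `Trigger`, `MaxPendingSeconds`) are invented, not part of the prototype:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BatchPreemptionPolicy sketches a CRD that would let users declare when
// BP commits the preemptions it has recorded, rather than hard-coding it.
type BatchPreemptionPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec BatchPreemptionPolicySpec `json:"spec"`
}

type BatchPreemptionPolicySpec struct {
	// Trigger names the event that commits recorded victims, e.g.
	// "OnPodScheduled" or "OnPodGroupReady" (values are illustrative).
	Trigger string `json:"trigger"`

	// MaxPendingSeconds bounds how long recorded victims may wait before
	// the record is committed or discarded.
	MaxPendingSeconds *int64 `json:"maxPendingSeconds,omitempty"`
}
```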