kerthcet opened this issue 1 year ago
/remove-kind bug
/kind feature
cc @Huang-Wei
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
@kerthcet @Huang-Wei Is there no plan for this improvement?
I think it's a valid feature. We may need a volunteer to pick it up.
/remove-lifecycle rotten
I think @kerthcet has already taken this, since he submitted the proposal here: https://github.com/kubernetes-sigs/scheduler-plugins/pull/661
Ah, I missed that (for so long). Will review by this weekend.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle frozen
Area
Other components
No response
What happened?
Currently, we use queuedPodInfo's InitialAttemptTimestamp in ordering. It is a static value, which can lead to starvation: earlier-submitted podGroups will always block the queue if they are unschedulable (the backoff queue can mitigate this somewhat, but it does not solve the problem).
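For context, here is a minimal, self-contained sketch of the current ordering behavior (the types are simplified stand-ins, not the scheduler framework's actual structs):

```go
package sketch

import "time"

// QueuedPodInfo is a simplified stand-in for the scheduler framework's
// queued-pod info; only the fields relevant to ordering are modeled here.
type QueuedPodInfo struct {
	Priority                int32
	InitialAttemptTimestamp time.Time // set on the first attempt, never refreshed
}

// less roughly mirrors the current behavior: higher priority wins, and ties
// fall back to the static initial-attempt timestamp. Because that timestamp
// never changes, an old but unschedulable podGroup keeps sorting to the
// front of the queue and can starve newer groups.
func less(a, b QueuedPodInfo) bool {
	if a.Priority != b.Priority {
		return a.Priority > b.Priority
	}
	return a.InitialAttemptTimestamp.Before(b.InitialAttemptTimestamp)
}
```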
What I want to do is use the podGroup's queueing timestamp as the ordering criterion, refreshed with each new scheduling cycle. The general idea is to maintain a podGroup cache in the coscheduling plugin; I'll write an updated KEP to detail the design. A rough sketch is below.
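A minimal sketch of that idea, with hypothetical names (the actual design will be in the KEP): a podGroup cache holds a queueing timestamp that is bumped whenever the group enters a new scheduling cycle, and ordering uses the refreshed value instead of the static one.

```go
package sketch

import (
	"sync"
	"time"
)

// groupCache is a hypothetical podGroup cache for the coscheduling plugin;
// names and layout are illustrative, not taken from the proposal.
type groupCache struct {
	mu       sync.Mutex
	queuedAt map[string]time.Time // podGroup key -> last queueing time
}

func newGroupCache() *groupCache {
	return &groupCache{queuedAt: make(map[string]time.Time)}
}

// Refresh bumps the group's queueing timestamp at the start of a new
// scheduling cycle, so a long-unschedulable group no longer holds the
// oldest timestamp forever.
func (c *groupCache) Refresh(pg string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.queuedAt[pg] = time.Now()
}

// QueuedAt returns the refreshed timestamp that would replace
// InitialAttemptTimestamp as the tie-breaking ordering criterion.
func (c *groupCache) QueuedAt(pg string) time.Time {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.queuedAt[pg]
}
```

Ordering on a refreshed timestamp effectively rotates an unschedulable group to the back of the queue after each failed cycle, which removes the head-of-line blocking described above.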
What did you expect to happen?
How can we reproduce it (as minimally and precisely as possible)?
No response
Anything else we need to know?
Related issues: https://github.com/kubernetes-sigs/scheduler-plugins/issues/110 https://github.com/kubernetes-sigs/scheduler-plugins/issues/429
Kubernetes version
None
Scheduler Plugins version
None