marquiz opened this issue 3 years ago
Thanks for opening this @marquiz !! :smile:
To ensure that the sig is aware of and that communication has begun regarding this KEP, please add the mandatory Discussion Link to the Description above. For ref it is a "link to SIG mailing list thread, meeting, or recording where the Enhancement was discussed before KEP creation"
@kikisdeliveryservice the topic was discussed on SIG-Node on 2021-10-19. Meeting minutes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg
/sig node
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Mark this issue as rotten with `/lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/milestone v1.25
Hello @marquiz :wave:, 1.25 Enhancements team here!
Just checking in as we approach enhancements freeze at 18:00 PST on Thursday, June 16, 2022.
For note, this enhancement is targeting stage `alpha` for the 1.25 release.
Here's where this enhancement currently stands:

- KEP status is marked as `implementable`

It looks like for this one, we would need to:

- Update the Graduation section in the KEP with proper metadata
- Add a Design Details section in the KEP and move the Test plan and Graduation sub-sections under this section
- Update the `kep.yaml` file reflecting the latest milestone and stage information. Here is an example for reference.
- Add a production readiness review file stating the KEP-issue number, the stage you are planning for this release cycle (in this case `alpha`) and the approver. Here is an example for reference.

Open PR https://github.com/kubernetes/enhancements/pull/3004 addressing ^

For note, the status of this enhancement is marked as `at risk`. Please keep the issue description up-to-date with appropriate stages as well. Thank you!
/stage alpha
Hello @marquiz :wave:, just a quick check-in again.
The enhancements freeze for 1.25 starts this Thursday, June 16, 2022 at 18:00 PT.
Please try to get the above-mentioned action items done before enhancements freeze :)
Note: the current status of the enhancement is still marked `at-risk`.
Thanks @Atharva-Shinde for the help!
I have now made the following updates:
We'll review this in SIG-Node tomorrow so more updates after that.
Hey @marquiz :wave: Good news! Enhancements Freeze is now extended to next week, until Thursday, June 23, 2022! So we now have one more week to submit the KEP :)
Hello @marquiz :wave:, just a quick check-in again as we approach the 1.25 enhancements freeze.
Please plan to get the open PR https://github.com/kubernetes/enhancements/pull/3004 merged before enhancements freeze on Thursday, June 23, 2022 at 18:00 PT, which is just over 3 days away from now.
For note, the current status of the enhancement is `at-risk`. Thank you!
Hello, 1.25 Enhancements Lead here :wave:. With Enhancements Freeze now in effect, this enhancement has not met the criteria for the freeze and has been removed from the milestone.
As a reminder, the criteria for enhancements freeze are:

- KEP status is marked as `implementable`

Feel free to file an exception to add this back to the release. If you plan to do so, please file this as early as possible.
Thanks!

/milestone clear
Hi @Atharva-Shinde @Priyankasaggu11929, I've retitled the PR (#3004) in order to reduce confusion and misconceptions w.r.t. some other KEPs and earlier work. Is it ok to retitle this issue as well?
Hello @marquiz, retitling the issue is perfectly fine. Thank you! :)
/retitle QoS-class resources
I have a query: how is this different from the built-in `cpu` resource? Linux blockio lets you configure controls such as `blkio.throttle.read_bps_device` and similarly, for CPU you can define requests and limits.
If the blockio case is like the existing `cpu` approach, then I'm wary of permanently complicating the Kubernetes Pod API to support a particular, vendor-specific technology.
If we want to let different Pods share resources, we should aim to make a much more generic mechanism. For example, allow two different Pods in the same namespace to aggregate their `cpu` limit, agreeing between those two Pods to co-operate if they are scheduled onto the same node. Once we can share `cpu` limits, we can look at extending that sharing to other kinds of resource such as an extended resource.
At the very least, I'd like to see the sort of thing I'm proposing clearly called out as an alternative in the KEP, before we merge it.
Hi @sftim, thanks for the review!
> I have a query: how is this different from the built-in `cpu` resource? Linux blockio lets you configure controls such as `blkio.throttle.read_bps_device` and similarly, for CPU you can define requests and limits.
Blkio is just one possible usage for this. At least one fundamental difference between blkio and cpu is that the "amount of blkio" is not (ac)countable in any meaningful way. For cpu we know how much there is, and there are meaningful controls to allocate a portion of that. For blkio it's more about throttling: there is potentially a multitude of devices, it is hard to predict which ones are actually used by a pod, and the different storage devices potentially all have different characteristics (parameters); think about SSDs vs. rotational drives, etc.
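To illustrate the "not countable" point, here is a sketch of the cgroup v2 `io.max` interface file (the device number and limit values below are made up for illustration). Each line is a per-device throttle, not a quantity that could be summed up or carved out of a node-wide pool the way `cpu` is:

```
# /sys/fs/cgroup/<group>/io.max (illustrative values; device 8:16 assumed)
# rbps/wbps = bytes per second, riops/wiops = IOs per second, "max" = unlimited
8:16 rbps=2097152 wbps=max riops=120 wiops=max
```

A pod's class would map to a set of such throttling parameters rather than to an allocatable amount.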
> If the blockio case is like the existing `cpu` approach, then I'm wary of permanently complicating the Kubernetes Pod API to support a particular, vendor specific technology.
There isn't anything vendor specific in this proposal. One example is an Intel technology, but even that is based on a generic interface in the Linux kernel (resctrlfs) that other vendors' corresponding technologies also use.
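For reference, this is roughly what a class looks like on the kernel resctrl side (a sketch; the group name and the cache-bitmask/bandwidth values are made up for illustration):

```
# /sys/fs/resctrl/gold/schemata (illustrative)
# L3 cache allocation bitmask and memory-bandwidth percentage per cache ID
L3:0=0fff;1=0fff
MB:0=80;1=80
```

The same schemata format is used regardless of which vendor's hardware implements the underlying controls.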
> If we want to let different Pods share resources, we should aim to make a much more generic mechanism. For example, allow two different Pods in the same namespace to aggregate their `cpu` limit, agreeing between those two Pods to co-operate if they are scheduled onto the same node. Once we can share `cpu` limits, we can look at extending that sharing to other kinds of resource such as an extended resource.
I wouldn't identify this as a resource sharing mechanism between pods. Yes, in some cases they might end up using the same resource, but generally that's not the case. In the case of blockio, the class would just specify the throttling/weight parameters for storage devices, but it doesn't state anything about what particular devices are used by a pod. Similarly for RDT, the class might determine what portion of cache the pod can use or how much memory bandwidth it can use, but it doesn't say anything about which CPUs the pod is running on (i.e. which cache IDs it is using). In these cases, two pods belonging to the same class generally means that they have the "same level of throttling".
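As a purely hypothetical sketch of the user-facing side (this is not the actual KEP API; the `classes` field and the class names below are made up for illustration), assigning QoS classes per container might look something like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause
    resources:
      # Hypothetical field: named QoS classes, not amounts. The node maps
      # each class to RDT/blockio parameters configured out-of-band.
      classes:
        rdt: gold
        blockio: throttled
```

Note that nothing here counts or allocates a quantity; the class name only selects a pre-configured level of throttling.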
> At the very least, I'd like to see the sort of thing I'm proposing clearly called out as an alternative in the KEP, before we merge it.
At least for now I think they are two different things.
/milestone v1.26
/label lead-opted-in

(I'm doing this on behalf of @ruiwen-zhao / SIG-node)
To clarify why I think blkio is vendor-specific: only Linux nodes have this resource. Windows nodes have CPU and memory but they don't have blkio or a direct equivalent.
I'd like the KEP to make the difference clear to a reader who knows Kubernetes but isn't particularly familiar with any of the QoS mechanisms that we propose to integrate with.
Hey @marquiz :wave:, 1.26 Enhancements team here!
Just checking in as we approach Enhancements Freeze at 18:00 PDT on Thursday, 6th October 2022.
This enhancement is targeting stage `alpha` for 1.26.
Here's where this enhancement currently stands:

- KEP status must be marked as `implementable`

For this KEP, we would need to:

- Update the KEP status from `provisional` to `implementable` and add reviewers/approvers

The status of this enhancement is marked as `at risk`. Please keep the issue description up-to-date with appropriate stages as well.
Thank you :)
Hello @marquiz :wave:, just a quick check-in again as we approach the 1.26 Enhancements freeze.
Please plan to get the action items mentioned in my comment above done before Enhancements freeze at 18:00 PDT on Thursday, 6th October 2022, i.e. tomorrow.
For note, the current status of the enhancement is marked `at-risk` :)
Hello :wave:, 1.26 Enhancements Lead here.
Unfortunately, this enhancement did not meet requirements for enhancements freeze.
If you still wish to progress this enhancement in v1.26, please file an exception request. Thanks!
/milestone clear
/label tracked/no
/remove-label tracked/yes
/remove-label lead-opted-in
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale

The KEP is actively reviewed, and part of the 1.28 SIG-Node plan.
/milestone v1.28
/label lead-opted-in
Hi @marquiz :wave:, 1.28 Enhancements team here!
Just checking in as we approach enhancements freeze on 01:00 UTC, Friday, 16th June 2023.
This enhancement is targeting stage `alpha` for 1.28 (correct me if otherwise).
Here's where this enhancement currently stands:
It looks like https://github.com/kubernetes/enhancements/pull/3004 will address most of these issues!
The status of this enhancement is marked as `at risk`. Please keep the issue description up-to-date with appropriate stages as well. Thank you!
Hi @marquiz, just reaching out again before the enhancements freeze on 01:00 UTC, Friday, 16th June 2023. This enhancement is currently `at risk`. It looks like https://github.com/kubernetes/enhancements/pull/3004 will address most of the requirements. Let me know if I missed anything. Thanks!
Hello :wave:, 1.28 Enhancements Lead here. Unfortunately, this enhancement did not meet requirements for v1.28 enhancements freeze. Feel free to file an exception to add this back to the release tracking process. Thanks!
/milestone clear
Hey @marquiz,
1.28 Docs Shadow here.
Does the enhancement work planned for 1.28 require any new docs or modifications to existing docs?
If so, please follow the steps here to open a PR against the dev-1.28 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday, 20th July 2023.
Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.
Thank you!
@AdminTurnedDevOps This was removed from the 1.28 release, so I have marked it as "removed from release" and there is no need for a 1.28 docs PR.
Saw this was removed from milestone, will update the enhancement tracking!
/milestone clear
/milestone v1.29
Hello @marquiz :wave:, 1.29 Enhancements team here!
Just checking in as we approach enhancements freeze on 01:00 UTC, Friday, 6th October, 2023.
This enhancement is targeting stage `alpha` for 1.29 (correct me if otherwise).
Here's where this enhancement currently stands:

- KEP status is marked as `implementable` for `latest-milestone: 1.29`. KEPs targeting `stable` will need to be marked as `implemented` after code PRs are merged and the feature gates are removed.

For this KEP, it looks like https://github.com/kubernetes/enhancements/pull/3004 will address most of these issues. Please update the `latest-milestone` to `v1.29` and the alpha milestone to `1.29` in this PR.

The status of this enhancement is marked as `at risk for enhancement freeze`. Please keep the issue description up-to-date with appropriate stages as well. Thank you!
Hi @marquiz, just checking in once more as we approach the 1.29 enhancement freeze deadline this week on 01:00 UTC, Friday, 6th October, 2023. The status of this enhancement is marked as `at risk for enhancement freeze`.
It looks like https://github.com/kubernetes/enhancements/pull/3004 will address most of the requirements. Let me know if I missed anything. Thanks!
Hello :wave:, 1.29 Enhancements Lead here. Unfortunately, this enhancement did not meet requirements for v1.29 enhancements freeze. Feel free to file an exception to add this back to the release tracking process. Thanks!
/milestone clear
/remove-label lead-opted-in
/stage alpha
/milestone v1.30
Hello @marquiz , 1.30 Enhancements team here! Is this enhancement targeting 1.30? If it is, can you follow the instructions here to opt in the enhancement and make sure the lead-opted-in label is set so it can get added to the tracking board? Thanks!
/milestone clear
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
Enhancement Description

- KEP (`k/enhancements`) update PR(s): https://github.com/kubernetes/enhancements/pull/3004
- Code (`k/k`) update PR(s):
- Docs (`k/website`) update PR(s):

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.