xiaoxubeii opened this issue 3 years ago
/sig node
/assign @xiaoxubeii
Hi! This sounds really interesting and I'd love to help out, please let me know how I can help out with this!
/stage alpha
/milestone v1.22
Hi @xiaoxubeii 👋 1.22 Enhancements Shadow here.
This enhancement is in good shape, some minor change requests in light of Enhancement Freeze on Thursday May 13th:
Thanks!
Hi @xiaoxubeii 👋 1.22 Enhancements shadow here.
To help SIGs be aware of their workload, I just wanted to check whether SIG Node will need to do anything for this enhancement and, if so, whether they are OK with it. Thanks!
@gracenng Hey Grace, I have updated the necessary content as follows:
SIG Node approvers @derekwaynecarr @mrunalp are reviewing it. I am waiting for lgtm/approve and for it to be merged as implementable.
@gracenng SIG Node approvers (Derek and Mrunal) have given lgtm/approve. There are a few PRR review requests; I have updated accordingly and am waiting for the next review round. I think we can make the freeze day :)
Hi @xiaoxubeii, looks like your PRR was approved and the requested changes are all here. I have updated the status of this enhancement to tracked.
Thank you for keeping me updated!
Thanks for your help. Also, many thanks for the valuable review suggestions and help from @derekwaynecarr @mrunalp @bobbypage @giuseppe @odinuge @johnbelamaric @ehashman. Really appreciate it :)
Hello @xiaoxubeii 👋 , 1.22 Docs Shadow here.
This enhancement is marked as Needs Docs for 1.22 release.
Please follow the steps detailed in the documentation to open a PR against the dev-1.22 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Fri July 9, 11:59 PM PDT.
Also, take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.
Thank you!
@ritpanjw OK, thanks for the reminder.
Hi @xiaoxubeii 🌞 1.22 enhancements shadow here.
In light of Code Freeze on July 8th, this enhancement's current status is tracked, and we're currently tracking kubernetes/kubernetes#102578 and kubernetes/kubernetes/pull/102970.
Please let me know if there are any other code PRs associated with this enhancement.
Thanks!
@gracenng It is all here, thanks.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This KEP has been released.
This feature is alpha in v1.22. Any plan to make it beta?
BTW, I have some questions about the KEP.
- About the memoryThrottlingFactor factor, this is tricky. The kernel ability is not fully exposed to the user API (I mean `memoryThrottlingFactor` is a node-level setting; the app owner cannot set it at pod level). `memory.high` is like a soft limit and `memory.max` is a hard limit. Currently, there is only `resource.limit`; there is no `resource.softlimit`. See @derekwaynecarr's comment on it: https://github.com/kubernetes/enhancements/pull/2571#discussion_r631195702
- Meanwhile, OOM is controlled by the kernel. If kubelet handles a pod that uses memory exceeding the limit, it can easily add an OOMKilled event; for a kernel kill, kubelet cannot get that directly. If `memory.high == resource.limit`, kubelet could kill the pod instead of the kernel OOM killer.
- Is there a performance issue if the throttle factor is too small? For example, some pods like Java will always use ~85% of memory and `memory.high` will take effect continuously. The processes of the cgroup are throttled and put under heavy reclaim pressure. Will this be a risk of this feature?
- `memory.low` vs `memory.min`: the kernel ability is not fully exposed to the user API here as well. `memory.low` is like a soft limit and `memory.request` is a hard request. There is a comment from @mrunalp: https://github.com/kubernetes/enhancements/pull/2571#pullrequestreview-658390067. Currently, there is only `resource.request`; there is no `resource.softrequest`.

This should not have been closed, as the feature is merely alpha. It either needs to continue graduating or would be deprecated. There is more work to do in either case.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
What are the barriers to moving this forward to, for example, beta and off by default?
Should we freeze this issue?
(I'd assume that all alpha and beta features that ship in Kubernetes should have their KEP issues frozen, so that we continue to track the associated work.)
> This feature is alpha in v1.22. Any plan to make it beta?
> BTW, I have some questions about the KEP.
>
> - About the memoryThrottlingFactor factor, this is tricky. The kernel ability is not fully exposed to the user API (I mean `memoryThrottlingFactor` is a node-level setting; the app owner cannot set it at pod level). `memory.high` is like a soft limit and `memory.max` is a hard limit. Currently, there is only `resource.limit`; there is no `resource.softlimit`. KEP-2570: Support memory qos with cgroups v2 #2571 (comment) has @derekwaynecarr's comment on it.
Yes. `memory.high` is more like a soft limit on memory, so in the alpha version we simply set `(limits.memory or node allocatable memory) * memoryThrottlingFactor`, where `memoryThrottlingFactor` is 0.8 by default at this moment. In other words, we made `(limits.memory or node allocatable memory)` act as the `resource.softlimit`.
> Meanwhile, OOM is controlled by the kernel. If kubelet handles a pod that uses memory exceeding the limit, it can easily add an OOMKilled event; for a kernel kill, kubelet cannot get that directly. If `memory.high == resource.limit`, kubelet could kill the pod instead of the kernel OOM killer.
kubelet will never kill the pod in this case; the kernel does. The kernel will kill the container whose memory usage is over `limits.memory`, which is set to `memory.max` in cgroup v2. There are no additional kubelet implementations in this KEP other than setting the correct values in cgroup v2.
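To make that mapping concrete, here is a minimal, hypothetical Go sketch (not the actual kubelet code; the names `MemorySettings`, `alphaSettings`, and `defaultThrottlingFactor` are invented for illustration) of how the alpha design described above would derive the cgroup v2 memory values from a container's requests and limits:

```go
// Minimal illustrative sketch (not the actual kubelet code) of the alpha mapping
// described above. Names like MemorySettings and defaultThrottlingFactor are
// invented for this example.
package main

import "fmt"

const defaultThrottlingFactor = 0.8 // alpha default mentioned above

// MemorySettings mirrors the cgroup v2 memory interface files being set.
type MemorySettings struct {
	Min  int64 // memory.min  <- requests.memory (memory protection)
	High int64 // memory.high <- the soft limit / throttling threshold
	Max  int64 // memory.max  <- limits.memory (hard limit; the kernel OOM-kills above it)
}

// alphaSettings derives the values for a container: memory.high is
// (limits.memory or node allocatable memory) * memoryThrottlingFactor.
func alphaSettings(requestBytes, limitBytes, nodeAllocatableBytes int64) MemorySettings {
	s := MemorySettings{Min: requestBytes, Max: limitBytes}
	base := limitBytes
	if base == 0 { // no memory limit set, fall back to node allocatable memory
		base = nodeAllocatableBytes
	}
	s.High = int64(float64(base) * defaultThrottlingFactor)
	return s
}

func main() {
	const mi = int64(1024 * 1024)
	s := alphaSettings(500*mi, 1000*mi, 8000*mi)
	fmt.Printf("min=%dMi high=%dMi max=%dMi\n", s.Min/mi, s.High/mi, s.Max/mi)
	// Output: min=500Mi high=800Mi max=1000Mi
}
```

With the default 0.8 factor, a container requesting 500Mi with a 1000Mi limit would get memory.min=500Mi, memory.high=800Mi, memory.max=1000Mi; as noted above, the kernel, not kubelet, enforces the kill at memory.max.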
> Is there a performance issue if the throttle factor is too small? For example, some pods like Java will always use ~85% of memory and `memory.high` will take effect continuously. The processes of the cgroup are throttled and put under heavy reclaim pressure. Will this be a risk of this feature?

Yes, maybe. The default throttle factor of 0.8 here is an experimental value. We will expose it as a kubelet startup parameter when appropriate, maybe at the beta stage.
> `memory.low` vs `memory.min`: the kernel ability is not fully exposed to the user API here as well. `memory.low` is like a soft limit and `memory.request` is a hard request. There is a comment from @mrunalp in KEP-2570: Support memory qos with cgroups v2 #2571 (review). Currently, there is only `resource.request`; there is no `resource.softrequest`.
Actually, `memory.low` is not yet used in the KEP.
Just an idea on the factor.
The current proposal is `memory.high = memory.limit * memoryThrottlingFactor` (a kubelet-level setting).
However, this is a problem when the requested memory is close to the memory limit: throttling kicks in too easily. And when users want to set the factor smaller, it may result in no throttling at all, because `memory.high < memory.request` is not accepted.
A new proposal would be to make the factor based on the requested memory:
`memory.high = memory.request + (memory.limit - memory.request) * memoryThrottlingFactor`
With the first example above (limit 1000Mi):

| Limit 1000Mi \ request \ factor | current design: memory.high | my proposal: memory.high |
|---|---|---|
| request 0Mi, factor 0.6 | 600Mi | 600Mi |
| request 500Mi, factor 0.6 | 600Mi | 800Mi |
| request 800Mi, factor 0.6 | max | 920Mi |
| request 1Gi, factor 0.6 | max | max |
| request 0Mi, factor 0.8 | 800Mi | 800Mi |
| request 500Mi, factor 0.8 | 800Mi | 900Mi |
| request 800Mi, factor 0.8 | max | 960Mi |
| request 500Mi, factor 0.4 | max | 700Mi |
| calculation method for memory.high | memory.limit * memoryThrottlingFactor | memory.request + (memory.limit - memory.request) * memoryThrottlingFactor |

Is this a better factor design? I'm not sure if there are other scenarios for throttling the memory.
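To make the comparison concrete, here is a small illustrative Go sketch (not kubelet code; `currentHigh`, `proposedHigh`, and `show` are made-up helper names, and treating `memory.high <= memory.request` as "max" is an assumption taken from the note above that high < request is not accepted) that reproduces a few rows of the table under both formulas:

```go
// Illustrative comparison of the two memory.high formulas discussed above.
// A return value of 0 means "max" (memory.high left unset).
package main

import "fmt"

// currentHigh: memory.high = memory.limit * factor, falling back to max
// when the result would not exceed the request.
func currentHigh(requestMi, limitMi, factor float64) float64 {
	high := limitMi * factor
	if high <= requestMi {
		return 0
	}
	return high
}

// proposedHigh: memory.high = memory.request + (memory.limit - memory.request) * factor.
func proposedHigh(requestMi, limitMi, factor float64) float64 {
	if requestMi >= limitMi {
		return 0
	}
	return requestMi + (limitMi-requestMi)*factor
}

func show(v float64) string {
	if v == 0 {
		return "max"
	}
	return fmt.Sprintf("%.0fMi", v)
}

func main() {
	const limit = 1000.0 // Mi, as in the table above
	for _, c := range []struct{ request, factor float64 }{
		{500, 0.6}, {800, 0.6}, {500, 0.4},
	} {
		fmt.Printf("request %.0fMi factor %.1f: current=%s proposed=%s\n",
			c.request, c.factor,
			show(currentHigh(c.request, limit, c.factor)),
			show(proposedHigh(c.request, limit, c.factor)))
	}
	// request 500Mi factor 0.6: current=600Mi proposed=800Mi
	// request 800Mi factor 0.6: current=max proposed=920Mi
	// request 500Mi factor 0.4: current=max proposed=700Mi
}
```

The proposed formula keeps throttling meaningful even when the request is close to the limit, which is what the table is meant to show.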
> Current proposal is `memory.high = memory.limit * kubelet memoryThrottlingFactor`

How do we pick the magic number memoryThrottlingFactor? I don't think a single value would work well for all workloads.
I think it is better if these settings are exposed to the user.
IMHO, `memory.high` should be mapped to something like `softrequest`.
I shared a bit more context about this feature in my and @mrunalp's KubeCon talk regarding cgroup v2.
One of the interesting pieces of feedback I received about the feature is that some folks may have applications which are quite latency/performance sensitive and are always using very close to the memory limit. In those cases, customers would prefer not to have a soft memory limit set, so their application does not get impacted by kernel memory reclaim when hitting `memory.high`. So one thing to consider is whether we need some way to opt out on a per-pod basis, or some similar mechanism.
> One of the interesting pieces of feedback I received about the feature is that some folks may have applications which are quite latency/performance sensitive and are always using very close to the memory limit. In those cases, customers would prefer not to have a soft memory limit set, so their application does not get impacted by kernel memory reclaim when hitting `memory.high`. So one thing to consider is whether we need some way to opt out on a per-pod basis, or some similar mechanism.

Yes, especially for some Java applications, they may not need it, and it is necessary to have a per-pod option for this.
BTW, as you mentioned, `memory.high` may have side effects on some applications, so the default memoryThrottlingFactor of 0.8 may not be a good default value. I would like to either set the default to 0, which means no `memory.high` is set by default, or set it to a higher value like 0.99 or 0.9 to avoid performance issues.
> One of the interesting pieces of feedback I received about the feature is that some folks may have applications which are quite latency/performance sensitive and are always using very close to the memory limit. In those cases, customers would prefer not to have a soft memory limit set, so their application does not get impacted by kernel memory reclaim when hitting `memory.high`. So one thing to consider is whether we need some way to opt out on a per-pod basis, or some similar mechanism.
>
> Yes, especially for some Java applications, they may not need it, and it is necessary to have a per-pod option for this.
> BTW, as you mentioned, `memory.high` may have side effects on some applications, so the default memoryThrottlingFactor of 0.8 may not be a good default value. I would like to either set the default to 0, which means no `memory.high` is set by default, or set it to a higher value like 0.99 or 0.9 to avoid performance issues.

Like Bobby said, we need to find a way to opt out of it at the per-pod level, which means users can enable or disable it by setting something like a soft request in the pod.
I tried to summarize things in https://docs.google.com/document/d/1p9awiXhu5f4mWsOqNpCX1W-bLLlICiABKU55XpeOgoA/edit?usp=sharing for discussions.
We discussed it at SIG Node meeting on Dec 6th: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#bookmark=id.ph8vc9vhlcxy with the summary: “Collect feedback in the doc and then open a KEP update with alternatives”, discussion with timestamp: https://youtu.be/t3PcHj62f0c?t=686
This is now discussed in https://docs.google.com/document/d/1r-e_jWL5EllBgxtaNb9qI2eLOKf9wSuuuglTySWmfmQ/edit#heading=h.6nyswlzgrqq0.
I opened a draft PR https://github.com/kubernetes/kubernetes/pull/115371 accordingly.
/label lead-opted-in
/stage beta
/milestone v1.27
Hello @xiaoxubeii 👋, v1.27 Enhancements team here.
Just checking in as we approach enhancements freeze on 18:00 PDT Thursday 9th February 2023.
This enhancement is targeting stage beta for 1.27 (please correct me if otherwise).
Here's where this enhancement currently stands:
- KEP status is marked as implementable for latest-milestone: 1.27

For this KEP, we would need to update the following:
- kep.yaml: the file's information says v1.27 is for alpha, not beta - if the opposite is true then no change would be needed in the file.
- Graduation criteria for beta, present in the latest template.

The status of this enhancement is marked as at risk. Please keep the issue description up-to-date with appropriate stages as well. Thank you!
@pacoxu @mrunalp @xiaoxubeii any updates here? Time is running very short; please get a KEP PR up soon resolving the issues described above, if you still want to hit beta in 1.27.
@johnbelamaric We improved the design, but we think the feature should still be alpha in v1.27.
Thanks @johnbelamaric. We do have a PR with the content you mentioned - https://github.com/kubernetes/enhancements/pull/3818.
Is there anything missing from that? And, as @pacoxu said, we aim to keep this in alpha for 1.27.
I'll take a look, I didn't see it before, sorry.
Ok, since you are staying in alpha and there are no material changes to the PRR content, no new review or approval is needed.
Hi @xiaoxubeii @pacoxu , any updates on the above? I'm doing my best to move things into tracked. Thanks!
- PRR approved:
- Implementable, stage, and latest milestone are set:
- KEP readme needs to be updated to the latest template:
@fsmunoz this one is good to go now
Thank you @SergeyKanzhelev. KEP status is marked as implementable for latest-milestone: 1.27.
This enhancement is ready to be tracked for graduation to ~~beta~~ alpha in v1.27.
/label tracked/yes
@SergeyKanzhelev @fsmunoz It was decided to keep this at alpha in v1.27, but with several updates. Please note that :)
/stage alpha
https://github.com/kubernetes/kubernetes/pull/115371 is submitted to sync the code with the KEP update in v1.27.
/assign
https://github.com/kubernetes/enhancements/pull/3818#issuecomment-1421729105
Another thought, based on the kernel feature "Introduce a memcg interface to trigger memory reclaim on a memory cgroup".
Hi @xiaoxubeii 👋,
Checking in as we approach 1.27 code freeze at 17:00 PDT on Tuesday 14th March 2023.
Please ensure the following items are completed:
For this enhancement, it looks like all linked PRs are merged.
Please let me know what other PRs in k/k I should be tracking for this KEP.
As always, we are here to help should questions come up. Thanks!
@xiaoxubeii the PR in the description looks like an old Docs PR from 1.22: https://github.com/kubernetes/website/pull/28566. Is there a newer PR for 1.27? Yesterday was the PRs ready for review deadline. If there's not a PR for 1.27, please create and populate one as soon as possible.
Enhancement Description

- Alpha
  - KEP (k/enhancements) update PR(s): #2571
  - Code (k/k) update PR(s):
  - Docs (k/website) update PR(s):
- Beta
  - KEP (k/enhancements) update PR(s):
  - Code (k/k) update PR(s):
  - Docs (k/website) update PR(s):