Closed mtaufen closed 1 year ago
/remove-lifecycle stale
@mtaufen are you still considering to pick this up again?
+1 for this. Is anyone still working on this?
I don't currently have time to work on it. But I'm happy to consult if someone wants to move it forward.
The approach has a lot of sharp edges right now, because it just exposes the Kubelet's whole config, and many of those fields don't make sense to be dynamic. That should probably be discussed before moving it forward to GA.
So https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ this document actually does not work?
@aisensiy It works fine for me.
My fault. I did not set the dynamic-config-dir argument.
Hi @mtaufen
Enhancements Lead here. Any plans for this in 1.20?
Thanks, Kirsten
Not that I know of for 1.20. SIG-Node has been discussing deprecating this feature. @derekwaynecarr do you have a proposed roadmap for that work?
Will we have an alternative feature, since we are using this feature now? @mtaufen
We don't have a plan for a native alternative that I am aware of; after deprecation it would be delegated to platform providers. ComponentConfig will continue to be available. Derek may have more details, can you post your use-case here so we have it documented and take it into consideration?
@mtaufen our k8s cluster will have several node pools (a node pool is a batch of nodes which have the same configs), and each node pool will have its own kubelet dynamic config ConfigMap. When we want to change the kubelet config (such as system-reserved, kube-reserved, eviction-hard) of a node pool, we just change the ConfigMap of the ComponentConfig. That is convenient.
But without the kubelet dynamic config feature, we will have to update the kubelet config file according to the ConfigMap on each node of the node pool.
So I think we'd better have a native alternative feature before we deprecate this one.
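For context on the mechanism being discussed: with dynamic kubelet config, each Node in a pool points its spec.configSource at the pool's ConfigMap, roughly like the following sketch (the ConfigMap name, namespace, and node name are illustrative, not taken from any real cluster):

```yaml
# Illustrative sketch of per-node-pool dynamic kubelet config.
apiVersion: v1
kind: Node
metadata:
  name: pool-a-node-1
spec:
  configSource:
    configMap:
      name: pool-a-kubelet-config   # ConfigMap holding the KubeletConfiguration
      namespace: kube-system
      kubeletConfigKey: kubelet     # key within the ConfigMap containing the config
```

Changing the pool's ConfigMap then rolls the new config out to every node referencing it, which is the convenience being lost with the deprecation.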
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen
We want to start deprecation of this feature in this release. Digging into the KEP and corresponding docs...
Enhancement issues opened in kubernetes/enhancements should never be marked as frozen.
Enhancement Owners can ensure that enhancements stay fresh by consistently updating their states across release cycles.
/remove-lifecycle frozen
/remove-priority awaiting-more-evidence
/assign @SergeyKanzhelev
/milestone v1.22
/remove-kind feature
/kind deprecation
As a follow-up to: https://github.com/kubernetes/kubernetes/issues/100799#issuecomment-837023528
@ehashman @SergeyKanzhelev how do you plan on leveraging the release process for this deprecation in 1.22? Will you be announcing deprecation via docs/release notes/kubefeatures.go but leaving the full deprecation to another time (1.23)?
@kikisdeliveryservice it's up to Sergey, he is going to do the writeup for how to deprecate/KEP update.
With https://github.com/kubernetes/enhancements/pull/2717 merged this enhancement is all set for enhancements freeze.
Yep, sorry forgot to mark another PR that actually declares deprecation: https://github.com/kubernetes/enhancements/pull/2735
Hello @SergeyKanzhelev :wave:, 1.22 Docs release lead here.
This enhancement is marked as ‘Needs Docs’ for 1.22 release.
Please follow the steps detailed in the documentation to open a PR against dev-1.22 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Fri July 9, 11:59 PM PDT. Also, take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.
Should we say "GA" or "deprecation" for dynamic kubelet config?
Hi @SergeyKanzhelev 👋🏽 Supriya here, 1.22 Enhancements Shadow. Code Freeze is on July 8th, 2021 at EOD PST. All implementation PRs must be code complete and merged before the deadline. If PRs are not merged by code freeze, they will be removed from the 1.22 milestone.
I am currently tracking the open k/k PR for this KEP. Are there any additional PRs that are not linked or I should be tracking?
Please keep us in the loop if anything changes. Thank you 🙏🏽
Hi @SergeyKanzhelev, just a reminder that we are one week away from the code freeze (July 8th, 2021), and I see that the PR is yet to be merged; due to this, the issue will still be marked as at risk for the 1.22 release. Also, the Doc Placeholder PR deadline is on July 9th, 2021.
Hi @SergeyKanzhelev, I just wanted to send a reminder that we have just 1 more day to get all the remaining PRs merged before the code freeze deadline tomorrow on Thursday, July 8th at 18:00 Pacific Time.
Hello, is there an alternative proposed for the DynamicKubeletConfig flag, or is this feature getting dropped completely, meaning kubelet configs should be managed on disk on each node?
Also interested, as I thought that adapting the DynamicKubeletConfig feature to allow updating the kubelet config without a process restart would be an option to explore (e.g. for updating certain kubelet configs at runtime, such as the kube-reserved values of a running kubelet process, without needing a restart).
@SergeyKanzhelev
Is there an alternative proposed for DynamicKubeletConfig? My colleague is working on multi-clusters, and he wants to align the kubelet config in cluster view.
- Without this feature, he has to exec kubelet restart on worker nodes.
- With this feature, it can be done by just distributing a config yaml file.
There is no built-in alternative that will do a similar thing. kubelet needs to be restarted to pick up the new configuration. And file distribution needs to be implemented in some hosting-specific way.
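To make the hosting-specific part concrete, here is a rough dry-run sketch of what that file distribution could look like. The hostnames, paths, ssh access, and a systemd-managed kubelet are all assumptions about the environment; it prints the commands it would run rather than executing them:

```shell
# Illustrative only: "distribute a config file and restart the kubelet",
# since there is no built-in reload. Prints commands (dry run); real usage
# would execute scp/ssh directly instead of echoing them.
push_kubelet_config() {
  local config="$1"; shift
  for node in "$@"; do
    echo scp "$config" "root@${node}:/var/lib/kubelet/config.yaml"
    echo ssh "root@${node}" systemctl restart kubelet
  done
}

push_kubelet_config kubelet-config.yaml node-1 node-2
```

In practice this loop is usually replaced by whatever the platform already provides: Ansible, a node-agent DaemonSet, or the cloud provider's node-pool tooling.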
My current thoughts about the pros and cons of this feature: it saves you from running systemctl restart kubelet on each node, but there may be risks. In short, the cons outweigh the pros.
I wonder how to unconfigure a cluster that has been configured with this feature.
I've removed the ConfigMap and the parameter spec.configSource.configMap from the node configuration. I've restarted the kubelet service (more than once), but in the node configuration there are still traces of all the old configurations in the status part.
config:
active:
configMap:
kubeletConfigKey: kubelet
name: my-node-1-config
namespace: kube-system
resourceVersion: "11250"
uid: b8f94378-53d5-444f-9987-626ee2acdb53
So how can we clean this up?
Have you tried removing the --dynamic-config-dir flag from kubelet on the worker nodes?
@josh-ferrell-vmw Yes, I removed the --dynamic-config-dir flag from kubelet on the worker nodes.
@waldo2188 were you able to figure it out?
@SergeyKanzhelev No, I haven't found a solution yet.
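For anyone else hitting this, a rough cleanup sketch combining the steps discussed above (untested here; the node name, ssh access, and a systemd-managed kubelet are assumptions about the environment):

```shell
# Illustrative sketch: clear the dynamic config source on the node object,
# then restart the kubelet without --dynamic-config-dir so it falls back
# to its local config file. Status fields are owned by the kubelet and
# should be re-reported after it restarts.
kubectl patch node my-node-1 -p '{"spec":{"configSource":null}}'
ssh root@my-node-1 'systemctl restart kubelet'
```

If status.config still shows stale entries after this, that may simply be the kubelet's last-known-good record rather than the active configuration.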
https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ should be removed in 1.24 if this feature is removed.
Hi @SergeyKanzhelev! 1.24 Enhancements team here. Just checking in as we approach enhancements freeze at 18:00 PT on Thursday, Feb 3rd. This enhancement is targeting removal for 1.24, is that correct?
Here’s where this enhancement currently stands:
The status of this enhancement is tracked as at risk. Please update this issue description as well.
Thanks!
Hi @SergeyKanzhelev 👋 1.24 Enhancements Team here. Reaching out as we're less than a week away from Enhancement Freeze on Thursday, February 3rd.
There's no update for this enhancement since last checkin, let me know if I missed anything.
Current status is at risk
Catching up on this. @gracenng this KEP is targeting 1.24 for the removal of the functionality. Does it still require anything?
Hi @SergeyKanzhelev ,
Looks like deprecations and removals do not need a PRR, so I checked that box off. There's a TODO in your Graduation Criteria you need to address, as well as updating the latest-milestone in the KEP to 1.24.
The Enhancements Freeze is now in effect and this enhancement is removed from the release. Please feel free to file an exception.
/milestone clear
@gracenng can you please help me understand what was missing here? latest-milestone was updated in this PR: https://github.com/kubernetes/enhancements/pull/3208 All approvals were received. Is there anything else missing?
Hi @SergeyKanzhelev, I didn't see it merged by the freeze time yesterday. Will get back to you if this needs an exception.
Thank you! Can you please re-apply the milestone?
No exception required, status is now tracked
Hi @SergeyKanzhelev :wave: 1.24 Docs lead here.
This enhancement is marked as Needs Docs for the 1.24 release.
Please follow the steps detailed in the documentation to open a PR against the dev-1.24 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday, March 31st, 2022 @ 18:00 PDT.
Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.
Thanks!
Dynamic Kubelet Configuration
- Node.Spec.ConfigSource
- https://github.com/kubernetes/kubernetes/pull/60100 (pkg/kubelet/kubeletconfig/util/files/files.go)
- --dynamic-config-dir, other docs updates for 1.11