Closed: neolit123 closed this issue 2 months ago.
first 1.24 PR is here: https://github.com/kubernetes/kubernetes/pull/106973
I may take the "change autodetection to not mix containerd and docker sockets for docker 18.xx+" task.
/assign
I will send PRs for all the 1.24 changes as these are a bit tricky. But reviews will be appreciated.
second PR that changes the kubeadm defaults / auto detection is here: https://github.com/kubernetes/kubernetes/pull/107317
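For context on what "autodetection" means here, a minimal sketch in Go; the endpoint list and the detectCRISocket helper are illustrative, not kubeadm's actual code. The idea is to probe the well-known CRI sockets and refuse to pick one automatically when more than one runtime answers:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// knownEndpoints mirrors the idea of well-known CRI socket paths; the exact
// list is illustrative, not kubeadm's actual defaults.
var knownEndpoints = []string{
	"/var/run/containerd/containerd.sock",
	"/var/run/crio/crio.sock",
	"/var/run/cri-dockerd.sock",
}

// detectCRISocket dials each known socket and only auto-selects one when
// exactly one container runtime answers, so containerd and docker sockets
// are never silently mixed.
func detectCRISocket() (string, error) {
	var found []string
	for _, ep := range knownEndpoints {
		conn, err := net.DialTimeout("unix", ep, time.Second)
		if err != nil {
			continue // no runtime listening on this socket
		}
		conn.Close()
		found = append(found, ep)
	}
	switch len(found) {
	case 0:
		return "", fmt.Errorf("no supported container runtime detected")
	case 1:
		return "unix://" + found[0], nil
	default:
		return "", fmt.Errorf("multiple runtimes detected (%v): specify one explicitly", found)
	}
}

func main() {
	fmt.Println(detectCRISocket())
}
```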
tasks are completed for 1.24, moving to 1.25 milestone.
1.25 cleanup PR is here: https://github.com/kubernetes/kubernetes/pull/110022
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
https://github.com/kubernetes/kubernetes/issues/106893#issuecomment-1465875895
the pod-infra-container-image removal will be postponed to v1.28 or later.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
so, one issue is that since we now support a kubelet skew of N-3, we can't remove the --pod-infra-container-image flag from kubeadm deployments until the 1.29 kubelet goes out of support (if my math is correct). EDIT: or 1.30 if the kubelet removes the flag in 1.30...

up until now we waited 2 releases before removing a flag, because the skew was N-1, but N-3 changes this.

so we either need to modify kubeadm upgrade to wait for the kubelet upgrade, check the version and then perform the flag cleanup (a behavior change), or ask the kubelet maintainers to delay the removal of --pod-infra-container-image until 1.29 goes out of support.

@liggitt this is an example of the complexity at hand for N-3. cc @SataQiu @pacoxu @afbjorklund
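To make the version math concrete, here is a hedged sketch in Go; the helper name and the 1.29 cutoff encode the assumptions in the comment above, not settled policy:

```go
package main

import "fmt"

// assumption from this thread: the kubelet treats the flag as a no-op from 1.29
const flagNoOpSinceMinor = 29

// canDropPodInfraFlag is a hypothetical helper illustrating the N-3 skew math:
// kubeadm at minor N must keep writing --pod-infra-container-image until the
// oldest kubelet it supports (N-3) already treats the flag as a no-op.
func canDropPodInfraFlag(kubeadmMinor int) bool {
	oldestSupportedKubelet := kubeadmMinor - 3 // N-3 kubelet skew policy
	return oldestSupportedKubelet >= flagNoOpSinceMinor
}

func main() {
	fmt.Println(canDropPodInfraFlag(31)) // false: kubeadm 1.31 still supports kubelet 1.28, which needs the flag
	fmt.Println(canDropPodInfraFlag(32)) // true: the oldest supported kubelet is 1.29
}
```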
x-posting a note here: https://github.com/kubernetes/kubernetes/issues/106893#issuecomment-1761481690
> - kubelet on workers is upgraded out of band and we don't know what version the user will choose. https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/ if they choose 1.29 we could remove the flag from the systemd file, but if the kubelet is older we cannot.

having some way to indicate the version kubeadm should make a config for seems like it could help simplify this
> - kubelet on workers is upgraded out of band and we don't know what version the user will choose. https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/ if they choose 1.29 we could remove the flag from the systemd file, but if the kubelet is older we cannot.
>
> having some way to indicate the version kubeadm should make a config for seems like it could help simplify this

agreed, and it seems like an oversight that kubeadm upgrade is not wrapping this kubelet upgrade, checking what version of kubelet the user installs, and then performing some action.
one quick fix might be to call "kubeadm upgrade ..." another time after the user upgrades a kubelet on a node. for control plane nodes this will be a bit slow.
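A rough sketch of what that second pass could gate on; parsing kubelet --version like this is an illustration under stated assumptions, not kubeadm's actual upgrade code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"

	"golang.org/x/mod/semver"
)

// installedKubeletVersion is a hypothetical helper: it asks the locally
// installed kubelet binary for its version; the output looks like
// "Kubernetes v1.29.0".
func installedKubeletVersion() (string, error) {
	out, err := exec.Command("kubelet", "--version").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out))
	if len(fields) < 2 {
		return "", fmt.Errorf("unexpected kubelet --version output: %q", out)
	}
	return fields[1], nil // e.g. "v1.29.0"
}

func main() {
	v, err := installedKubeletVersion()
	if err != nil {
		fmt.Println("cannot detect kubelet:", err)
		return
	}
	// only clean up the flag once this node's kubelet treats it as a no-op
	if semver.IsValid(v) && semver.Compare(v, "v1.29.0") >= 0 {
		fmt.Println("safe to drop --pod-infra-container-image from the kubelet drop-in")
	}
}
```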
what i don't like is the deprecation progression of the --pod-infra-container-image flag in the kubelet and i think it is messy.
that doesn't seem right, and will force all tools on top of the kubelet to have version branching around the k8s support skew. if they care about the GC problem, that is...
https://github.com/kubernetes/kubernetes/pull/118544/files#r1359721938
> what i don't like is the deprecation progression of the --pod-infra-container-image flag in the kubelet and i think it is messy.
> - the flag was needed to pin the pause image and prevent GC
> - in 1.29 the flag is no-op-ed
> - in 1.30 it is planned for removal
Making it easy for tools to keep setting the no-op flag until the release where it was needed hits EOL would be nice, especially if it is ~zero cost for sig-node. I'd ask them about that.
Hi @liggitt, any update on this?
kubeadm and scripts are still using this flag in 1.29. Can we remove this flag from kubelet in 1.30?
```go
// from the kubelet's command setup (cmd/kubelet/app): the flag still parses,
// but it is marked deprecated and only logs a warning when set
if cleanFlagSet.Changed("pod-infra-container-image") {
	klog.InfoS("--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime")
	_ = cmd.Flags().MarkDeprecated("pod-infra-container-image", "--pod-infra-container-image will be removed in 1.30. Image garbage collector will get sandbox image information from CRI.")
}
```
Or should we update the deprecation message with a different release?
In what release can we count on CRI having the info required so --pod-infra-container-image no longer needs to be passed?
> in 1.29 the flag is no-op-ed

If the flag is no-op and cheap for node to keep around, I'd ask if they can keep it until 1.29 is the oldest release.

Separately, kubeadm knowing what kubelet version it is generating a config for seems like a really important gap to close, and would let kubeadm stop setting this flag for >=1.29 kubelets.
> In what release can we count on CRI having the info required so --pod-infra-container-image no longer needs to be passed?
@adisky do you know? the problem here is the variance in CRI implementor schedules - e.g. if cri-o implements the latest CRI that has the feature, users would need to update cri-o; updating the kubelet will not suffice.
> in 1.29 the flag is no-op-ed
>
> If the flag is no-op and cheap for node to keep around, I'd ask if they can keep it until 1.29 is the oldest release.

+1, but it seems that if the flag is no-op-ed, users would need to upgrade their container runtime as well; otherwise there will be nothing to handle the GC problem (CR is old, kubelet is no-op).
> Separately, kubeadm knowing what kubelet version it is generating a config for seems like a really important gap to close, and would let kubeadm stop setting this flag for >=1.29 kubelets.
yes, that is the upgrade problem discussed earlier.
https://github.com/kubernetes/kubeadm/issues/2626#issuecomment-1763244783
currently the kubelet binary upgrade is done after kubeadm upgrade and is done manually by the user.
it needs discussion - we need to figure out what to do with this.
https://github.com/kubernetes/kubernetes/issues/106893#issuecomment-1867197876 has some updates about the pinned image feature on the containerd side:

On the containerd side, containerd v1.7.3 and v1.6.22 include support for pinned images:
- Pinned image support (https://github.com/containerd/containerd/pull/8720)
- containerd 1.6.25+ and 1.7.10+ fixed a bug: cri: fix using the pinned label to pin image (https://github.com/containerd/containerd/pull/9381)

However, there are still two bugfix pull requests in progress.

EDITED to add context: the two WIP PRs are not blockers, and https://github.com/containerd/containerd/pull/9381 is not a blocker either (without it, somewhat more images than expected are pinned).

To use that feature, we should use at least containerd v1.7.3+ (or v1.6.22+ on the 1.6 branch).
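A small CRI client sketch for verifying this on a node: it lists images over the CRI and prints the ones the runtime reports as pinned. The socket path is an assumption, and this needs one of the containerd versions above for the sandbox image to show up as pinned:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// assumption: containerd's CRI socket at its conventional path
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ListImages(ctx, &runtimeapi.ListImagesRequest{})
	if err != nil {
		panic(err)
	}
	// the CRI Image message carries a Pinned field; pinned images (like the
	// sandbox/pause image) are exempt from kubelet image garbage collection
	for _, img := range resp.Images {
		if img.Pinned {
			fmt.Println("pinned:", img.RepoTags)
		}
	}
}
```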
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
> cleanup remaining flags: pod-infra-container-image .. ? https://github.com/kubernetes/kubernetes/issues/106893

scoped this item in a dedicated ticket:
all other action items are done here.
dockershim was removed from the kubelet in early 1.24 development. related flags are also being removed.
history here: https://github.com/kubernetes/kubeadm/issues/1412
a few tasks need to be performed to adapt kubeadm for 1.24 and later.

1.24:
- unix:///var/run/cri-dockerd.sock as the default docker socket? https://github.com/kubernetes/kubernetes/pull/107317

1.25: in kubeadm 1.25 kubelet 1.23 will go out of support, because kubeadm 1.25 would only support kubelet 1.25 and 1.24:
- --container-runtime=remote https://github.com/kubernetes/kubernetes/pull/110047

1.26:

1.27:
- cleanup remaining flags: pod-infra-container-image .. ? https://github.com/kubernetes/kubernetes/issues/106893