kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

[FG:InPlacePodVerticalScaling] Slow reconcile when quickly reverting resize patch #125205

Closed hjet closed 2 days ago

hjet commented 5 months ago

What happened?

While working on https://github.com/kubernetes/kubernetes/pull/125202 and testing the second case (patching a pod to perform an in-place resize and then quickly reverting the patch before the resize has been actuated), I've discovered some unexpected behavior. In this case the pod eventually reconciles but it takes about 3 minutes, with the following test output:

[sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling] Burstable QoS pod, three containers - no change for c1, increase c2 resources, decrease c3 (net decrease for pod) [sig-node, Feature:InPlacePodVerticalScaling]
k8s.io/kubernetes/test/e2e/node/pod_resize.go:1281
  STEP: Creating a kubernetes client @ 05/29/24 18:13:28.391
  I0529 18:13:28.391963 3291 util.go:499] >>> kubeConfig: /root/kind-test-config
  I0529 18:13:28.393961 3291 util.go:508] >>> kubeContext: kind-kind
  STEP: Building a namespace api object, basename pod-resize @ 05/29/24 18:13:28.394
  STEP: Waiting for a default service account to be provisioned in namespace @ 05/29/24 18:13:28.404
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 05/29/24 18:13:28.406
  STEP: Creating a kubernetes client @ 05/29/24 18:13:28.408
  I0529 18:13:28.408474 3291 util.go:499] >>> kubeConfig: /root/kind-test-config
  I0529 18:13:28.409838 3291 util.go:508] >>> kubeContext: kind-kind
  STEP: Building a namespace api object, basename pod-resize-resource-quota @ 05/29/24 18:13:28.41
  STEP: Waiting for a default service account to be provisioned in namespace @ 05/29/24 18:13:28.416
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 05/29/24 18:13:28.417
  STEP: Creating a kubernetes client @ 05/29/24 18:13:28.419
  I0529 18:13:28.419504 3291 util.go:499] >>> kubeConfig: /root/kind-test-config
  I0529 18:13:28.420717 3291 util.go:508] >>> kubeContext: kind-kind
  STEP: Building a namespace api object, basename pod-resize-errors @ 05/29/24 18:13:28.42
  STEP: Waiting for a default service account to be provisioned in namespace @ 05/29/24 18:13:28.427
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 05/29/24 18:13:28.428
  STEP: creating pod @ 05/29/24 18:13:28.429
  STEP: verifying the pod is in kubernetes @ 05/29/24 18:13:30.448
  STEP: verifying initial pod resources, allocations, and policy are as expected @ 05/29/24 18:13:30.453
  STEP: verifying initial pod status resources and cgroup config are as expected @ 05/29/24 18:13:30.453
  STEP: patching pod for resize @ 05/29/24 18:13:30.457
  STEP: verifying pod patched for resize @ 05/29/24 18:13:30.47
  STEP: patching pod for rollback @ 05/29/24 18:13:30.516
  STEP: verifying pod patched for rollback @ 05/29/24 18:13:30.529
  STEP: waiting for rollback to be actuated @ 05/29/24 18:13:30.529
  I0529 18:13:30.534186 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod -- ls /sys/fs/cgroup/cgroup.controllers'
  I0529 18:13:30.679258 3291 builder.go:146] stderr: "Defaulted container \"c1\" out of: c1, c2, c3\n"
  I0529 18:13:30.679299 3291 builder.go:147] stdout: "/sys/fs/cgroup/cgroup.controllers\n"
  I0529 18:13:30.679331 3291 pod_resize.go:379] Namespace pod-resize-4426 Pod testpod Container c1 - looking for cgroup value 209715200 in path /sys/fs/cgroup/memory.max
  I0529 18:13:30.679379 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod --namespace=pod-resize-4426 --container=c1 -- head -n 1 /sys/fs/cgroup/memory.max'
  I0529 18:13:30.819936 3291 builder.go:146] stderr: ""
  I0529 18:13:30.820015 3291 builder.go:147] stdout: "209715200\n"
  I0529 18:13:30.820050 3291 pod_resize.go:379] Namespace pod-resize-4426 Pod testpod Container c1 - looking for cgroup value 20000 100000 in path /sys/fs/cgroup/cpu.max
  I0529 18:13:30.820117 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod --namespace=pod-resize-4426 --container=c1 -- head -n 1 /sys/fs/cgroup/cpu.max'
  I0529 18:13:30.907604 3291 builder.go:146] stderr: ""
  I0529 18:13:30.907650 3291 builder.go:147] stdout: "20000 100000\n"
  I0529 18:13:30.907669 3291 pod_resize.go:379] Namespace pod-resize-4426 Pod testpod Container c1 - looking for cgroup value 4 in path /sys/fs/cgroup/cpu.weight
  I0529 18:13:30.907714 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod --namespace=pod-resize-4426 --container=c1 -- head -n 1 /sys/fs/cgroup/cpu.weight'
  I0529 18:13:30.993819 3291 builder.go:146] stderr: ""
  I0529 18:13:30.993869 3291 builder.go:147] stdout: "4\n"
  I0529 18:13:30.993895 3291 pod_resize.go:379] Namespace pod-resize-4426 Pod testpod Container c2 - looking for cgroup value 314572800 in path /sys/fs/cgroup/memory.max
  I0529 18:13:30.993951 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod --namespace=pod-resize-4426 --container=c2 -- head -n 1 /sys/fs/cgroup/memory.max'
  I0529 18:13:31.078830 3291 builder.go:146] stderr: ""
  I0529 18:13:31.078876 3291 builder.go:147] stdout: "367001600\n"
  I0529 18:13:33.080354 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod --namespace=pod-resize-4426 --container=c2 -- head -n 1 /sys/fs/cgroup/memory.max'
  I0529 18:13:33.205904 3291 builder.go:146] stderr: ""
  I0529 18:13:33.205947 3291 builder.go:147] stdout: "367001600\n"
  I0529 18:13:35.207569 3291 builder.go:121] Running '/usr/local/bin/kubectl --server=https://127.0.0.1:60962 --kubeconfig=/root/kind-test-config --context=kind-kind --namespace=pod-resize-4426 exec testpod --namespace=pod-resize-4426 --container=c2 -- head -n 1 /sys/fs/cgroup/memory.max'
  I0529 18:13:35.335993 3291 builder.go:146] stderr: ""
  I0529 18:13:35.336037 3291 builder.go:147] stdout: "367001600\n"

The last couple of lines keep repeating until memory.max in c2 finally reads the "rolled back" value (about 3 minutes in my case). This also happens inconsistently; you may have to rerun the test case a couple of times to hit it. You can also run the full suite (-ginkgo.focus="Feature:InPlacePodVerticalScaling") to hit it instead of the specific case provided below.

What did you expect to happen?

After patching forwards and patching backwards, the pod should reach its initial state, and memory.max in c2 should be set to its initial value.
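
For reference, the check that the test keeps retrying can also be run by hand; a minimal sketch, using the namespace and the initial c2 memory limit taken from the test log above:

```console
# c2's original memory limit is 300Mi (314572800 bytes); after the rollback this
# value should be reported inside the container again.
$ kubectl exec testpod -n pod-resize-4426 -c c2 -- head -n 1 /sys/fs/cgroup/memory.max
314572800
```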

How can we reproduce it (as minimally and precisely as possible)?

Anything else we need to know?

This could also be an issue with my test changes, so please flag anything that seems suspicious. If you leave patchAndVerify uncommented (so patch -> wait for resize -> patch back -> wait for resize) and then run patchAndVerifyAborted (on the same case), you will hit a different bug, which I will file separately.

Kubernetes version

```console
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"31+", GitVersion:"v1.31.0-alpha.0.947+1ff1207d22ab5c-dirty", GitCommit:"1ff1207d22ab5cf442c8dafdf5bded1e32519873", GitTreeState:"dirty", BuildDate:"2024-05-28T19:28:33Z", GoVersion:"go1.22.3", Compiler:"gc", Platform:"linux/arm64"}
WARNING: version difference between client (1.25) and server (1.31) exceeds the supported minor version skew of +/-1
```

Cloud provider

None

OS version

```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
Linux docker-desktop 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 aarch64 GNU/Linux
```

Install tools

kind version 0.17.0

Container runtime (CRI) and version (if applicable)

```console
NAME                 STATUS   ROLES           AGE     VERSION                                    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   6h29m   v1.31.0-alpha.0.947+1ff1207d22ab5c-dirty   172.18.0.4    <none>        Ubuntu 22.04.1 LTS   5.15.49-linuxkit   containerd://1.6.9
kind-worker          Ready    <none>          6h29m   v1.31.0-alpha.0.947+1ff1207d22ab5c-dirty   172.18.0.2    <none>        Ubuntu 22.04.1 LTS   5.15.49-linuxkit   containerd://1.6.9
kind-worker2         Ready    <none>          6h29m   v1.31.0-alpha.0.947+1ff1207d22ab5c-dirty   172.18.0.3    <none>        Ubuntu 22.04.1 LTS   5.15.49-linuxkit   containerd://1.6.9
```

Related plugins (CNI, CSI, ...) and versions (if applicable)

hjet commented 5 months ago

/sig node

hshiina commented 5 months ago

I reproduced a similar issue locally. I will investigate.

/assign

hshiina commented 5 months ago

I reproduced this issue as follows:

  1. Enable the InPlacePodVerticalScaling feature gate.

  2. Create a pod which has a container whose resizePolicy is NotRequired and a container whose memory resizePolicy is RestartContainer:

    ```
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: resize-pod
      name: resize-pod
    spec:
      containers:
      - image: busybox
        name: c1
        command:
        - sh
        - -c
        - trap "exit 0" SIGTERM; while true; do sleep 1; done
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 200m
            memory: 200Mi
        resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
      - image: busybox
        name: c2
        command:
        - sh
        - -c
        - trap "exit 0" SIGTERM; while true; do sleep 1; done
        resources:
          requests:
            cpu: 200m
            memory: 200Mi
          limits:
            cpu: 300m
            memory: 300Mi
        resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: RestartContainer
      - image: busybox
        name: c3
        command:
        - sh
        - -c
        - trap "exit 0" SIGTERM; while true; do sleep 1; done
        resources:
          requests:
            cpu: 300m
            memory: 300Mi
          limits:
            cpu: 400m
            memory: 400Mi
        resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
      restartPolicy: Always
    ```
  3. After the pod gets started, post a patch to resize containers. Then, post a patch to roll back the resources before the first resize completes:

    kubectl patch pod resize-pod --patch '{"spec":{"containers":[{"name":"c1", "resources":{"requests":{"cpu":"50m","memory":"50Mi"},"limits":{"cpu":"150m","memory":"150Mi"}}},{"name":"c2", "resources":{"requests":{"cpu":"350m","memory":"350Mi"},"limits":{"cpu":"450m","memory":"450Mi"}}}]}}'   
    sleep 1 
    kubectl patch pod resize-pod --patch '{"spec":{"containers":[{"name":"c1", "resources":{"requests":{"cpu":"100m","memory":"100Mi"},"limits":{"cpu":"200m","memory":"200Mi"}}},{"name":"c2", "resources":{"requests":{"cpu":"200m","memory":"200Mi"},"limits":{"cpu":"300m","memory":"300Mi"}}}]}}'

Then, it takes about three minutes to complete the rollback.
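
One way to watch the delay while reproducing; this is just a sketch, and `status.resize` is the field exposed by the alpha InPlacePodVerticalScaling API, so adjust it if your version reports resize state differently:

```console
# Watch the resize status and per-container restart counts; with this bug the
# rollback for c2 only takes effect a few minutes after the second patch.
$ kubectl get pod resize-pod -w \
    -o custom-columns='NAME:.metadata.name,RESIZE:.status.resize,RESTARTS:.status.containerStatuses[*].restartCount'
```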

hshiina commented 5 months ago

This issue happened as follows:

  1. The first resize was requested. Then, kubelet started to resize both containers.
  2. While the first resize was in progress, the second resize (rollback) was requested.
  3. The first resize was done in kubelet.
  4. The Pod status on API was updated with the result of the first resize. c1 was resized and c2 was restarted with the new resources.
  5. kubelet started the second resize soon afterwards. However, kubelet started to resize only the container c1, whose resizePolicy is NotRequired. … Problem-1
  6. The pod status on the API regressed to the status from before the first resize. In c1, the resources looked rolled back. In c2, the container ID was the older one and the older container was reported as running. However, the resize status was InProgress. … Problem-2
  7. The second resize (rollback) for c1 was done in kubelet.
  8. The pod status on the API was not updated for more than one minute. This is a known issue (#123940) where nothing triggers a pod worker when a resize does not restart a container.
  9. Then, the pod status on API was updated. In c1, the second resize was completed. c2 looked resized by the first resize request.
  10. After more than one minute again, kubelet started to resize c2, restarting the container.
  11. Then, the pod status was updated. All container statuses were as expected.

This matches the test output in the issue description, where it took 2-3 minutes until c2 passed the cgroup value verification while c1 passed it almost immediately.

hshiina commented 5 months ago

Problem-1

I guess the second resize for c2 was skipped here: https://github.com/kubernetes/kubernetes/blob/f386b4cd4a879e8e7c4c255900a755bd0a61f8f0/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L565

apiContainerStatus was based on the API pod status when the second resize was requested. When it was requested, the old container was still running as c2. On the other hand, kubeContainerStatus was the latest status from the runtime. The new container was already running as c2. So, the container IDs were different.

I don’t think we should refer to API pod status, which may be outdated.
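
For anyone reproducing this, the divergence between the two views can also be observed from outside kubelet; a rough sketch (crictl has to be run on the node, for example via `docker exec` into the kind node, and the names follow the repro pod above):

```console
# Container ID of c2 as recorded in the API pod status:
$ kubectl get pod resize-pod \
    -o jsonpath='{.status.containerStatuses[?(@.name=="c2")].containerID}{"\n"}'
# Container ID of c2 as the container runtime currently sees it:
$ crictl ps --name c2 -o json | grep '"id"'
```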

hshiina commented 5 months ago

Problem-2

I guess the pod status degradation was caused in a similar way. The pod status was overwritten with the old status here: https://github.com/kubernetes/kubernetes/blob/790dfdbe386e4a115f41d38058c127d2dd0e6f44/pkg/kubelet/kubelet.go#L2863. updatedPod was created from an API pod whose container statuses were captured when the second resize request was posted, while the first resize was still running. Even though the first resize had already completed in kubelet, the pod status was overwritten with this older status.

I wonder if it would be better to pass apiPodStatus which is created with the latest runtime status to handlePodResourcesResize() as an additional argument: https://github.com/kubernetes/kubernetes/blob/790dfdbe386e4a115f41d38058c127d2dd0e6f44/pkg/kubelet/kubelet.go#L1763 https://github.com/kubernetes/kubernetes/blob/790dfdbe386e4a115f41d38058c127d2dd0e6f44/pkg/kubelet/kubelet.go#L1966

haircommander commented 5 months ago

/assign @esotsal

haircommander commented 5 months ago

/triage accepted

esotsal commented 4 months ago

Hi,

I believe we can define "quickly reverting" as the case where the second kubectl patch for c2 is sent before the c2 container has started. @hjet do you agree?

I'm investigating the issue more deeply now and looking at your comments, @hshiina. I will comment back after I have done more tests in my lab.

Below are the results using the pod spec shared by @hshiina.

Delayed reconcile (>1.5 minutes)

| Timestamp | SyncLoop |
| --- | --- |
| 11:19:35.417646 | UPDATE (1st patch) |
| 11:19:36.332750 | c2 ContainerDied |
| 11:19:36.471802 | UPDATE (2nd patch) |
| 11:19:37.337900 | c2 ContainerStarted |
| 11:22:02.588342 | c2 ContainerDied |
| 11:22:03.591893 | c2 ContainerStarted |

Normal reconcile (<2 seconds)

| Timestamp | SyncLoop |
| --- | --- |
| 14:17:00.630121 | UPDATE (1st patch) |
| 14:17:01.558517 | c2 ContainerDied |
| 14:17:02.561038 | c2 ContainerStarted |
| 14:17:08.881515 | UPDATE (2nd patch) |
| 14:17:09.575167 | c2 ContainerDied |
| 14:17:10.579782 | c2 ContainerStarted |

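The SyncLoop timestamps above come from the kubelet log; on a kind cluster one way to pull them is shown below (the node name is an assumption for this setup, and the entries only show up at sufficient kubelet log verbosity):

```console
# kind nodes run kubelet as a systemd unit inside the node container.
$ docker exec kind-worker journalctl -u kubelet | grep SyncLoop
```
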
esotsal commented 4 months ago

Thanks for the very good analysis, @hshiina.

I believe the root cause of the very long delay (varying between 1 and 3 minutes) is the backoff. I've pushed #125757 as an attempt to fix this; please note it will still wait for kubelet's next reconcile loop (the interval is 1 minute), but that can be linked to the known issue #123940. What do you think?

@hjet can you please check if #125757 fixes your issue? Please note you will still need to wait for kubelet's next reconcile loop in your test code.

@hshiina, reading your proposal, I am not sure whether it will still be needed once #123940 is fixed, or whether it should perhaps be discussed as part of #123940?

hshiina commented 4 months ago

I don't think the problems I raised in this issue are related to #123940. Even if #123940 is solved, I guess the delay at step 10 in https://github.com/kubernetes/kubernetes/issues/125205#issuecomment-2143530104 will remain.

I don't think this case should be affected by #123940 because the ResizePolicy of the container c2 is RestartContainer. Restarting a container causes PLEG events, so the resources in the container statuses should be updated immediately along with the restart.

I guess my problems are similar to #116970: which information should be referred to, the API or the runtime?

esotsal commented 4 months ago

Thanks for sharing, @hshiina. After reading the description of #116970, I added it to the SIG Node: In Place Pod Vertical Scaling project backlog, since it is stated to be a blocker for InPlacePodVerticalScaling moving to beta. /cc @tallclair @vinaykul

Until #116970 is resolved, what do you think about #125757, @hshiina? Is it worth it as a remedy for the delays caused by the backoff in this corner case? (i.e. not solving the 1 minute delay, but solving the cases that wait longer than kubelet's next reconcile loop because the backoff is not reset)

hshiina commented 4 months ago

I'm not sure yet whether #125757 is related to this issue. However, I think it is worth working on #125757 because restarting a container as part of a pod resize should not cause backoff. It might be better to create another issue for the backoff problem. This can be reproduced with the pod in https://github.com/kubernetes/kubernetes/issues/125205#issuecomment-2143527472.

 for l in `seq 301 310`; do kubectl patch pod resize-pod --patch "{\"spec\":{\"containers\":[{\"name\":\"c2\", \"resources\":{\"limits\":{\"memory\":\"${l}Mi\"}}}]}}"; sleep 3; done
$ kubectl get pod resize-pod -w
NAME         READY   STATUS    RESTARTS   AGE
resize-pod   3/3     Running   0          14s
resize-pod   3/3     Running   0          21s
resize-pod   3/3     Running   1 (1s ago)   22s
resize-pod   3/3     Running   1 (3s ago)   24s
resize-pod   3/3     Running   2 (2s ago)   26s
resize-pod   3/3     Running   2 (3s ago)   27s
resize-pod   2/3     CrashLoopBackOff   2 (1s ago)   29s
resize-pod   2/3     CrashLoopBackOff   2 (2s ago)   30s
resize-pod   2/3     CrashLoopBackOff   2 (5s ago)   33s
resize-pod   2/3     CrashLoopBackOff   2 (6s ago)   34s
resize-pod   2/3     CrashLoopBackOff   2 (8s ago)   36s
resize-pod   2/3     CrashLoopBackOff   2 (11s ago)   39s
resize-pod   3/3     Running            3 (13s ago)   41s
<snip>
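
The back-off is also visible in the pod's events while the patch loop above runs; a quick check (assuming default event recording):

```console
# BackOff events are recorded for the repeatedly restarted container.
$ kubectl get events --field-selector involvedObject.name=resize-pod,reason=BackOff
```
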
hshiina commented 4 months ago

By the way, the backoff issue above does not happen on v1.30 or older. Before #124220 was merged, the stable key used for the backoff changed whenever the pod resource spec changed:

https://github.com/kubernetes/kubernetes/blob/a38cde339a30691577e14e662084b873d734b5ba/pkg/kubelet/kuberuntime/helpers.go#L178-L179

esotsal commented 4 months ago

Thanks @hshiina , created https://github.com/kubernetes/kubernetes/issues/125843

hshiina commented 4 months ago

The problems I raised can be reproduced without resizing c1 (whose resizePolicy is NotRequired). In addition, the problems can be triggered by just resizing a container twice (not only by a rollback).

pod.yaml:

```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: resize-pod
  name: resize-pod
spec:
  containers:
  - image: busybox
    name: resize-container
    command:
    - sh
    - -c
    - trap "exit 0" SIGTERM; while true; do sleep 1; done
    resources:
      requests:
        cpu: 200m
        memory: 200Mi
      limits:
        cpu: 300m
        memory: 300Mi
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
  restartPolicy: Always
```

Operation:

kubectl patch pod resize-pod --patch '{"spec":{"containers":[{"name":"resize-container", "resources":{"limits":{"memory":"450Mi"}}}]}}'
sleep 1
kubectl patch pod resize-pod --patch '{"spec":{"containers":[{"name":"resize-container", "resources":{"limits":{"memory":"550Mi"}}}]}}'

Excluding the known issue #123940, the problem in this issue is: if a container whose memory resizePolicy is RestartContainer is resized twice in quick succession (within about one second), it takes one more minute until the second request is done.
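
A rough way to time when the second resize lands with the minimal repro above (550Mi corresponds to 576716800 bytes; the exec is re-issued on every iteration because the memory resize restarts the container):

```console
# Poll the container's effective memory limit until it reads the second value (550Mi).
$ while true; do
    date
    kubectl exec resize-pod -c resize-container -- head -n 1 /sys/fs/cgroup/memory.max
    sleep 5
  done
```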

hshiina commented 4 months ago

I have posted #125958 for the problems I mentioned.

AnishShah commented 3 days ago

> _(quoting hshiina's reproduction steps from earlier in this thread)_

@esotsal: With the resize subresource, we no longer allow resizing via a normal pod patch. We need to pass the --subresource=resize flag to kubectl patch. #128296 added support for the resize subresource in kubectl. You will need to build a new kubectl binary that contains #128296.

AnishShah commented 3 days ago

New Steps:

  1. rebase k8s repo to fetch commits from #128296
  2. `make WHAT="cmd/kubectl"` to create a new kubectl binary.
  3. Use `<path to the new kubectl binary> patch pod .... --subresource=resize` to resize pods

esotsal commented 3 days ago

> _(quoting @AnishShah's new steps above)_

Thanks @AnishShah, I will take a look at it.

esotsal commented 2 days ago

Worked fine, using the following commands:

./kubectl patch pod resize-pod --subresource='resize' -p '{"spec":{"containers":[{"name":"c1", "resources":{"requests":{"cpu":"50m","memory":"50Mi"},"limits":{"cpu":"150m","memory":"150Mi"}}},{"name":"c2", "resources":{"requests":{"cpu":"350m","memory":"350Mi"},"limits":{"cpu":"450m","memory":"450Mi"}}}]}}' 
sleep 1
./kubectl patch pod resize-pod --subresource='resize' -p '{"spec":{"containers":[{"name":"c1", "resources":{"requests":{"cpu":"100m","memory":"100Mi"},"limits":{"cpu":"200m","memory":"200Mi"}}},{"name":"c2", "resources":{"requests":{"cpu":"200m","memory":"200Mi"},"limits":{"cpu":"300m","memory":"300Mi"}}}]}}'
esotsal commented 2 days ago

/close

@hjet a screencast demonstrating that the issue is fixed now (following the steps to reproduce) can be found [here](https://github.com/kubernetes/kubernetes/pull/125757#issuecomment-2462698830).

k8s-ci-robot commented 2 days ago

@esotsal: Closing this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/125205#issuecomment-2463476541):

> /close
>
> @hjet screencast demonstrating that issue is fixed now (following steps to reproduce), can be found [here](https://github.com/kubernetes/kubernetes/pull/125757#issuecomment-2462698830)

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.