karmada-io / karmada

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
https://karmada.io
Apache License 2.0
4.48k stars 888 forks

'kubectl rollout restart' on member cluster doesn't restart the Deployment #5120

Open Patrick0308 opened 4 months ago

Patrick0308 commented 4 months ago

What happened: `kubectl rollout restart deploy xxxx` does not work as expected. The new pods are terminated shortly after they are created.


What you expected to happen: The Deployment restarts successfully.

How to reproduce it (as minimally and precisely as possible): kubectl rollout restart deploy xxxx

Anything else we need to know?:

Environment:

XiShanYongYe-Chang commented 4 months ago

Hi @Patrick0308, the rollout restart of a Deployment relies on the deployment-controller creating a new ReplicaSet, but the deployment-controller is not enabled on the Karmada control plane, hence no ReplicaSet will be created.

Additionally, I would like to know where you are querying the Pod resources from.

Patrick0308 commented 4 months ago

@XiShanYongYe-Chang I ran `kubectl rollout restart` for the Deployment on the member cluster.

XiShanYongYe-Chang commented 4 months ago

Thanks for explaining!

There may be a problem here: you can check whether the changes made to the Deployment on the member cluster are being overwritten by the Karmada control plane.

Patrick0308 commented 4 months ago

`kubectl rollout restart` restarts a Deployment by adding the `kubectl.kubernetes.io/restartedAt` annotation to the pod template. So I created this ResourceInterpreterCustomization to work around the issue temporarily:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-restart
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          if desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] == nil then
            desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = observedObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"]
          end
          return desiredObj
        end
```

Maybe we should revise the default ResourceInterpreterCustomization?

XiShanYongYe-Chang commented 4 months ago

I'm not sure it's appropriate to add that logic to the default interpreter.

Let's ask more people to give their opinions.

/cc @RainbowMango @chaunceyjiang @whitewindmills

Patrick0308 commented 4 months ago

A related question: should we support something like the server-side apply feature, so that other controllers or clients are allowed to modify resources?

XiShanYongYe-Chang commented 4 months ago

The retain resource interpreter is designed to leave this choice up to the user.

whitewindmills commented 4 months ago

> Maybe we should revise the default ResourceInterpreterCustomization?

First, I agree with supporting this logic in the default interpreter, because the administrator has not made any changes to `.spec`, and Karmada has no reason to prevent the administrator from using the `kubectl rollout restart deployments ...` command to restart the Deployment. In addition, I'm not sure whether there are other similar situations that we could solve at the same time.

> The related question is that should we support like server side apply feature to allow other controller or client to modify?

Yes, I think we already support that: the retain interpreter.

XiShanYongYe-Chang commented 4 months ago

> cause the administrator has not made any changes to .spec,

In fact, `kubectl rollout restart` does update the Deployment template content; refer to https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125. That's why the Karmada control plane overwrites it.

I'm wondering if this scenario needs to be handled by the default resource interpreter.

Hi @Patrick0308, do you use this a lot in production?

Patrick0308 commented 4 months ago

> Do you use this a lot in production?

I think an administrator sometimes wants to restart the Deployment in only one cluster, for example when a Secret mounted by the Deployment in one member cluster is updated by secret management.

The retain interpreter in https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125 is not the best way to solve this issue. Please see PR #5128, which lets you run the `kubectl rollout restart` command wherever you like.

XiShanYongYe-Chang commented 4 months ago

Thanks for your user story; I think it represents a meaningful use case.

> The retain interceptor of https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125 is not a best way to solve this issue.

Is it inconvenient to use custom interpreters? I'd like to know from the user's point of view, because we designed them so that users can extend Karmada more easily.

Patrick0308 commented 4 months ago

> The retain interceptor of https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125 is not a best way to solve this issue.

```lua
function Retain(desiredObj, observedObj)
  if desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] == nil then
    desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = observedObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"]
  end
  return desiredObj
end
```

I mean this code makes `kubectl rollout restart deploy xxx` on the Karmada control plane stop working. It's not the best way.

XiShanYongYe-Chang commented 4 months ago

Oh, the logic of this script is not correct. It should keep the latest value from the member cluster, just as you described in #5128.
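For illustration, a retention script could compare the two timestamps and keep whichever restart is more recent. This is only an untested sketch, not the approach taken in #5128; it assumes both values, when present, are RFC 3339 timestamps, which compare correctly as plain strings:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-restart
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          local key = "kubectl.kubernetes.io/restartedAt"
          local desired = desiredObj.spec.template.metadata.annotations or {}
          local observed = observedObj.spec.template.metadata.annotations or {}
          if observed[key] ~= nil and (desired[key] == nil or observed[key] > desired[key]) then
            desired[key] = observed[key]
            desiredObj.spec.template.metadata.annotations = desired
          end
          return desiredObj
        end
```

With this, a restart triggered on either side wins only if its timestamp is newer, so neither the control plane nor the member cluster can silently revert the other's restart.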

RainbowMango commented 4 months ago

A question: what if both the Karmada side and the member cluster have the annotation `kubectl.kubernetes.io/restartedAt`? I mean, a user performs `kubectl rollout restart` on both Karmada and the member cluster.

chaunceyjiang commented 4 months ago

Should `deployment.kubernetes.io/revision` also be retained?
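If so, it could be retained the same way. A hypothetical, untested sketch (the `deployment-retain-revision` name is invented here; note this annotation is set by the member cluster's deployment-controller on the Deployment's own metadata, not on the pod template):

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-retain-revision
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          local observed = observedObj.metadata.annotations or {}
          if observed["deployment.kubernetes.io/revision"] ~= nil then
            desiredObj.metadata.annotations = desiredObj.metadata.annotations or {}
            desiredObj.metadata.annotations["deployment.kubernetes.io/revision"] = observed["deployment.kubernetes.io/revision"]
          end
          return desiredObj
        end
```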