Patrick0308 opened this issue.
Hi @Patrick0308, the rollout restart of a Deployment relies on the creation of a new ReplicaSet, but in Karmada the deployment-controller is not enabled, so no ReplicaSet will be created on the control plane.
Additionally, I would like to know where you queried the Pod resources.
@XiShanYongYe-Chang I ran the rollout restart against the Deployment on a member cluster.
Thanks for explaining!
There may be a problem here: changes made to the Deployment on the member cluster may be overwritten by the Karmada control plane.
`kubectl rollout restart` works by adding the `kubectl.kubernetes.io/restartedAt` annotation to the pod template, so I created this ResourceInterpreterCustomization to work around the issue temporarily:
```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-restart
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          if desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] == nil then
            desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = observedObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"]
          end
          return desiredObj
        end
```
Maybe we should revise the default ResourceInterpreterCustomization?
I'm not sure it's appropriate to add that logic to the default interpreter. Let's ask more people for their opinions.
/cc @RainbowMango @chaunceyjiang @whitewindmills
A related question: should we support something like the server-side apply feature, so that other controllers or clients are allowed to modify resources?
The retain resource interpreter is designed to leave this choice up to the user.
> Maybe we should revise the default ResourceInterpreterCustomization?
First, I agree that this logic should be supported in the default interpreter, because the administrator has not made any changes to `.spec`, and Karmada has no reason to prevent the administrator from using the `kubectl rollout restart deployments ...` command to restart the Deployment. In addition, I'm not sure if there are other similar situations that we could solve together.
> A related question: should we support something like the server-side apply feature, so that other controllers or clients are allowed to modify resources?

Yes, I think we already support that: the retain interpreter.
> because the administrator has not made any changes to `.spec`

In fact, `kubectl rollout restart` does update the Deployment's pod template; refer to https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125. That's why the Karmada control plane overwrites the change.
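Concretely, the restart is roughly equivalent to patching the pod template like this (a sketch; the timestamp value is illustrative, not taken from the issue):

```yaml
# Roughly the patch that `kubectl rollout restart` applies to the Deployment;
# the timestamp below is an illustrative value.
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-07-03T10:00:00Z"
```

Because this touches `.spec.template`, the control plane sees the member cluster's copy as drifted from the desired state and syncs its own template back unless a retention rule keeps the annotation.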
I'm wondering if this scenario needs to be handled by the default resource interpreter.
Hi @Patrick0308, do you use this a lot in production?
> Do you use this a lot in production?
I think administrators sometimes want to restart the Deployment in only one cluster, for example when a Secret mounted by the Deployment in one member cluster is updated by a secret-management system.
The retain interpreter from https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125 is not the best way to solve this issue. Please see PR #5128, which lets you run the `kubectl rollout restart` command wherever you like.
Thanks for your user story, I think it represents a meaningful use case.
> The retain interpreter from https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125 is not the best way to solve this issue.

Is it inconvenient to use custom interpreters? I'd like to hear the user's point of view, because we designed them so that users can extend Karmada more easily.
> The retain interpreter from https://github.com/karmada-io/karmada/issues/5120#issuecomment-2204944125 is not the best way to solve this issue.

```lua
function Retain(desiredObj, observedObj)
  if desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] == nil then
    desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = observedObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"]
  end
  return desiredObj
end
```

I mean that with this script, running `kubectl rollout restart deploy xxx` on the Karmada control plane no longer works. That's why it's not the best way.
Oh, the logic of this script is not correct. It should keep the latest value from the member cluster, just like the way you described in #5128.
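For illustration, a retention script along those lines might look like the following (a minimal sketch, not a tested interpreter; it assumes both timestamps are RFC3339 strings with the same UTC offset, so comparing them lexically matches chronological order):

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-restart
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      # Literal block scalar so the Lua keeps its line breaks and comments.
      luaScript: |
        function Retain(desiredObj, observedObj)
          local key = "kubectl.kubernetes.io/restartedAt"
          local observed = observedObj.spec.template.metadata.annotations
          -- Nothing to retain if the member cluster has no restart annotation.
          if observed == nil or observed[key] == nil then
            return desiredObj
          end
          if desiredObj.spec.template.metadata.annotations == nil then
            desiredObj.spec.template.metadata.annotations = {}
          end
          local desired = desiredObj.spec.template.metadata.annotations
          -- Keep whichever restart timestamp is newer, so a restart issued on
          -- either the control plane or the member cluster takes effect.
          if desired[key] == nil or desired[key] < observed[key] then
            desired[key] = observed[key]
          end
          return desiredObj
        end
```

Comparing the two values would also sidestep the conflict case raised below, at the cost of assuming comparable timestamp formats.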
A question: what if both the Karmada side and the member cluster have the annotation `kubectl.kubernetes.io/restartedAt`? I mean, a user performs `kubectl rollout restart` on both Karmada and the member cluster. And what about `deployment.kubernetes.io/revision`, should it also be retained?
What happened:

`kubectl rollout restart deploy xxxx` does not work as expected; the new Pods are terminated soon after they start.

What you expected to happen:

The Deployment restarts successfully.

How to reproduce it (as minimally and precisely as possible):

Run `kubectl rollout restart deploy xxxx` on a member cluster.

Anything else we need to know?:

Environment:

- kubectl-karmada or karmadactl version (the result of `kubectl-karmada version` or `karmadactl version`):