Open fusongke100 opened 1 year ago
I want to use HPA in the member clusters, but after I configured the controllers, the propagation policy became invalid.
Please try --controllers=*,hpa
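For reference, a minimal sketch of where that flag is usually set (assuming a default installation where karmada-controller-manager runs in the karmada-system namespace of the host cluster; adjust to your environment):
# Edit the karmada-controller-manager deployment and extend its --controllers flag,
# e.g. change it to: --controllers=*,hpa
kubectl -n karmada-system edit deployment karmada-controller-manager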
By the way, we are working on a new proposal about Multi-cluster HPA at #3161. The current hpa
controller might be refactored/deprecated in the future.
Thank you, let me try.
After changing the config to --controllers=*,hpa, the propagation policy works well, but the HPA does not.
When the HPA metric threshold is reached and two new pods are started, they shut down immediately. The deployment's replicas stays at 1.
That is because Karmada will override the replicas after the HPA scales up. You can try to customize the behavior with the following yaml:
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end
See Customizing Resource Interpreter for more details.
Could you please give an example to show how to use ResourceInterpreterCustomization to support HPA?
I don't think you need to use ResourceInterpreterCustomization to customize the HPA. If the .spec.scaleTargetRef of the HPA is a Deployment, you have to customize Deployment with the resource interpreter as mentioned above.
If you can describe your requirements in detail, we may be able to give a more detailed solution.
Yes, the HPA yaml file is like this; please give a more detailed solution.
Please share the PropagationPolicy as well.
This is the PropagationPolicy:
@jwcesign Please help to take a look.
Hi @fusongke100, based on the HPA yaml and the PropagationPolicy yaml, I think the solution could be:
1. Create a file called resource-interpreter-customization.yaml:
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end
2. Apply it to the Karmada control plane.
3. Send requests to the deployment and verify whether it is scaled up (see the sketch below).
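A rough sketch of steps 2 and 3, assuming the workload is exposed as a Service named nginx and using a throwaway load-generator pod (both are only illustrations; adjust names and kubeconfig contexts to your environment):
# Step 2: apply the customization against the Karmada control plane (karmada-apiserver)
kubectl apply -f resource-interpreter-customization.yaml
# Step 3: in a member cluster, generate load so the HPA scales the deployment up,
# then watch whether the new pods keep running instead of being scaled back down
kubectl run load-generator --image=busybox:1.36 --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://nginx; done"
kubectl get pods --watch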
Thanks a lot, I will try!
I have tested it and it works well, but if the propagation policy is like this, it does not work.
Can you show me the following information:
1. The ResourceBinding of this deployment.
2. The result of karmadactl get deployment.
3. The result of karmadactl get hpa.
By the way, please delete the HPA entry from spec.resourceSelectors; the HPA will still be propagated by the controller.
If I delete the HPA from spec.resourceSelectors, how do I set targetCPUUtilizationPercentage?
targetCPUUtilizationPercentage is set in the HPA resource template on the Karmada control plane.
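For illustration (using the nginx-hpa example that appears later in this thread; replace the name with your own HPA):
# The value lives in the HPA spec on the Karmada control plane, e.g.:
#   spec:
#     targetCPUUtilizationPercentage: 30
kubectl get hpa nginx-hpa -o yaml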
The karmada-controller-manager component will sync it to the member clusters with the same template yaml, so just set targetCPUUtilizationPercentage to the value you want.
Do I understand your question correctly?
If not, what behavior do you expect?
I mean, if I delete the HPA from spec.resourceSelectors, the HPA resource cannot be propagated to the member clusters.
Hi @fusongke100,
If you enable the hpa controller by setting the parameter --controllers=*,hpa in the karmada-controller-manager component, the HPA resource will be propagated to the member clusters.
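For example (a small sketch; names depend on your resources), you can confirm the propagation from the Karmada control plane with:
karmadactl get deployment
karmadactl get hpa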
Another question: why doesn't the HPA work well when the propagation policy sets replicaScheduling?
For the HPA, if you propagate it with a policy that sets replicaScheduling, its replicas cannot be parsed, so the next steps (propagating to member clusters) cannot be processed.
But if you don't set replicaScheduling, the replica-parsing logic is not executed (the default is duplication), and the original yaml is synced directly to the member clusters.
By the way, with the replicaScheduling configuration, did you set the parameter --controllers=*,hpa? If it is set, it should work.
Yes, I have set --controllers=*,hpa, and if the propagation policy sets replicaScheduling, the ResourceInterpreterCustomization does not work well. I don't know the reason or how to solve this problem.
Hi @fusongke100, I tested this in my environment and it works fine:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            cpu: 0.5
          requests:
            cpu: 0.5
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  - apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
  placement:
    clusterAffinity:
      clusterNames:
      - member1
      - member2
    clusterTolerations:
    - effect: NoExecute
      key: fail-test
      operator: Exists
      tolerationSeconds: 10
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - member1
          weight: 1
        - targetCluster:
            clusterNames:
            - member2
          weight: 1
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 30
---
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end
When watching the pods in the member clusters, you can see them being scaled up:
root@karmada [10:33:52 AM] [~]
-> # k get pods --watch
NAME READY STATUS RESTARTS AGE
nginx-79c955657f-2dbs9 1/1 Running 0 85s
nginx-79c955657f-6q2sz 1/1 Running 0 11s
nginx-79c955657f-pmp74 1/1 Running 0 10s
Can you show me your rb?
k get rb -A
When the replicaScheduling weights are equal, it works; when they are not equal, it seems not to work. Could you please test this scenario?
Hi @fusongke100, it still works fine with the following yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            cpu: 0.5
          requests:
            cpu: 0.5
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  - apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
  placement:
    clusterAffinity:
      clusterNames:
      - member1
      - member2
    clusterTolerations:
    - effect: NoExecute
      key: fail-test
      operator: Exists
      tolerationSeconds: 10
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - member1
          weight: 1
        - targetCluster:
            clusterNames:
            - member2
          weight: 2
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 30
---
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end
When the requests come, the output is:
root@karmada [03:57:54 PM] [~/workspace/git]
-> # karmadactl get pods --watch
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-79c955657f-9k9hg member2 1/1 Running 0 2m29s
nginx-79c955657f-d2p79 member2 1/1 Running 0 2m29s
nginx-79c955657f-ddb8z member1 1/1 Running 0 2m29s
nginx-79c955657f-hgfmp member1 0/1 Pending 0 0s
nginx-79c955657f-hgfmp member1 0/1 Pending 0 0s
nginx-79c955657f-2cbc5 member1 0/1 Pending 0 0s
nginx-79c955657f-2cbc5 member1 0/1 Pending 0 0s
nginx-79c955657f-hgfmp member1 0/1 ContainerCreating 0 0s
nginx-79c955657f-2cbc5 member1 0/1 ContainerCreating 0 0s
nginx-79c955657f-hgfmp member1 1/1 Running 0 3s
nginx-79c955657f-2cbc5 member1 1/1 Running 0 5s
So the replicas can be scaled up and they will be retained.
I think for your scenario, just deleting the HPA entry from spec.resourceSelectors is enough.
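For example, a sketch of what the trimmed policy from this thread could look like (same placement and weights as above, clusterTolerations omitted for brevity; the HorizontalPodAutoscaler entry is removed from spec.resourceSelectors so the hpa controller propagates it instead):
# Apply against the Karmada control plane
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    clusterAffinity:
      clusterNames:
      - member1
      - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - member1
          weight: 1
        - targetCluster:
            clusterNames:
            - member2
          weight: 2
EOF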
Currently, I'm aware of the FederatedHPA, but I still want to use separate cluster HPAs to control replicas in each member cluster. Will it work if I create a deployment with the label "resourcetemplate.karmada.io/retain-replicas"? I ask because I think ResourceInterpreterCustomization is a little complex.
Hi @vie-serendipity
but if I still want to use separate cluster HPAs to control replicas in each member cluster
Separate cluster HPAs also work, but you should make sure that hpaScaleTargetMarker and deploymentReplicasSyncer are enabled in karmada-controller-manager, like this:
command:
- /bin/karmada-controller-manager
...
- --controllers=*,hpaScaleTargetMarker,deploymentReplicasSyncer
...
- --v=4
By creating a deployment with a label "resourcetemplate.karmada.io/retain-replicas"
No need, the hpaScaleTargetMarker controller automatically adds this label to deployments that are controlled by an HPA.
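For example, a quick check (the deployment name nginx is just the example used earlier in this thread):
# On the Karmada control plane, the retain-replicas label added by hpaScaleTargetMarker
# should appear on the deployment once it is referenced by an HPA
kubectl get deployment nginx --show-labels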
@chaosi-zju thanks, that's pretty cool.
karmada version: 1.5.0, Kubernetes version: 1.21.0
Hello, I want to use HPA in the member clusters, but when I configure the controllers as below, the propagation policy becomes invalid.
How can I use HPA without affecting the propagation policy?