karmada-io / karmada

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
https://karmada.io
Apache License 2.0

How to use HPA in member cluster #3458

Open fusongke100 opened 1 year ago

fusongke100 commented 1 year ago

Karmada version: 1.5.0, Kubernetes version: 1.21.0

Hello, I want to use HPA in the member cluster, so I configured the controllers as below, but then the propagation policy is invalidated. image

How can I use HPA without affecting the propagation policy?

RainbowMango commented 1 year ago

> the propagation policy is invalidated.

Please try --controllers=*,hpa

By the way, we are working on a new proposal about Multi-cluster HPA at #3161. The current hpa controller might be refactored/deprecated in the future.
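
If it helps, here is a rough sketch of where that flag goes, assuming karmada-controller-manager is installed as a Deployment in the karmada-system namespace (other args omitted; adjust to your installation):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: karmada-controller-manager
  namespace: karmada-system
spec:
  template:
    spec:
      containers:
      - name: karmada-controller-manager
        command:
        - /bin/karmada-controller-manager
        # keep all default controllers and additionally enable the hpa controller
        - --controllers=*,hpa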

fusongke100 commented 1 year ago

Thank you, let me try.

fusongke100 commented 1 year ago

I changed the config to --controllers=*,hpa and the propagation policy now works well, but the HPA does not: when the HPA metric threshold is exceeded and two new pods start, they shut down immediately. The deployment's replicas is 1.

image

RainbowMango commented 1 year ago

That is because Karmada overrides the replicas after HPA scales up. You can customize this behavior with the following YAML:

apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end

See Customizing Resource Interpreter for more details.
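
A quick way to confirm the customization is registered, assuming kubectl is pointed at the Karmada control plane:

# the customization is cluster-scoped on the Karmada control plane
kubectl get resourceinterpretercustomization declarative-configuration-example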

fusongke100 commented 12 months ago

Could you please give an example showing how to use ResourceInterpreterCustomization to support HPA?

RainbowMango commented 12 months ago

I don't think you need to use ResourceInterpreterCustomization to customize the HPA itself. If the .spec.scaleTargetRef of the HPA is a Deployment, you have to customize the Deployment with the resource interpreter as mentioned above.

If you can describe your requirements in detail, we may be able to give a more detailed solution.

fusongke100 commented 11 months ago

Yes, the HPA YAML file looks like this; please give a more detailed solution. image

RainbowMango commented 11 months ago

Please share the PropagationPolicy as well.

fusongke100 commented 11 months ago

This is the PropagationPolicy: image

RainbowMango commented 11 months ago

@jwcesign Please help to take a look.

jwcesign commented 11 months ago

Hi @fusongke100, based on your HPA YAML and PropagationPolicy YAML, I think the solution could be:

  1. Create a file called resource-interpreter-customization.yaml:

    apiVersion: config.karmada.io/v1alpha1
    kind: ResourceInterpreterCustomization
    metadata:
      name: declarative-configuration-example
    spec:
      target:
        apiVersion: apps/v1
        kind: Deployment
      customizations:
        retention:
          luaScript: >
            function Retain(desiredObj, observedObj)
              desiredObj.spec.replicas = observedObj.spec.replicas
              return desiredObj
            end
  2. Apply it to the Karmada control plane.

  3. Generate requests against the deployment and verify whether it is scaled up (see the sketch after this list).
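
As a rough sketch of steps 2 and 3, assuming kubectl is pointed at the Karmada control plane and karmadactl is configured (how you generate load depends on your workload, so that part is omitted):

# step 2: apply the customization to the Karmada control plane
kubectl apply -f resource-interpreter-customization.yaml

# step 3: generate requests against the deployment, then watch the pods in the member clusters
karmadactl get pods --watch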

fusongke100 commented 11 months ago

Thanks a lot, I will try!

fusongke100 commented 11 months ago

I have tested it and it works well, but if the propagation policy looks like this, it does not work. image

jwcesign commented 11 months ago

Can you show me the following information?

  1. The ResourceBinding of this deployment.
  2. The result of karmadactl get deployment.
  3. The result of karmadactl get hpa.

By the way, please delete the HPA from spec.resourceSelectors; the HPA will be propagated by the controller.

fusongke100 commented 11 months ago

If I delete the HPA from spec.resourceSelectors, how do I set targetCPUUtilizationPercentage?

jwcesign commented 11 months ago

targetCPUUtilizationPercentage is set here: image
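
For reference, the field lives in the HPA template itself (autoscaling/v1; names and values here are illustrative):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  # the utilization target is part of the HPA spec that gets propagated
  targetCPUUtilizationPercentage: 30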

And the karmada-controller-manager component will sync it to the member clusters with the same template YAML.

So just set targetCPUUtilizationPercentage to the value you want.

Do I understand your question correctly?

If not, what behavior do you expect?

fusongke100 commented 11 months ago

> Can you show me the following information? 1. The ResourceBinding of this deployment. 2. The result of karmadactl get deployment. 3. The result of karmadactl get hpa.
>
> By the way, please delete the HPA from spec.resourceSelectors; the HPA will be propagated by the controller.

I mean, if I delete the HPA from spec.resourceSelectors, the HPA resource cannot be propagated to the member cluster.

jwcesign commented 11 months ago

Hi @fusongke100, if you enable the hpa controller by setting the parameter --controllers=*,hpa on the karmada-controller-manager component, the HPA resource will be propagated to the member clusters.
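
A quick check after enabling the flag, assuming karmadactl is configured against the Karmada control plane:

# the HPA should appear in the member clusters once the hpa controller propagates it
karmadactl get hpa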

fusongke100 commented 11 months ago

Another question: why, when the propagation policy sets replicaScheduling, does the HPA not work well?

jwcesign commented 11 months ago

For the HPA, if you propagate it with a policy that sets replicaScheduling, its replicas cannot be parsed, so the following steps (propagating it to the member clusters) cannot be processed. But if you do not set replicaScheduling, the replica-parsing logic is not executed (the default is duplication) and the original YAML is synced directly to the member clusters.

By the way, with the replicaScheduling configuration, did you set the parameter --controllers=*,hpa? If so, it should work.
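
For reference, a rough sketch of the two replicaScheduling shapes discussed here (policy fragments only; values illustrative):

# default when replicaScheduling is not set: duplication, the original replicas
# are synced as-is to every selected cluster, with no replica parsing involved
replicaScheduling:
  replicaSchedulingType: Duplicated

# divided: Karmada parses the replicas field and splits it across clusters by weight
replicaScheduling:
  replicaSchedulingType: Divided
  replicaDivisionPreference: Weighted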

fusongke100 commented 11 months ago

Yes, I have set --controllers=*,hpa, and if the propagation policy sets replicaScheduling, the ResourceInterpreterCustomization does not work well. I don't know the reason or how to solve this problem.

jwcesign commented 11 months ago

Hi @fusongke100, I tested this in my environment and it works fine:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            cpu: 0.5
          requests:
            cpu: 0.5
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
    - apiVersion: autoscaling/v1
      kind: HorizontalPodAutoscaler
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    clusterTolerations:
      - effect: NoExecute
        key: fail-test
        operator: Exists
        tolerationSeconds: 10
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 30
---
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end

When watching the pods in the member clusters, they are scaled up:

root@karmada [10:33:52 AM] [~]
-> # k get pods --watch
NAME                     READY   STATUS    RESTARTS   AGE
nginx-79c955657f-2dbs9   1/1     Running   0          85s
nginx-79c955657f-6q2sz   1/1     Running   0          11s
nginx-79c955657f-pmp74   1/1     Running   0          10s

Can you show me your ResourceBinding (rb)?

k get rb -A

fusongke100 commented 11 months ago

When the replicaScheduling weights are equal it works, but when they are not equal it seems not to work. Could you please test this scenario?

jwcesign commented 11 months ago

Hi @fusongke100, it still works fine with the following YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            cpu: 0.5
          requests:
            cpu: 0.5
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
    - apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    clusterTolerations:
      - effect: NoExecute
        key: fail-test
        operator: Exists
        tolerationSeconds: 10
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 2
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 30
---
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: declarative-configuration-example
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          desiredObj.spec.replicas = observedObj.spec.replicas
          return desiredObj
        end

When the requests come, the output is:

root@karmada [03:57:54 PM] [~/workspace/git]
-> # karmadactl get pods --watch
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-79c955657f-9k9hg   member2   1/1     Running   0          2m29s
nginx-79c955657f-d2p79   member2   1/1     Running   0          2m29s
nginx-79c955657f-ddb8z   member1   1/1     Running   0          2m29s

nginx-79c955657f-hgfmp   member1   0/1   Pending   0     0s
nginx-79c955657f-hgfmp   member1   0/1   Pending   0     0s
nginx-79c955657f-2cbc5   member1   0/1   Pending   0     0s
nginx-79c955657f-2cbc5   member1   0/1   Pending   0     0s
nginx-79c955657f-hgfmp   member1   0/1   ContainerCreating   0     0s
nginx-79c955657f-2cbc5   member1   0/1   ContainerCreating   0     0s
nginx-79c955657f-hgfmp   member1   1/1   Running             0     3s
nginx-79c955657f-2cbc5   member1   1/1   Running             0     5s

So the replicas can be scaled up and will be retained.

I think for your scenario, just deleting the HPA entry from spec.resourceSelectors is enough.
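
For example, the policy above would then keep only the Deployment selector (placement unchanged):

spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx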