voyagermesh / voyager

🚀 Secure L7/L4 (HAProxy) Ingress Controller for Kubernetes
https://voyagermesh.com
Apache License 2.0

Understanding/Documenting CPU Usage, behaviour and limits. #1267

Closed davinchia closed 6 years ago

davinchia commented 6 years ago

Hi,

First, thanks for all the work! I'm currently running 5.0 at over 100k RPS and things have been relatively stable for a while now (besides deploys, which is a known problem).

Right now we are looking to upgrade to 7.4 and are running tests. I'm hoping to understand the relationship between Voyager performance and pod CPU usage. I've noticed our pods go up to 800% CPU while serving traffic, seemingly without any problems. In a traditional deployment, one would scale up as soon as average CPU reaches 70-80%. What does high CPU represent for a Voyager pod? Should I be scaling up? If it does represent a resource issue, is it possible to specify higher resource limits, since having to manage a ton of pods is unwieldy at best? I've been looking around and there doesn't seem to be documentation on this.

Thanks.

tamalsaha commented 6 years ago

I am guessing that you are talking about resource usage by Ingress / HAProxy pods here. If not, please clarify.

You can use HPA to scale up the HAProxy pods. https://github.com/appscode/voyager/blob/master/docs/guides/ingress/scaling.md

HPA works with pod CPU usage metrics, so you can use that to scale the deployment up and down. You can also use custom metrics with HPA.
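To make this concrete, here is a minimal sketch of a CPU-based HPA. The Deployment name `voyager-my-ingress` and the metadata values are placeholders (Voyager names the HAProxy Deployment after the Ingress); thresholds and replica counts should be adjusted to your setup:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: haproxy-hpa          # placeholder name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: voyager-my-ingress # placeholder: the HAProxy Deployment created by Voyager
  minReplicas: 2
  maxReplicas: 10
  # scale up once average CPU across pods exceeds 70% of the requested CPU
  targetCPUUtilizationPercentage: 70
```

Note that CPU utilization here is measured relative to the pods' CPU *requests*, so resource requests must be set on the HAProxy pods for HPA to compute utilization at all.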

You can specify resource requests/limits using ingress.spec.resources. https://github.com/appscode/voyager/blob/master/apis/voyager/v1beta1/ingress.go#L85 . These will be passed to the HAProxy pods.
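For example, a hedged sketch of a Voyager Ingress with resources set. The apiVersion/kind are Voyager's v1beta1 CRD; the metadata, host, and backend values are placeholders:

```yaml
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: my-ingress            # placeholder
  namespace: default
spec:
  resources:                  # standard Kubernetes ResourceRequirements,
    requests:                 # applied to the HAProxy pods
      cpu: 500m
      memory: 256Mi
    limits:
      cpu: "1"
      memory: 512Mi
  rules:
  - host: example.com         # placeholder rule
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: "80"
```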

Please let me know if you have more questions.

davinchia commented 6 years ago

Wow, thanks for the quick reply.

Yes, I am asking about Voyager pod resource usage. Gotcha. So this means the Voyager pods should be autoscaled according to usual practices? i.e., 80% CPU usage should trigger a scale-up. Are there best practices? Gotchas we should be aware of?

Perfect. Thanks for the Ingress link. I'll give that a shot. It seems the default value is 0.1 CPU per pod. Is that right?

tamalsaha commented 6 years ago

> So this means the Voyager pods should be autoscaled according to usual practices?

Yes. No gotchas other than the usual ones when you deploy new pods.

> It seems the default value is 0.1 CPU per pod. Is that right?

We don't set any value, so I think this comes from the default LimitRange for your cluster.

davinchia commented 6 years ago

Awesome. Thanks! Closing issue.

davinchia commented 6 years ago

A few quick questions:

1) My pods don't seem to be able to utilise more than 1 vCPU. Is Voyager single-threaded? Is 1 vCPU the limit, or can Voyager use more cores in theory?

2) The memory requirements seem outrageously low. I'm seeing approx. 10 MB usage. Is this expected?

3) The pods seem to be CPU bound. Is this expected?

tamalsaha commented 6 years ago

If you want to use multiple CPUs per HAProxy pod, some customization of the HAProxy template will be needed.

The alternative is to just run separate pods with one vCPU each.
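For reference, a sketch of the kind of template customization meant here: upstream HAProxy 1.8+ supports an `nbthread` directive in the `global` section to run multiple worker threads in one process. Whether the HAProxy version bundled with your Voyager release supports it, and how the template exposes the `global` section, are assumptions to verify against your deployment:

```
global
    # Run 4 worker threads in a single HAProxy process (HAProxy >= 1.8)
    nbthread 4
```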

> The memory requirements seem outrageously low. I'm seeing approx. 10 MB usage. Is this expected?

Yes.

> The pods seem to be CPU bound. Is this expected?

This is difficult to answer in general. Under heavy load they may become CPU bound, since each pod only uses one vCPU.

davinchia commented 6 years ago

Excellent. Thanks again!

binnythomas-1989 commented 4 years ago

@tamalsaha or @davinchia Could you confirm whether the setting below would work for adding resource limits to Voyager, i.e. limits that show up on the containers of the Voyager Deployment?

ingress.spec.resources: |
  { "resources": { "limits": { "cpu": "1", "memory": "512Mi" } } }

The reason is that I need to add resource requests/limits for HPA to work properly; otherwise it reports a metrics error:

Warning FailedComputeMetricsReplicas 6s (x7 over 96s) horizontal-pod-autoscaler failed to get memory utilization: missing request for memory

Kindly help, since the above was not successful. There are 2 containers: one is haproxy and the other is the exporter.

binnythomas-1989 commented 4 years ago

An update: I now understand what you meant in your comment.

spec:
  resources:
    limits:
      cpu: 1
      memory: 512Mi
    requests:
      cpu: 1
      memory: 512Mi

Adding the resource limits in the spec worked, but that doesn't solve the whole problem: they also need to be set on the exporter container. Is there a way we could do that? If not added, we still get the error.

Please find the template for the HPA below:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: abc-voyager
  namespace: abc
spec:
  maxReplicas: 6
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: voyager-abc-appscode
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70