Closed · MikaelFDA closed this 2 weeks ago
thanks for opening @MikaelFDA - would you mind sharing the rendered prefect-server deploy and HPA manifests?
hey @MikaelFDA - i just noticed the resource requests you've set for your prefect-server deployment. it looks like your memory requests are lower than you might want:
resources:
  requests:
    memory: 256Mi
  limits:
    memory: 512Mi
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

Utilization is the ratio between the current usage of resource to the requested resources of the pod.
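As a quick sanity check, that formula can be applied to the numbers discussed in this thread (a sketch; the ~328Mi usage figure is the one mentioned further down, against the 256Mi request):

```shell
# utilization = current usage / requested resources, as a percentage
usage_mi=328     # approximate current memory usage of the pods
request_mi=256   # memory request set in the deployment
utilization=$(( usage_mi * 100 / request_mi ))
echo "${utilization}%"   # integer percentage, just over 100%
```

This lands right around the 127% the HPA reports, which is why the autoscaler considers the pods over their memory target.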
another hint is that the ordering of the TARGETS column in the HPA describe output should show memory utilization / cpu utilization (in that order), bc that's how we define it in the HPA template:
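For reference, this is roughly what that metrics ordering looks like in an HPA spec - a paraphrased sketch, not copied verbatim from the chart, and the target percentages here are illustrative:

```yaml
# memory is listed before cpu, so `kubectl describe hpa` reports
# TARGETS in the same order: <memory util>/<cpu util>
metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```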
https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-server/templates/hpa.yaml#L22-L38
in other words: you're requesting 256Mi of memory, but your prefect-server pods are using ~328Mi - hence the utilization above 100%.

I'd suggest requesting more memory for your deployment - for example, our default values set this at 512Mi
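A minimal values.yaml override along those lines might look like this - note the `server.resources` key path is an assumption on my part; check the chart's own values.yaml for the exact location:

```yaml
# sketch: raise the memory request so HPA utilization is computed
# against a realistic baseline (key path assumed, verify against the chart)
server:
  resources:
    requests:
      memory: 512Mi
    limits:
      memory: 512Mi
```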
@parkedwards Indeed, that's the cause of my issue. I didn't think about that because on my other services created with Helm, the order is CPU/RAM :facepalm:.
I guess it's never too late to learn the intricacies of Kubernetes :laughing: Thanks for helping!
However, in the k8s docs (like here) the order is CPU/RAM. If that's the standard, I think it would be a good idea to update the chart.
@MikaelFDA good call out. I will switch the order of them now, to avoid confusion 😄 . thanks again for flagging this!
cluster: AKS
kubernetes: 1.27.9
prefect-version: 2.19.2
chart-version: prefect-server-2024.5.23194919
My values.yaml is configured as follows:
When I look at the HPA, this is what I see:
I tried to find out why the autoscaler says it's 127%, but I can't find the reason. On the same node pool I have a prefect-worker with an HPA which doesn't have this problem.
Do you have any idea why this is happening? Is it a bug with the chart or something with my cluster?