Closed: hydrapolic closed this issue 3 months ago
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and providing further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Can you show the output of helm -n ingress-nginx get values ingress-nginx?
@Gacko if I stay on v1.11 and go from a default install to enabling HPA, there are no problems, so it looks like an upgrade-only scenario.
enabled is not a string, it's a boolean. Please use a boolean. We also have unit tests for that, so I doubt it's broken.
/remove-kind bug
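The string-vs-boolean distinction raised above is the crux of the issue: a quoted "true" is a string, a bare true is a boolean, and chart logic keyed on the boolean will not fire for the string. A minimal stand-alone sketch of the distinction, using JSON parsing as a stand-in for Helm's value handling (not code from the chart itself):

```python
import json

# A quoted "true" parses as a string; a bare true parses as a boolean.
as_string = json.loads('{"enabled": "true"}')["enabled"]
as_bool = json.loads('{"enabled": true}')["enabled"]

print(type(as_string).__name__)  # str
print(type(as_bool).__name__)    # bool

# A template condition expecting the boolean would not match the string:
assert as_bool is True
assert as_string != True
```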
I can NOT reproduce the error after installing v1.11.1 without autoscaling enabled and then making a change via Helm to enable autoscaling.
So the config itself is not the problem.
/kind support
Like @Gacko mentioned, it could be the config value not being a boolean to begin with.
@hydrapolic please provide details about the issue and its current state so we can base further comments on them.
/triage needs-information
Thank you for the help, it really was the problem with string vs boolean in autoscaling.enabled. It worked before, but it seems the recent changes made it problematic.
This now works fine:
helm get values ingress-nginx
USER-SUPPLIED VALUES:
controller:
  autoscaling:
    enabled: true
    maxReplicas: 4
    minReplicas: 2
    targetCPUUtilizationPercentage: 90
    targetMemoryUtilizationPercentage: 90
In terraform:
{
  type  = "string"
  name  = "controller.autoscaling.enabled"
  value = "true"
}

->

{
  type  = "auto"
  name  = "controller.autoscaling.enabled"
  value = true
}
And no KEDA config is needed at all.
Thanks @longwuyuan and @Gacko
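For context, a hedged sketch of where such a block sits in a Terraform helm_release resource (resource name, repository URL, and chart version here are illustrative assumptions, not taken from the thread):

```hcl
# Illustrative only: a helm_release carrying the corrected set block.
resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "ingress-nginx"
  version    = "4.11.1"

  set {
    name  = "controller.autoscaling.enabled"
    type  = "auto" # "auto" lets the value be parsed as a boolean
    value = true
  }
}
```

With type = "string", Helm receives the literal string "true", which the chart's boolean checks do not treat as enabled; type = "auto" lets the value be parsed as a boolean.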
What happened:
After upgrading the controller from 1.10.1 -> 1.11.1 and the chart from 4.10.0 -> 4.11.1, HPA stopped working. This HPA config worked before:
Now this error was shown when trying to update the controller/chart:
After adding:
It applies the config; however, HPA is disabled.
What you expected to happen:
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version):
Client Version: v1.29.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.6-gke.1038001
Environment:
Cloud provider or hardware configuration: GCP
OS (e.g. from /etc/os-release): Container-Optimized OS with containerd (cos_containerd)
Kernel (e.g. uname -a):
Install tools:
Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed: Terraform with helm chart (helm_release resource)
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
Others:
kubectl describe ... of any custom configmap(s) created and in use
How to reproduce this issue:
Anything else we need to know:
Maybe related to https://github.com/kubernetes/ingress-nginx/pull/11110