kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

nginx-ingress controller with autoscaler enabled, immediately scale up to maximum replicas amount #10178

Open cvallesi-kainos opened 1 year ago

cvallesi-kainos commented 1 year ago

What happened:

Autoscaling seems to scale to maximum capacity as soon as the ingress controller is deployed.

What you expected to happen:

Not seeing the ingress scale immediately.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):


NGINX Ingress controller Release: v1.8.1 Build: dc88dce9ea5e700f3301d16f971fa17c6cfe757d Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.21.6


Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:21:19Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"} Kustomize Version: v5.0.1 Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"94c50547e633f1db5d4c56b2b305670e14987d59", GitTreeState:"clean", BuildDate:"2023-06-12T18:46:30Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

NAME                                STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-agentpool-15100971-vmss000000   Ready    agent   151m   v1.25.6   10.224.0.6    <none>        Ubuntu 22.04.2 LTS   5.15.0-1040-azure   containerd://1.7.1+azure-1

nginx-ingress   default         10               2023-07-05 11:06:13.432252818 +0100 BST deployed        ingress-nginx-4.7.0     1.8.0

NAMESPACE         NAME                                                          READY   STATUS    RESTARTS   AGE    IP            NODE                                NOMINATED NODE   READINESS GATES
calico-system     pod/calico-kube-controllers-684bbcff79-26pcn                  1/1     Running   0          135m   10.244.2.10   aks-agentpool-15100971-vmss000000   <none>           <none>
calico-system     pod/calico-node-lq2sj                                         1/1     Running   0          159m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
calico-system     pod/calico-typha-59f86d8879-wst8h                             1/1     Running   0          135m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
default           pod/nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv   1/1     Running   0          84m    10.244.2.89   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/cloud-node-manager-kgcjh                                  1/1     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/coredns-autoscaler-69b7556b86-sprkt                       1/1     Running   0          135m   10.244.2.11   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/coredns-fb6b9d95f-bc6vz                                   1/1     Running   0          135m   10.244.2.9    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/coredns-fb6b9d95f-qgmkv                                   1/1     Running   0          134m   10.244.2.12   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/csi-azuredisk-node-n57j7                                  3/3     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/csi-azurefile-node-d7nb8                                  3/3     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/konnectivity-agent-694c59778-fhd2g                        1/1     Running   0          153m   10.244.2.3    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/konnectivity-agent-694c59778-xfxh5                        1/1     Running   0          153m   10.244.2.2    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/kube-proxy-gnppn                                          1/1     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/metrics-server-67db6db9b5-dvvvq                           2/2     Running   0          131m   10.244.2.13   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/metrics-server-67db6db9b5-lc9nf                           2/2     Running   0          131m   10.244.2.14   aks-agentpool-15100971-vmss000000   <none>           <none>
tigera-operator   pod/tigera-operator-6db9d9c5d9-72mg5                          1/1     Running   0          135m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>

NAMESPACE       NAME                                                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
calico-system   service/calico-kube-controllers-metrics                    ClusterIP      10.0.182.56   <none>        9094/TCP                     158m   k8s-app=calico-kube-controllers
calico-system   service/calico-typha                                       ClusterIP      10.0.60.210   <none>        5473/TCP                     159m   k8s-app=calico-typha
default         service/kubernetes                                         ClusterIP      10.0.0.1      <none>        443/TCP                      161m   <none>
default         service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.71.84    20.26.39.76   80:31354/TCP,443:32267/TCP   129m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
default         service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.206.31   <none>        443/TCP                      129m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
kube-system     service/kube-dns                                           ClusterIP      10.0.0.10     <none>        53/UDP,53/TCP                160m   k8s-app=kube-dns
kube-system     service/metrics-server                                     ClusterIP      10.0.4.149    <none>        443/TCP                      160m   k8s-app=metrics-server

NAMESPACE       NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE    CONTAINERS   IMAGES   SELECTOR
calico-system   daemonset.apps/calico-node                  1         1         1       1            1           kubernetes.io/os=linux     159m   calico-node   mcr.microsoft.com/oss/calico/node:v3.24.0   k8s-app=calico-node
calico-system   daemonset.apps/calico-windows-upgrade       0         0         0       0            0           kubernetes.io/os=windows   159m   calico-windows-upgrade   mcr.microsoft.com/oss/calico/windows-upgrade:v3.24.0   k8s-app=calico-windows-upgrade
kube-system     daemonset.apps/cloud-node-manager           1         1         1       1            1           <none>                     160m   cloud-node-manager   mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.25.15   k8s-app=cloud-node-manager
kube-system     daemonset.apps/cloud-node-manager-windows   0         0         0       0            0           <none>                     160m   cloud-node-manager   mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.25.15   k8s-app=cloud-node-manager-windows
kube-system     daemonset.apps/csi-azuredisk-node           1         1         1       1            1           <none>                     160m   liveness-probe,node-driver-registrar,azuredisk   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.5   app=csi-azuredisk-node
kube-system     daemonset.apps/csi-azuredisk-node-win       0         0         0       0            0           <none>                     160m   liveness-probe,node-driver-registrar,azuredisk   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.5   app=csi-azuredisk-node-win
kube-system     daemonset.apps/csi-azurefile-node           1         1         1       1            1           <none>                     160m   liveness-probe,node-driver-registrar,azurefile   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.24.2   app=csi-azurefile-node
kube-system     daemonset.apps/csi-azurefile-node-win       0         0         0       0            0           <none>                     160m   liveness-probe,node-driver-registrar,azurefile   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.24.2   app=csi-azurefile-node-win
kube-system     daemonset.apps/kube-proxy                   1         1         1       1            1           <none>                     160m   kube-proxy   mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.25.6-hotfix.20230612   component=kube-proxy,tier=node

NAMESPACE         NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS                          IMAGES   SELECTOR
calico-system     deployment.apps/calico-kube-controllers                  1/1     1            1           159m   calico-kube-controllers             mcr.microsoft.com/oss/calico/kube-controllers:v3.24.0                                                                     k8s-app=calico-kube-controllers
calico-system     deployment.apps/calico-typha                             1/1     1            1           159m   calico-typha                        mcr.microsoft.com/oss/calico/typha:v3.24.0                                                                                k8s-app=calico-typha
default           deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           129m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
kube-system       deployment.apps/coredns                                  2/2     2            2           160m   coredns                             mcr.microsoft.com/oss/kubernetes/coredns:v1.9.4                                                                           k8s-app=kube-dns,version=v20
kube-system       deployment.apps/coredns-autoscaler                       1/1     1            1           160m   autoscaler                          mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5.3                                       k8s-app=coredns-autoscaler
kube-system       deployment.apps/konnectivity-agent                       2/2     2            2           160m   konnectivity-agent                  mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110                                    app=konnectivity-agent
kube-system       deployment.apps/metrics-server                           2/2     2            2           160m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server
tigera-operator   deployment.apps/tigera-operator                          1/1     1            1           160m   tigera-operator                     mcr.microsoft.com/oss/tigera/operator:v1.28.0                                                                             name=tigera-operator

NAMESPACE         NAME                                                                DESIRED   CURRENT   READY   AGE    CONTAINERS                          IMAGES   SELECTOR
calico-system     replicaset.apps/calico-kube-controllers-684bbcff79                  1         1         1       159m   calico-kube-controllers             mcr.microsoft.com/oss/calico/kube-controllers:v3.24.0                                                                     k8s-app=calico-kube-controllers,pod-template-hash=684bbcff79
calico-system     replicaset.apps/calico-typha-59f86d8879                             1         1         1       159m   calico-typha                        mcr.microsoft.com/oss/calico/typha:v3.24.0                                                                                k8s-app=calico-typha,pod-template-hash=59f86d8879
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-75f585d85c   0         0         0       92m    controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75f585d85c
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-7c9f44b5f8   1         1         1       129m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c9f44b5f8
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-84bf68bf66   0         0         0       122m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84bf68bf66
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-84c6679d7    0         0         0       100m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84c6679d7
kube-system       replicaset.apps/coredns-autoscaler-69b7556b86                       1         1         1       160m   autoscaler                          mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5.3                                       k8s-app=coredns-autoscaler,pod-template-hash=69b7556b86
kube-system       replicaset.apps/coredns-fb6b9d95f                                   2         2         2       160m   coredns                             mcr.microsoft.com/oss/kubernetes/coredns:v1.9.4                                                                           k8s-app=kube-dns,pod-template-hash=fb6b9d95f,version=v20
kube-system       replicaset.apps/konnectivity-agent-694c59778                        2         2         2       153m   konnectivity-agent                  mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110                                    app=konnectivity-agent,pod-template-hash=694c59778
kube-system       replicaset.apps/konnectivity-agent-79f9756b76                       0         0         0       160m   konnectivity-agent                  mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110                                    app=konnectivity-agent,pod-template-hash=79f9756b76
kube-system       replicaset.apps/metrics-server-5dd7f7965f                           0         0         0       158m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server,pod-template-hash=5dd7f7965f
kube-system       replicaset.apps/metrics-server-67db6db9b5                           2         2         2       131m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server,pod-template-hash=67db6db9b5
kube-system       replicaset.apps/metrics-server-845978bcd7                           0         0         0       146m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server,pod-template-hash=845978bcd7
tigera-operator   replicaset.apps/tigera-operator-6db9d9c5d9                          1         1         1       160m   tigera-operator                     mcr.microsoft.com/oss/tigera/operator:v1.28.0

Name:             nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv
Namespace:        default
Priority:         0
Service Account:  nginx-ingress-ingress-nginx
Node:             aks-agentpool-15100971-vmss000000/10.224.0.6
Start Time:       Wed, 05 Jul 2023 11:07:33 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=nginx-ingress
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.8.0
                  helm.sh/chart=ingress-nginx-4.7.0
                  pod-template-hash=7c9f44b5f8
Annotations:      cni.projectcalico.org/containerID: 966af3502d17abdccacba182aab4cbf1937a915fe777bb68ee6f3d7c32745d55
                  cni.projectcalico.org/podIP: 10.244.2.89/32
                  cni.projectcalico.org/podIPs: 10.244.2.89/32
Status:           Running
IP:               10.244.2.89
IPs:
  IP:           10.244.2.89
Controlled By:  ReplicaSet/nginx-ingress-ingress-nginx-controller-7c9f44b5f8
Containers:
  controller:
    Container ID:  containerd://2a6c9f37916044f9729cee4b075232d9c05963aaaab7d7f0a1ad4e9da56d64a8
    Image:         registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
      --election-id=nginx-ingress-ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Wed, 05 Jul 2023 11:07:34 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  192Mi
    Requests:
      cpu:      100m
      memory:   128Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv (v1:metadata.name)
      POD_NAMESPACE:  default (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fftq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-ingress-nginx-admission
    Optional:    false
  kube-api-access-4fftq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

Name:                     nginx-ingress-ingress-nginx-controller
Namespace:                default
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=nginx-ingress
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.8.0
                          helm.sh/chart=ingress-nginx-4.7.0
Annotations:              meta.helm.sh/release-name: nginx-ingress
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.71.84
IPs:                      10.0.71.84
LoadBalancer Ingress:     20.26.39.76
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31354/TCP
Endpoints:                10.244.2.89:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32267/TCP
Endpoints:                10.244.2.89:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

NAME                                                          READY   STATUS    RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
pod/nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv   1/1     Running   0          86m   10.244.2.89   aks-agentpool-15100971-vmss000000   <none>           <none>

NAME                                                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
service/kubernetes                                         ClusterIP      10.0.0.1      <none>        443/TCP                      164m   <none>
service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.71.84    20.26.39.76   80:31354/TCP,443:32267/TCP   132m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.206.31   <none>        443/TCP                      132m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES   SELECTOR
deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           132m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

NAME                                                                DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES   SELECTOR
replicaset.apps/nginx-ingress-ingress-nginx-controller-75f585d85c   0         0         0       95m    controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75f585d85c
replicaset.apps/nginx-ingress-ingress-nginx-controller-7c9f44b5f8   1         1         1       132m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c9f44b5f8
replicaset.apps/nginx-ingress-ingress-nginx-controller-84bf68bf66   0         0         0       125m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84bf68bf66
replicaset.apps/nginx-ingress-ingress-nginx-controller-84c6679d7    0         0         0       102m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84c6679d7

How to reproduce this issue:

This issue has been tested and is reproducible 100% of the time on Azure Kubernetes Service (AKS).

First, deploy an AKS cluster with Standard_B8ms or larger nodes; smaller node classes don't seem to have this problem.

Then simply enable autoscaling when installing the chart and you should see the reported behaviour.

helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.autoscaling.enabled=true
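For reference, the chart also lets you set the autoscaling targets explicitly instead of relying on the defaults; a values sketch (replica counts and target percentages here are illustrative, and defaults may vary by chart version):

```yaml
controller:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 3
    # HPA utilization is measured against the pod's *requests*, so a low
    # memory request combined with a low target makes scale-up very easy.
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 90
```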

Anything else we need to know:

I first encountered this anomaly on a cluster using B12ms VMs as nodes and started testing possible causes. I noticed that up to B4ms this does not happen. I can't figure out why the exact same configuration misbehaves on nodes with more memory available, but what seems to happen is that the pod is deployed and, as soon as I can get any metrics out, its RAM utilization is already above 80%, which triggers the deployment of new replicas.

The initial cluster where I became aware of the issue had only nginx, cert-manager, Grafana, Prometheus and Loki deployed. After some consideration and experimentation I created a new cluster from scratch and deployed only nginx-ingress in it via Helm; the behaviour was exactly the same.

I tried increasing and lowering maxReplicas, and it always deploys all available replicas. I also tried enabling the explicit scaleUp and scaleDown policies in the chart; still, it only increases the number of replicas and never scales them down.

During my tests I also tried the two previous versions of the Helm chart (corresponding to app versions 1.7.1 and 1.8.0), and the behaviour was the same.

If someone could check whether this happens on other cloud providers with similar node hardware, it would help determine whether I should instead go to Microsoft and ask for clarification.

longwuyuan commented 1 year ago

/remove-kind bug /triage needs-information

Show the output of: helm get values <helmreleasename>

cvallesi-kainos commented 1 year ago

Sure:

USER-SUPPLIED VALUES:
controller:
  autoscaling:
    enabled: true

github-actions[bot] commented 1 year ago

This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any questions or want to request prioritization, please reach out on #ingress-nginx-dev on Kubernetes Slack.

philipp-durrer-jarowa commented 1 year ago

We have the same problem with our nginx-ingress deployment on AKS (though we use Standard_B2ms machines). I wonder whether the autoscaling feature requires resource limits to be set so that it can evaluate what exactly 50% or 80% of CPU/memory usage is.

values.yaml:

controller:
  service:
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
  extraArgs:
    enable-ssl-passthrough: "" # Needed for Coturn SSL forwarding
  allowSnippetAnnotations: true # Needed for Jitsi Web /config.js block
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
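On the question of what those percentages are measured against: the HPA computes utilization relative to the pod's resource *requests*, not its limits. A small sketch of that arithmetic, taking the 128Mi memory request from the controller pod description above (the usage figure is illustrative, not measured):

```python
# HPA memory utilization is (current usage / resource request), as a percentage.
REQUEST_MI = 128  # memory request from the controller pod spec shown above

def utilization_pct(usage_mi: float, request_mi: float = REQUEST_MI) -> float:
    """Return memory utilization as the HPA sees it (percent of request)."""
    return 100.0 * usage_mi / request_mi

# An idle controller using ~110Mi already sits above an 80% target:
print(utilization_pct(110))  # 85.9375
```

So even with an 80% target, an idle controller whose baseline memory footprint approaches its request will look "over target" to the HPA regardless of the limits.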

tomaustin700 commented 9 months ago

Has this been fixed? I deployed nginx via helm a few days ago with this config:

  set {
    name  = "controller.autoscaling.minReplicas"
    value = "1"
  }

  set {
    name  = "controller.autoscaling.maxReplicas"
    value = "2"
  }

Only one pod was created; in another cluster where I've done this in the past, it created two.

longwuyuan commented 2 months ago

/remove-triage needs-information /kind bug /triage accepted

longwuyuan commented 2 months ago

/remove-lifecycle frozen

grzegorzgniadek commented 4 weeks ago

Hi, the controller in an idle state uses roughly 60-120Mi of memory. With the default resources of

 Limits:
   cpu:     100m
   memory:  192Mi
 Requests:
   cpu:     100m
   memory:  128Mi

and autoscaling enabled with the default targetMemoryUtilizationPercentage of 50%, the HPA will always scale the pods up to maxReplicas.
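That runaway behaviour follows directly from the standard HPA scaling rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch, assuming a 128Mi request, a 50% target (i.e. a 64Mi threshold) and an illustrative ~110Mi idle usage per pod:

```python
import math

def desired_replicas(current: int, usage_mi: float,
                     request_mi: float, target_pct: float) -> int:
    """Standard HPA rule: ceil(currentReplicas * currentMetric / targetMetric)."""
    target_mi = request_mi * target_pct / 100.0
    return math.ceil(current * usage_mi / target_mi)

# 128Mi request at a 50% target gives a 64Mi per-pod threshold. Because each
# new idle pod also uses ~110Mi, average utilization never drops below target:
replicas = 1
for _ in range(3):
    replicas = desired_replicas(replicas, 110, 128, 50)
    print(replicas)  # keeps climbing until capped by maxReplicas
```

Since every replica's idle footprint exceeds the threshold on its own, the computed desired count grows on each evaluation until the HPA hits maxReplicas, matching the reported behaviour.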