kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

Website content downloads extremely slow. #6966

Closed. MitchDart closed this issue 3 years ago.

MitchDart commented 3 years ago

Hi All,

I attempted to ask on Slack but this issue is now critical for us and this is my last resort. Content for websites downloads extremely slowly from South Africa. We are using Nginx Ingress deployed using Helm on GKE. I will link the values file at the end of this post.

**Example:** Here is a Grafana dashboard I deployed. I have two ingresses pointing to the same dashboard: one uses the NGINX ingress controller, the other a GCE load balancer. The GCE one loads in a few seconds for me; the NGINX one sometimes takes up to two minutes to load all the resources.
https://dashboard.sticitt.co.za (Nginx Ingress)
https://dashboard-gcp.sticitt.co.za (GCE Ingress)
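To quantify the difference, here is a quick timing comparison one can run against both endpoints (a minimal sketch using curl; the `-w` format variables are standard curl transfer metrics):

```
# Time a full page fetch through each ingress; -o /dev/null discards the body.
for host in dashboard.sticitt.co.za dashboard-gcp.sticitt.co.za; do
  curl -s -o /dev/null \
       -w "$host ttfb=%{time_starttransfer}s total=%{time_total}s speed=%{speed_download}B/s\n" \
       "https://$host/"
done
```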

NGINX controller deployed using Helm:
Chart version: v3.24.0
NGINX Ingress version: v0.44.0

I have tried many things to solve this, but something seems to be limiting the bandwidth. If I disable HTTP/2 in my browser it loads quite a bit faster, but still nowhere near what it should be. Since HTTP/2 multiplexes all requests over a single connection, it looks as though each connection is capped on bandwidth. Any help would be extremely appreciated. Here are my Helm values:

helm-values.yaml

```
## helm upgrade nginx-controller ingress-nginx/ingress-nginx --values nginx.yaml --version 3.7.1 --namespace services
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/configuration.md
##
controller:
  image:
    repository: k8s.gcr.io/ingress-nginx/controller
    tag: "v0.44.0"
    digest: sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true

  # Configures the ports the nginx-controller listens on
  containerPort:
    http: 80
    https: 443

  # Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config: {}

  ## Annotations to be added to the controller config configuration configmap
  ##
  configAnnotations: {}

  # Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers
  proxySetHeaders: {}

  # Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
  addHeaders: {}

  # Optionally customize the pod dnsConfig.
  dnsConfig: {}

  # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
  # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
  # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
  dnsPolicy: ClusterFirst

  # Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  # Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
  reportNodeInternalIp: false

  # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
  # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
  # is merged
  hostNetwork: false

  ## Use host ports 80 and 443
  ## Disabled by default
  ##
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443

  ## Election ID to use for status update
  ##
  electionID: ingress-controller-leader

  ## Name of the ingress class to route through this controller
  ##
  ingressClass: nginx

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Security Context policies for controller pods
  ##
  podSecurityContext: {}

  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ###
  sysctls: {}
  # sysctls:
  #   "net.core.somaxconn": "8192"

  ## Allows customization of the source of the IP address or FQDN to report
  ## in the ingress status field. By default, it reads the information provided
  ## by the service. If disable, the status field reports the IP address of the
  ## node or nodes where an ingress controller pod is running.
  publishService:
    enabled: true
    ## Allows overriding of the publish service to bind to
    ## Must be <namespace>/<service_name>
    ##
    pathOverride: ""

  ## Limit the scope of the controller
  ##
  scope:
    enabled: false
    namespace: ""   # defaults to .Release.Namespace

  ## Allows customization of the configmap / nginx-configmap namespace
  ##
  configMapNamespace: ""   # defaults to .Release.Namespace

  ## Allows customization of the tcp-services-configmap
  ##
  tcp:
    configMapNamespace: ""   # defaults to .Release.Namespace
    ## Annotations to be added to the tcp config configmap
    annotations: {}

  ## Allows customization of the udp-services-configmap
  ##
  udp:
    configMapNamespace: ""   # defaults to .Release.Namespace
    ## Annotations to be added to the udp config configmap
    annotations: {}

  # Maxmind license key to download GeoLite2 Databases
  # https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases
  maxmindLicenseKey: ""

  ## Additional command line arguments to pass to nginx-ingress-controller
  ## E.g. to specify the default SSL certificate you can use
  ## extraArgs:
  ##   default-ssl-certificate: "<namespace>/<secret_name>"
  extraArgs:
    enable-ssl-passthrough: "true"

  ## Additional environment variables to set
  extraEnvs: []
  # extraEnvs:
  #   - name: FOO
  #     valueFrom:
  #       secretKeyRef:
  #         key: FOO
  #         name: secret-resource

  ## DaemonSet or Deployment
  ##
  kind: Deployment

  ## Annotations to be added to the controller Deployment or DaemonSet
  ##
  annotations: {}
  #  keel.sh/pollSchedule: "@every 60m"

  ## Labels to be added to the controller Deployment or DaemonSet
  ##
  labels: {}
  #  keel.sh/policy: patch
  #  keel.sh/trigger: poll

  # The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # minReadySeconds to avoid killing pods before we are ready
  ##
  minReadySeconds: 0

  ## Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  ## Affinity and anti-affinity
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity:
    # # An example of preferred pod anti-affinity, weight is in the range 1-100
    # podAntiAffinity:
    #   preferredDuringSchedulingIgnoredDuringExecution:
    #   - weight: 100
    #     podAffinityTerm:
    #       labelSelector:
    #         matchExpressions:
    #         - key: app.kubernetes.io/name
    #           operator: In
    #           values:
    #           - ingress-nginx
    #         - key: app.kubernetes.io/instance
    #           operator: In
    #           values:
    #           - ingress-nginx
    #         - key: app.kubernetes.io/component
    #           operator: In
    #           values:
    #           - controller
    #       topologyKey: kubernetes.io/hostname

    # # An example of required pod anti-affinity
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - ingress-nginx
              - key: app.kubernetes.io/instance
                operator: In
                values:
                  - ingress-nginx
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - controller
          topologyKey: "kubernetes.io/hostname"

  ## Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  ##
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: failure-domain.beta.kubernetes.io/zone
  #   whenUnsatisfiable: DoNotSchedule
  #   labelSelector:
  #     matchLabels:
  #       app.kubernetes.io/instance: ingress-nginx-internal

  ## terminationGracePeriodSeconds
  ## wait up to five minutes for the drain of connections
  ##
  terminationGracePeriodSeconds: 300

  ## Node labels for controller pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    kubernetes.io/os: linux

  ## Liveness and readiness probe values
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    failureThreshold: 5
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254

  # Path of the health check endpoint. All requests received on the port defined by
  # the healthz-port parameter are forwarded internally to this path.
  healthCheckPath: "/healthz"

  ## Annotations to be added to controller pods
  ##
  podAnnotations: {}

  replicaCount: 3

  minAvailable: 1

  # Define requests resources to avoid probe issues due to CPU utilization in busy nodes
  # ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
  # Ideally, there should be no limits.
  # https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
  resources:
  #  limits:
  #    cpu: 100m
  #    memory: 90Mi
    requests:
      cpu: 100m
      memory: 90Mi

  # Mutually exclusive with keda autoscaling
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50

  autoscalingTemplate: []
  # Custom or additional autoscaling metrics
  # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
  # - type: Pods
  #   pods:
  #     metric:
  #       name: nginx_ingress_controller_nginx_process_requests_total
  #     target:
  #       type: AverageValue
  #       averageValue: 10000m

  # Mutually exclusive with hpa autoscaling
  keda:
    apiVersion: "keda.sh/v1alpha1"
    # apiVersion changes with keda 1.x vs 2.x
    # 2.x = keda.sh/v1alpha1
    # 1.x = keda.k8s.io/v1alpha1
    enabled: false
    minReplicas: 1
    maxReplicas: 11
    pollingInterval: 30
    cooldownPeriod: 300
    restoreToOriginalReplicaCount: false
    scaledObject:
      annotations: {}
      # Custom annotations for ScaledObject resource
      # annotations:
      #   key: value
    triggers: []
    # - type: prometheus
    #   metadata:
    #     serverAddress: http://<prometheus-host>:9090
    #     metricName: http_requests_total
    #     threshold: '100'
    #     query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))

    behavior: {}
    # scaleDown:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Pods
    #     value: 1
    #     periodSeconds: 180
    # scaleUp:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Pods
    #     value: 2
    #     periodSeconds: 60

  ## Enable mimalloc as a drop-in replacement for malloc.
  ## ref: https://github.com/microsoft/mimalloc
  ##
  enableMimalloc: true

  ## Override NGINX template
  customTemplate:
    configMapName: ""
    configMapKey: ""

  service:
    enabled: true

    annotations: {}
    labels: {}
    # clusterIP: ""

    ## List of IP addresses at which the controller services are available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    # loadBalancerIP: ""
    loadBalancerSourceRanges: []

    enableHttp: true
    enableHttps: true

    ## Set external traffic policy to: "Local" to preserve source IP on
    ## providers supporting it
    ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
    # externalTrafficPolicy: ""

    # Must be either "None" or "ClientIP" if set. Kubernetes will default to "None".
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    # sessionAffinity: ""

    # specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn't specified,
    # the service controller allocates a port from your cluster's NodePort range.
    # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    # healthCheckNodePort: 0

    ports:
      http: 80
      https: 443

    targetPorts:
      http: http
      https: https

    type: LoadBalancer

    # type: NodePort
    # nodePorts:
    #   http: 32080
    #   https: 32443
    #   tcp:
    #     8080: 32808
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}

    ## Enables an additional internal load balancer (besides the external one).
    ## Annotations are mandatory for the load balancer to come up. Varies with the cloud service.
    internal:
      enabled: false
      annotations: {}

      # loadBalancerIP: ""

      ## Restrict access For LoadBalancer service. Defaults to 0.0.0.0/0.
      loadBalancerSourceRanges: []

      ## Set external traffic policy to: "Local" to preserve source IP on
      ## providers supporting it
      ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
      # externalTrafficPolicy: ""

  extraContainers: []
  ## Additional containers to be added to the controller pod.
  ## See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
  #  - name: my-sidecar
  #    image: nginx:latest
  #  - name: lemonldap-ng-controller
  #    image: lemonldapng/lemonldap-ng-controller:0.2.0
  #    args:
  #      - /lemonldap-ng-controller
  #      - --alsologtostderr
  #      - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
  #    env:
  #      - name: POD_NAME
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.name
  #      - name: POD_NAMESPACE
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.namespace
  #    volumeMounts:
  #    - name: copy-portal-skins
  #      mountPath: /srv/var/lib/lemonldap-ng/portal/skins

  extraVolumeMounts: []
  ## Additional volumeMounts to the controller main container.
  #  - name: copy-portal-skins
  #    mountPath: /var/lib/lemonldap-ng/portal/skins

  extraVolumes: []
  ## Additional volumes to the controller pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}

  extraInitContainers: []
  ## Containers, which are run before the app containers are started.
  # - name: init-myservice
  #   image: busybox
  #   command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

  admissionWebhooks:
    annotations: {}
    enabled: true
    failurePolicy: Fail
    # timeoutSeconds: 10
    port: 8443
    certificate: "/usr/local/certificates/cert"
    key: "/usr/local/certificates/key"
    namespaceSelector: {}
    objectSelector: {}

    service:
      annotations: {}
      # clusterIP: ""
      externalIPs: []
      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP

    patch:
      enabled: true
      image:
        repository: docker.io/jettech/kube-webhook-certgen
        tag: v1.5.1
        pullPolicy: IfNotPresent
      ## Provide a priority class name to the webhook patching job
      ##
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector: {}
      tolerations: []
      runAsUser: 2000

  metrics:
    port: 10254
    # if this port is changed, change healthz-port: in extraArgs: accordingly
    enabled: true

    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"

      # clusterIP: ""

      ## List of IP addresses at which the stats-exporter service is available
      ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
      ##
      externalIPs: []

      # loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 9913
      type: ClusterIP
      # externalTrafficPolicy: ""
      # nodePort: ""

    serviceMonitor:
      enabled: false
      additionalLabels: {}
      namespace: ""
      namespaceSelector: {}
      # Default: scrape .Release.Namespace only
      # To scrape all, use the following:
      # namespaceSelector:
      #   any: true
      scrapeInterval: 30s
      # honorLabels: true
      targetLabels: []
      metricRelabelings: []

    prometheusRule:
      enabled: false
      additionalLabels: {}
      # namespace: ""
      rules: []
      # # These are just examples rules, please adapt them to your needs
      # - alert: NGINXConfigFailed
      #   expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
      #   for: 1s
      #   labels:
      #     severity: critical
      #   annotations:
      #     description: bad ingress config - nginx config test failed
      #     summary: uninstall the latest ingress changes to allow config reloads to resume
      # - alert: NGINXCertificateExpiry
      #   expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
      #   for: 1s
      #   labels:
      #     severity: critical
      #   annotations:
      #     description: ssl certificate(s) will expire in less then a week
      #     summary: renew expiring certificates to avoid downtime
      # - alert: NGINXTooMany500s
      #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      #   for: 1m
      #   labels:
      #     severity: warning
      #   annotations:
      #     description: Too many 5XXs
      #     summary: More than 5% of all requests returned 5XX, this requires your attention
      # - alert: NGINXTooMany400s
      #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
      #   for: 1m
      #   labels:
      #     severity: warning
      #   annotations:
      #     description: Too many 4XXs
      #     summary: More than 5% of all requests returned 4XX, this requires your attention

  ## Improve connection draining when ingress controller pod is deleted using a lifecycle hook:
  ## With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
  ## to 300, allowing the draining of connections up to five minutes.
  ## If the active connections end before that, the pod will terminate gracefully at that time.
  ## To effectively take advantage of this feature, the Configmap feature
  ## worker-shutdown-timeout new value is 240s instead of 10s.
  ##
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown

  priorityClassName: ""

## Rollback limit
##
revisionHistoryLimit: 10

## Default 404 backend
##
defaultBackend:
  ##
  enabled: false

  name: defaultbackend
  image:
    repository: k8s.gcr.io/defaultbackend-amd64
    tag: "1.5"
    pullPolicy: IfNotPresent
    # nobody user -> uid 65534
    runAsUser: 65534
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false

  extraArgs: {}

  serviceAccount:
    create: true
    name: ""
  ## Additional environment variables to set for defaultBackend pods
  extraEnvs: []

  port: 8080

  ## Readiness and liveness probes for default backend
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
  ##
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  ## Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  affinity: {}

  ## Security Context policies for controller pods
  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ##
  podSecurityContext: {}

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Node labels for default backend pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Annotations to be added to default backend pods
  ##
  podAnnotations: {}

  replicaCount: 1

  minAvailable: 1

  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 20Mi
  # requests:
  #   cpu: 10m
  #   memory: 20Mi

  extraVolumeMounts: []
  ## Additional volumeMounts to the default backend container.
  #  - name: copy-portal-skins
  #    mountPath: /var/lib/lemonldap-ng/portal/skins

  extraVolumes: []
  ## Additional volumes to the default backend pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50

  service:
    annotations: {}

    # clusterIP: ""

    ## List of IP addresses at which the default backend service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    # loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP

  priorityClassName: ""

## Enable RBAC as per https://github.com/kubernetes/ingress/tree/master/examples/rbac/nginx and https://github.com/kubernetes/ingress/issues/266
rbac:
  create: true
  scope: false

# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false

serviceAccount:
  create: true
  name: ""

## Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: secretName

# TCP service key:value pairs
# Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tcp
##
tcp: {}
#  8080: "default/example-tcp-svc:9000"

# UDP service key:value pairs
# Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp
##
udp: {}
#  53: "kube-system/kube-dns:53"

# A base64ed Diffie-Hellman parameter
# This can be generated with: openssl dhparam 4096 2> /dev/null | base64
# Ref: https://github.com/krmichel/ingress-nginx/blob/master/docs/examples/customization/ssl-dh-param
dhParam:
```
nginx-ingress.yaml

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: nginx-monitoring
  namespace: monitoring
spec:
  rules:
    - host: dashboard.sticitt.co.za
      http:
        paths:
          - backend:
              serviceName: grafana-grafana
              servicePort: 3000
  tls:
    - hosts:
        - dashboard.sticitt.co.za
      secretName: ****
```
gce-ingress.yaml

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gcp-monitoring
  namespace: monitoring
spec:
  rules:
    - host: dashboard-gcp.sticitt.co.za
      http:
        paths:
          - backend:
              serviceName: grafana-grafana
              servicePort: 3000
  tls:
    - hosts:
        - dashboard-gcp.sticitt.co.za
      secretName: ****
```
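As a server-side counterpart to disabling HTTP/2 in the browser, the protocol can also be turned off controller-wide through the chart's `controller.config`; `use-http2` is a documented ingress-nginx ConfigMap key. A minimal sketch to merge into the values above, useful purely as a diagnostic:

```
controller:
  config:
    # ingress-nginx ConfigMap option; defaults to "true".
    # "false" serves TLS traffic over HTTP/1.1 only, so a per-connection
    # bandwidth cap shows up across many parallel connections instead of one.
    use-http2: "false"
```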

/triage support

k8s-ci-robot commented 3 years ago

@MitchDart: The label(s) triage/support cannot be applied, because the repository doesn't have them.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/6966).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

longwuyuan commented 3 years ago

What does this mean?

> one is Nginx one is GCE load balancer

Show:

```
kubectl get all,nodes,ing -A -o wide
kubectl describe <resourcetype> <resourcename> -n <resourcenamespace>   # for all related objects like pods, services, ingress-controllers, ingress-objects
```
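For example (the pod-name suffix is illustrative; the release and namespace names are taken from the manifests above):

```
kubectl describe svc nginx-controller-ingress-nginx-controller -n services
kubectl describe ing nginx-monitoring -n monitoring
kubectl describe pod nginx-controller-ingress-nginx-controller-xxxxx -n services   # replace xxxxx with the actual pod suffix
```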

/remove-kind support
/triage needs-information

k8s-triage-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close

k8s-ci-robot commented 3 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/6966#issuecomment-926877827).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.