Open mdellavedova opened 8 months ago
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
/triage needs-information
Sorry, I posted by mistake before completing the form; please let me know if there's anything else I need to add.
Hi, I can see the triage/needs-information label is still there after I updated the form last week; could you please let me know if anything is missing?
Thanks for your reply
- "Ignoring ingress" does not indicate that the ingress rules were used for routing
I'm sure the rules aren't used for routing, but I have a large number of ingresses that get pointlessly evaluated, causing an increase in load on 1 of the 3 pods in the deployment, which leads to restarts (every time there is a batch of "Ignoring ingress" errors in the logs, one of the pods restarts).
- The most important aspect here is to confirm that you installed as per the link I pasted here earlier
I have followed that guide and double-checked the configuration multiple times.
- The proof needed is that the appropriate controller instance processes the appropriate ingress rule routing
That's confirmed; the 2 ingress controllers only process their own ingress rules. The issue is the "Ignoring ingress" errors and the associated pod restarts.
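For reference, a generic way to confirm which class each ingress requests and which controller value each class maps to (not necessarily the exact commands I ran):

# list the registered IngressClasses and the controller each one maps to
kubectl get ingressclasses
# list every ingress together with the class it requests
kubectl get ingress -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName'

Every ingress shows exactly one of the two class names, and each controller only serves the class it was configured with.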
I just tested 2 controllers on minikube and I could not reproduce the restart of pods.
So it seems like the error and the restarts are coinciding for you, but one is not related to the other.
Can you try to reproduce on a minikube or kind cluster?
I think you can look at
kubectl get events -A
dmesg
on the nodes.
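For example (namespace and pod name below are placeholders), the restart reason is usually recorded on the pod itself:

kubectl -n ingress-nginx get pods                           # RESTARTS column
kubectl -n ingress-nginx describe pod <controller-pod>      # Last State / Reason (e.g. OOMKilled) and Exit Code
kubectl -n ingress-nginx logs <controller-pod> --previous   # logs from the container before the last restart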
Thanks for your effort. I believe the restarts are due to the number of ingress resources being evaluated; I have a similar setup in 3 separate regions:
- region 1: 1962 ingresses managed by both controllers; controller 1 restarts: 33 over 20 days (1 of 3 pods only); controller 2 restarts: 0 over 19 days
- region 2: 426 ingresses managed by both controllers; controller 1 restarts: 0 over 20 days; controller 2 restarts: 123 over 19 days (1 of 3 pods only)
- region 3: 192 ingresses managed by both controllers; controller 1 restarts: 0 over 20 days; controller 2 restarts: 0 over 19 days
Could you please re-run your test with a higher number of ingresses? I'm not sure why there is no correlation between the number of ingress resources and the number of restarts; I will try to look at the traffic in region 2 vs region 1.
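For example, something along these lines could generate a comparable number of ingresses split across two classes (the class names, namespace, hostnames and backing service are made-up placeholders; the service does not need to exist for the objects to be created):

for i in $(seq 1 1000); do
  cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: load-test-$i
  namespace: default
spec:
  # alternate the class so each controller sees a large batch of "foreign" ingresses
  ingressClassName: $([ $((i % 2)) -eq 0 ] && echo nginx-a || echo nginx-b)
  rules:
  - host: load-test-$i.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: placeholder-svc
            port:
              number: 80
EOF
done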
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will get to your issue ASAP. If you have any question or request to prioritize this, please reach out in #ingress-nginx-dev on Kubernetes Slack.
What happened:
I have 2 ingress controllers deployed in the same namespace, set up following the instructions in these documents: https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-cant-use-multiple-namespaces-what-should-i-do and https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-controllers
The ingresses work as expected, but when I look at the logs of one ingress controller I can see multiple "Ignoring ingress" errors, suggesting that the controller is evaluating ingresses that belong to the other controller, and vice versa. This creates a high load on (one of) the controller's pods, causing it to restart.
What you expected to happen:
I would expect each ingress controller to ignore ingresses that don't have its associated ingressClassName.
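For context, the setup from the two linked guides boils down to giving each release its own IngressClass, controller value and election ID. A rough sketch of the equivalent Helm installs (release names, class names and controller values here are illustrative placeholders, not the exact values from my clusters):

helm install ingress-a ingress-nginx/ingress-nginx --namespace ingress-nginx \
  --set controller.ingressClassResource.name=nginx-a \
  --set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-a" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true \
  --set controller.electionID=ingress-a-leader

helm install ingress-b ingress-nginx/ingress-nginx --namespace ingress-nginx \
  --set controller.ingressClassResource.name=nginx-b \
  --set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-b" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true \
  --set controller.electionID=ingress-b-leader

Each ingress then selects a controller via spec.ingressClassName: nginx-a or nginx-b.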
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
NGINX Ingress controller
Release: v1.8.1
Build: dc88dce9ea5e700f3301d16f971fa17c6cfe757d
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
I have also tried the latest available helm chart, which didn't help
NGINX Ingress controller
Release: v1.9.5
Build: f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
Kubernetes version (use kubectl version):

Environment:
Kernel (uname -a): Linux ip-10-229-145-39.eu-west-1.compute.internal 5.10.198-187.748.amzn2.x86_64 #1 SMP Tue Oct 24 19:49:54 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Please mention how/where the cluster was created (kubeadm/kops/minikube/kind etc.):
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
The ingress controller(s) were installed using ArgoCD (which in turn uses Helm). Helm values below:
nginx-public-nlb-tls
ingress-controller-internal-nginx:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
kubectl describe ...
of any custom configmap(s) created and in use
How to reproduce this issue:
Anything else we need to know: