Closed husa570 closed 5 months ago
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
/triage needs-information
* Can you please reproduce this on a minimal configuration on a cluster created using kind or minikube?
I will try to see if I can do that
* Please describe why your ingress-nginx-controller service is of type ClusterIP
We use hostPorts on the controller and have HAProxy in front of the cluster.
Thanks for updating.
Appreciate that you will try reproducing in minikube or kind. Please ensure that you use a Service of --type LoadBalancer, or the unique networking of the kind configuration as we do it in CI https://github.com/kubernetes/ingress-nginx/blob/main/build/kind.yaml . We don't test this HAProxy-in-front-of-ingress-nginx networking in CI, so this will help a lot.
At some point I hope you will be testing with a Service of type LoadBalancer in front of ingress-nginx, as well as that long snippet for ModSecurity. The idea is to install MetalLB in minikube, if you choose minikube, and configure the minikube IP address as both the start and the end of the address pool. That way the Service of type LoadBalancer gets that external IP.
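A rough sketch of that MetalLB-in-minikube idea (the addon and its interactive configure prompt are standard minikube; using a single-address pool so the Service gets exactly the node IP is the point made above):

```shell
# Find the minikube node IP; it will serve as both ends of the pool.
minikube ip

# Enable and configure the MetalLB addon; when prompted, enter the
# minikube IP as both "Load Balancer Start IP" and "Load Balancer End IP".
minikube addons enable metallb
minikube addons configure metallb

# The ingress-nginx Service of type LoadBalancer should now show that
# address as its EXTERNAL-IP.
kubectl -n ingress-nginx get svc ingress-nginx-controller
```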
I would test in stages:
/kind support
if you meant to say you can reproduce on minikube, then please do this.
From your minikube cluster, copy/paste the output of commands here in one single post;
You can actually reduce the clutter here by deleting less informative posts and posting all that important minikube info in the original issue description.
Also, the controller v1.10.x is using nginx v1.25 (it was v1.21 earlier), so we have to check if any upstream nginx changes impacted your log_format, nginx variables, modsec config, etc.
Thanks.
I asked for those command outputs so I can reproduce. I suspect that if there is a genuine problem, and if it is caused by the controller, then maybe the upgrade of the internal component nginx (stating that nginx is a component of the controller) from v1.21 to v1.25 has introduced changes that are related.
* kubectl cluster-info
* helm -n ingress-nginx get values ingress-nginx
* kubectl get all,ing -A -o wide
* kubectl -n ingress-nginx get cm -o wide
* kubectl -n ingress-nginx describe cm ingress-nginx-controller
* kubectl -n ingress-nginx describe po $ingress-nginx-controller-pod-name
* kubectl -n $appnamespace describe ing
* kubectl -n $appnamespace logs $apppodname
* Curl command, complete and exactly as used, with -v, and its response
* kubectl -n ingress-nginx logs $ingress-nginx-controller-podname
* Any other related info
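If it helps, all of those outputs can be captured in one go with something like this ($POD and $APP_NS are placeholders for the controller pod and app namespace, as in the list above):

```shell
# Collect the requested diagnostics into a single file for posting.
# Set these to match your cluster before running.
POD=ingress-nginx-controller-xxxxx
APP_NS=default

{
  kubectl cluster-info
  helm -n ingress-nginx get values ingress-nginx
  kubectl get all,ing -A -o wide
  kubectl -n ingress-nginx get cm -o wide
  kubectl -n ingress-nginx describe cm ingress-nginx-controller
  kubectl -n ingress-nginx describe po "$POD"
  kubectl -n "$APP_NS" describe ing
  kubectl -n ingress-nginx logs "$POD"
} > ingress-debug.txt 2>&1
```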
I will see what I can do; some of these commands extract information that might be sensitive for us, but parts of it I might be able to anonymize.
I'm stuck at the moment; I can't reproduce it in minikube. One difference between minikube and our cluster is that we use containerd (ver 1.7.10) and not Docker, and unfortunately I don't seem to have the knowledge to run minikube on containerd. So at the moment I'm stuck with the fact that the ingress-nginx nginx logs the same req_id twice (happened when we upgraded to 1.10.0) and ModSecurity uses its own unique_id.
Deleted most of my "clutter" posts and am closing this issue unresolved.
minikube start --container-runtime --help
should show you this
@husa570 we can do a zoom session if you think you are ok with that way to make progress
Thanks, but this was another dead end. Minikube worked as expected.
Minikube start:
minikube start --container-runtime=containerd
😄 minikube v1.33.0 on Ubuntu 20.04 (amd64)
✨ Automatically selected the docker driver. Other choices: none, ssh
📌 Using Docker driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.43 ...
💾 Downloading Kubernetes v1.30.0 preload ...
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
📦 Preparing Kubernetes v1.30.0 on containerd 1.6.31 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
The request
curl --resolve waf-demo.localdev.me:8080:127.0.0.1 http://waf-demo.localdev.me:8080/?id=1+union+select+1,2,3/*
And the log; everything works as expected, unique_id = request_id:
2024/04/24 13:19:01 [error] 775#775: *7024 [client 127.0.0.1] ModSecurity: Access denied with code 403 (phase 2). Matched "Operator `Ge' with parameter `5' against variable `TX:ANOMALY_SCORE' (Value: `15' ) [file "/etc/nginx/owasp-modsecurity-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "81"] [id "949110"] [rev ""] [msg "Inbound Anomaly Score Exceeded (Total Score: 15)"] [data ""] [severity "2"] [ver "OWASP_CRS/3.3.5"] [maturity "0"] [accuracy "0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "127.0.0.1"] [uri "/"] [unique_id "41b34f98bb4e4a703d54e6597a5785a1"] [ref ""], client: 127.0.0.1, server: waf-demo.localdev.me, request: "GET /?id=1+union+select+1,2,3/* HTTP/1.1", host: "waf-demo.localdev.me:8080"
{"time": "2024-04-24T13:19:01+00:00", "remote_address": "127.0.0.1", "remote_user": "-", "request": "GET /?id=1+union+select+1,2,3/* HTTP/1.1", "response_code": "403", "referer": "-", "useragent": "curl/7.68.0", "request_length": "115", "request_time": "0.000", "proxy_upstream_uname": "default-demo-80", "proxy_alternative_upstream_name": "", "upstream_addr": "-", "upstream_response_length": "-", "upstream_response_time": "-", "upstream_status": "-", "request_id": "41b34f98bb4e4a703d54e6597a5785a1", "x-forward-for": "127.0.0.1", "uri": "/", "request_query": "id=1+union+select+1,2,3/*", "method": "GET", "http_referrer": "-", "vhost": "waf-demo.localdev.me"}
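The two IDs can also be compared mechanically. A minimal sketch using shortened versions of the two log lines above (the sed patterns are assumptions about the exact log layout):

```shell
# ModSecurity error-log fragment and JSON access-log fragment,
# reduced to the fields of interest from the logs above.
modsec_line='[uri "/"] [unique_id "41b34f98bb4e4a703d54e6597a5785a1"] [ref ""]'
access_line='{"request_id": "41b34f98bb4e4a703d54e6597a5785a1", "uri": "/"}'

# Pull the IDs out of each line.
unique_id=$(printf '%s' "$modsec_line" | sed -n 's/.*\[unique_id "\([^"]*\)"\].*/\1/p')
request_id=$(printf '%s' "$access_line" | sed -n 's/.*"request_id": "\([^"]*\)".*/\1/p')

# On a healthy setup the two match.
if [ "$unique_id" = "$request_id" ]; then
  echo "match: $unique_id"
else
  echo "MISMATCH: modsec=$unique_id nginx=$request_id"
fi
```

On the broken cluster described in this issue, the same comparison against its logs would print the MISMATCH branch.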
The ModSecurity log doesn't have the same transaction_id (unique_id in the log) as the nginx log (request_id). I have renamed all "sensitive internal" info below.
I have tested custom error pages in this cluster earlier (about a year ago), but after that we had upgraded the ingress, so that config is (hopefully) gone.
The ingress has the annotation,
and the config in the controller has
modsecurity_transaction_id "$request_id";
set under location / for that ingress.
Nginx log with request_id=eeb4c975-6097-4bc8-9456-e22ae7c866ce
Modsec log with unique_id=0cb9025a0230b70127cc25f7591ef443
The interesting part of the annotations. In the ingress I have commented out some rule examples.
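For anyone reproducing, per-ingress ModSecurity is commonly wired up with annotations like the following (annotation names are from the ingress-nginx docs; whether they match the exact set used in this issue is an assumption, since the annotations themselves are not shown above):

```shell
# Sketch only: enable ModSecurity and the OWASP core rules on one ingress.
kubectl -n echo annotate ingress waf-ingress \
  nginx.ingress.kubernetes.io/enable-modsecurity="true" \
  nginx.ingress.kubernetes.io/enable-owasp-core-rules="true"
```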
kubectl exec -it -n ingress-nginx ingress-nginx-controller-8kgrl -- /nginx-ingress-controller --version
The nginx config for the ingress
kubectl exec -it -n ingress-nginx ingress-nginx-controller-8kgrl -- cat /etc/nginx/nginx.conf
Kubernetes version:
kubectl version
Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0
Environment:
kubectl get nodes -o wide
helm ls -A | grep -i ingress
helm -n ingress-nginx get values ingress-nginx
Current State of the controller:
kubectl describe ingressclasses
kubectl -n ingress-nginx describe pod ingress-nginx-controller-8kgrl
kubectl -n ingress-nginx describe svc
kubectl get -n ingress-nginx all,ing -o wide
kubectl -n echo describe ingress waf-ingress