Open jurrehart opened 9 months ago
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
This was discussed on Slack: https://kubernetes.slack.com/archives/CANQGM8BA/p1707991629229589
While the details are unclear, a simple test with the httpbun.com/delay API gives some insight into the behaviour of the timeout-related annotations. My test shows that a delay of 4 seconds and a delay of 2 seconds produce different responses:
"https://httpbun.com/delay/<seconds>"
Beware that the issue is not about the pod responding slowly, so any test with httpbun and its delay endpoint will not verify the issue. The problem is that the pod closes incoming connections once they have been idle for 5 seconds. The requirement is therefore that the NGINX controller closes its upstream connections to the pod once they have been idle for 5s or less.
The only way I have currently found to make NGINX behave that way is to set the global
upstream-keepalive-timeout: "4"
This does make NGINX close its upstream connections after 4 seconds of being idle, but doing so impacts all ingresses on that NGINX controller, and that's not desired.
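For reference, a minimal sketch of that global setting, assuming the controller reads its options from a ConfigMap named ingress-nginx-controller in the ingress-nginx namespace (both names depend on how the controller was installed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name, as in a default Helm install
  namespace: ingress-nginx         # assumed namespace
data:
  # Close idle keepalive connections to upstream pods after 4 seconds.
  # Global setting: it affects every ingress served by this controller.
  upstream-keepalive-timeout: "4"
```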
Now the documentation for the ingress annotations regarding the various proxy-xxx-timeout settings states:
Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios is required to have different values. To allow this we provide annotations that allows this customization:
Reading the related NGINX documentation for proxy_send_timeout, the parameter set by the proxy-send-timeout annotation:
Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed.
One would expect NGINX to close the connection to the upstream once NGINX has not sent any data on that connection for the configured amount of time. However, if you observe the connections to the upstream with tcpdump, you'll see that NGINX will reuse any open connection that has not been idle for more than 60s (the default) or the configured upstream-keepalive-timeout.
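For context, these are the per-ingress annotations being discussed. A minimal sketch, with the resource name, host and backend service assumed purely for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app                                           # assumed name
  annotations:
    # Rendered as proxy_connect_timeout / proxy_send_timeout / proxy_read_timeout
    # in this ingress's server/location block (values in seconds).
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "4"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "4"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "4"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com                               # assumed host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app                             # assumed backend service
                port:
                  number: 80
```

With only these annotations set, the behaviour described above is still what tcpdump shows: idle upstream connections are reused until the keepalive timeout expires.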
I agree with you.
I am unable to figure out how to create a delay with the vanilla httpd:alpine image (or nginx:alpine image for that matter)
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out in #ingress-nginx-dev on Kubernetes Slack.
What happened:
We have a web application that closes incoming connections after 5 seconds of idle time, with no possibility to change this setting in the app.
We set the following annotations on the ingress (see the ingress object state at the bottom of this report):
But upstream connections to the pod are still kept open by nginx even after more than 4s have passed between write or read operations on the upstream connection, resulting in errors being logged and 502s in the access logs.
Error log:
What you expected to happen: Having configured the annotations to close connections with no write or read operations for more than the set time, I'd expect the upstream connection to be closed once that time is exceeded.
What do you think went wrong? In the upstream_balancer upstream block there is a keepalive_timeout 60s; directive, instructing the upstream module to keep idle connections to the upstream servers open for 60s. While the annotations do generate configuration in the server block for the ingress, that configuration only applies to the proxy_pass http://upstream_balancer; directive and therefore has no impact on the keepalive connections managed inside the upstream block.
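A simplified sketch of the relevant parts of the generated nginx.conf, assuming the example host above; directive values are illustrative defaults, not copied from an actual rendered config:

```nginx
# Single upstream shared by all ingresses; endpoints are chosen in Lua.
upstream upstream_balancer {
    server 0.0.0.1;           # placeholder, real endpoints are picked by the Lua balancer
    balancer_by_lua_block {
        balancer.balance()
    }
    keepalive 320;            # idle connection pool
    keepalive_timeout 60s;    # idle upstream connections live this long (upstream-keepalive-timeout)
    keepalive_requests 10000;
}

server {
    server_name demo.example.com;          # assumed host
    location / {
        # Generated from the proxy-*-timeout annotations; they bound the proxied
        # request/response, not the lifetime of the idle keepalive connections above.
        proxy_connect_timeout 4s;
        proxy_send_timeout    4s;
        proxy_read_timeout    4s;
        proxy_pass http://upstream_balancer;
    }
}
```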
NGINX Ingress controller version:
I also noticed the same behavior on an EKS setup to which I have no admin access, but it seems to be running v1.5.1.
Kubernetes version (use kubectl version):
Environment:
Cloud provider or hardware configuration: Baremetal & VMs
OS (e.g. from /etc/os-release): Ubuntu
Kernel (e.g. uname -a): Linux apiserver 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Basic cluster related info:
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
nginx.ingress.kubernetes.io/proxy-send-timeout: "4"