Open HaniBikdeli opened 9 months ago
Are there any steps for reproduction?
@shreemaan-abhishek There is really nothing special: we ran an APISIX instance with Docker, configured an upstream with 10 nodes in it, configured some routes using that upstream, and started consuming the web service through those routes.
Note that the same setup was working fine a couple of days ago.
My main goal in requesting help is to find ways to troubleshoot this problem.
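For context, the setup described above could be sketched in APISIX's standalone `apisix.yaml` format (the URI and node addresses below are hypothetical placeholders; the real upstream has 10 nodes):

```yaml
routes:
  - uri: /api/*          # placeholder URI
    upstream_id: 1

upstreams:
  - id: 1
    type: roundrobin
    nodes:
      "10.0.0.1:8080": 1   # placeholder node addresses
      "10.0.0.2:8080": 1
      # ... 8 more nodes in the real setup
#END
```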
What's the log level? Any information on CPU usage, network, or disk I/O stats? What's the volume of requests when this failure occurs?
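For anyone checking: the error log level is controlled in APISIX's `conf/config.yaml`. A minimal sketch, assuming a standard install (the default level is `warn`):

```yaml
nginx_config:
  error_log: logs/error.log   # default log location
  error_log_level: warn       # raise to "info" or "debug" while troubleshooting
```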
Everything is OK. The Nginx log level? Every minute we see about 5 connection-close or transaction-timeout errors.
Hi @azizkhani. Does your problem persist? Have you been able to determine the cause of the connection closes? We see similar behaviour, but running on Kubernetes (APISIX 3.10).
Description
Hi there
We have a RESTful application with an approximate TPS of 2000. We also have an APISIX instance running in a Docker container on Rocky Linux, with an upstream configured with 10 nodes.
However, a number of our requests hit a connection timeout or a channel-closed error, even though sending the same request without the API gateway goes through fine. There is no trace of this in the APISIX error or access logs.
How should I troubleshoot and deal with this problem? Could it be related to the keepalive pool config in my upstream?
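If the keepalive pool is the suspect, it can be tuned per upstream. A sketch in standalone `apisix.yaml` form (the values shown are APISIX's documented defaults; the node address is a placeholder):

```yaml
upstreams:
  - id: 1
    type: roundrobin
    nodes:
      "10.0.0.1:8080": 1   # placeholder node
    keepalive_pool:
      size: 320            # idle connections kept per pool (default)
      idle_timeout: 60     # seconds before an idle connection is closed (default)
      requests: 1000       # requests served before a connection is recycled (default)
#END
```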
Environment
- `apisix version`: 3.2
- `uname -a`: Rocky
- `openresty -V` or `nginx -V`:
- `curl http://127.0.0.1:9090/v1/server_info`:
- `luarocks --version`:

(I can't get you all of this right now, but I will.)