Closed zhaowenxishi closed 3 years ago
This is normal behavior.
The client may need to create a new connection in response to a new request being sent, typically during a load spike. However, it may happen that by the time the connection is created, a previous connection frees up and the request is sent in the freed up connection rather than in the newly created one.
So the newly created connection goes back into the pool to possibly serve other requests. In your case the newly created connection remains idle in the pool.
Your client configuration has an idle timeout of 300000 ms, i.e. 5 minutes. If the server (nginx) has an idle timeout of 90 seconds, then you risk the client picking up a connection that is about to be closed by the server. This is also a very common case, and it's enough to configure the client with an idle timeout that is smaller than the server's.
Configure your client with a 75 second idle timeout if your server has 90 seconds, and all will be good (and don't worry about the spurious connections that are created during load spikes).
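As a reference, the suggested configuration could look like this. This is a minimal sketch, assuming the jetty-client 9.4.x dependency is on the classpath; it is a configuration fragment, not a complete application:

```java
import org.eclipse.jetty.client.HttpClient;

public class ClientIdleTimeoutConfig
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        // Keep the client idle timeout below nginx's keepalive_timeout (90 s),
        // so the client closes idle pooled connections before the server does.
        httpClient.setIdleTimeout(75_000); // milliseconds
        httpClient.start();
        try
        {
            // ... send requests ...
        }
        finally
        {
            httpClient.stop();
        }
    }
}
```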
Let us know whether the configuration change resolves your issue.
Thanks a lot, I will try it later. By the way, I also want to ask a question about the "timeout". I use FutureResponseListener to handle a so-called synchronous request, where we need to set a "timeout" that counts the time spent waiting for the response after the request is sent. In addition, Jetty also has an "idleTimeout", which indicates the period during which nothing is transferred on the channel. Following your suggestion, I can set idleTimeout to 75 seconds (it needs to be smaller than the server's, i.e. nginx's 90s). But what if I do need timeout=90s or larger, because the server end has plenty of tasks to handle? In this situation, I have to set "timeout" (FutureResponseListener) = 90s, and then "idleTimeout" is smaller than "timeout". I've tested this situation (idleTimeout < timeout) before: the "idleTimeout" wins because it's smaller, and I got a TimeoutException after 75s. The two timeouts seem not to be independent, and it's hard to strike a balance.
FutureResponseListener has the semantics of a total timeout and is equivalent to Request.timeout(...). Both account for the total time request + response take to complete.
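For reference, the two equivalent ways to set a total timeout could look like this. This is a sketch, assuming the jetty-client 9.4.x dependency and a placeholder URL:

```java
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.client.api.Request;
import org.eclipse.jetty.client.util.FutureResponseListener;

public class TotalTimeoutExamples
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Total timeout via FutureResponseListener: get() bounds the
        // whole request + response exchange.
        Request request = httpClient.newRequest("http://localhost:8080/api");
        FutureResponseListener listener = new FutureResponseListener(request);
        request.send(listener);
        ContentResponse response1 = listener.get(90, TimeUnit.SECONDS);

        // Equivalent total timeout via Request.timeout(...).
        ContentResponse response2 = httpClient.newRequest("http://localhost:8080/api")
                .timeout(90, TimeUnit.SECONDS)
                .send();

        httpClient.stop();
    }
}
```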
But what if I do need a timeout=90s or larger because the server end has plenty of tasks to handle.
Then your Nginx configuration is too small. And if it takes more than 90s for a request to complete, you have other problems than the idle timeout :smiley:
You can use Request.idleTimeout(...), but you will be at the mercy of the Nginx idle timeout.
Total timeout and idle timeout serve different purposes and are orthogonal (and therefore independent).
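The two timeouts can also be set independently per request. A sketch, assuming jetty-client 9.4.x, a started HttpClient, and placeholder host and values:

```java
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class PerRequestTimeouts
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Idle timeout: maximum quiet time on the connection, reset by every
        // byte read or written. Total timeout: hard cap on the whole exchange.
        ContentResponse response = httpClient.newRequest("http://nginx-host/api")
                .idleTimeout(75, TimeUnit.SECONDS)
                .timeout(210, TimeUnit.SECONDS)
                .send();

        httpClient.stop();
    }
}
```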
The "balance" you are looking for is entirely dependent on your application. Jetty can only give you configuration parameters, the rest is up to you as Jetty cannot know what your application does.
Thanks a lot. But I may still not understand clearly.
I think there is a gap between jetty-client and nginx in the definitions of some timeouts. My situation is like this:
JETTY(client)<----------->NGINX(reverse proxy)<---------->TOMCAT(server)
JETTY: (1) totalTimeout (2) idleTimeout
NGINX: (1) proxy_read_timeout: defines a timeout for reading a response from the proxied server. As the client-end developer, I set the Jetty totalTimeout at the mercy of this. (2) keepalive_timeout: the time during which a keep-alive client connection will stay open on the server side. As the client-end developer, I set the Jetty idleTimeout at the mercy of this.
I don't know the details of the original design, but we finally set proxy_read_timeout=500s and keepalive_timeout=90s, maybe based on overall efficiency considerations.
Now, when I'm developing, I know the server has to handle each request for about 100s ~ 200s. Following your suggestions and all of the above, the rules are: (1) idleTimeout < keepalive_timeout (90s). Why? Because if idleTimeout > keepalive_timeout, an EOFException/EofException will occasionally occur due to nginx's unilateral disconnect, just like the situation I mentioned at the beginning of this issue. (2) (100s ~ 200s) < totalTimeout < proxy_read_timeout (500s).
So I can only set the Jetty timeouts like this: idleTimeout=75s, totalTimeout=210s. As a result, every time I send a request I actually get a TimeoutException after 75s (idleTimeout); the "totalTimeout" never takes effect. But if I set the "idleTimeout" larger, it sometimes causes EOF.
By the way, your suggestion to set the idleTimeout to 75s is working well so far. The client-end service has run for two days without any EOF. I will keep observing the service. This also shows that it's important to set Jetty's idleTimeout smaller than nginx's keepalive_timeout, or other interaction timeouts such as the "client_header_timeout" I mentioned above.
I know the server has to handle each request for about 100s ~ 200s
That is your problem.
You cannot expect the client to wait out 100-200 seconds of server silence when the client idle timeout is 75 seconds.
You must adjust all your timeouts so that the client timeouts are smaller than the server timeouts, both in the Jetty client and in the proxy.
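The interaction can be modeled with simple arithmetic. This is a toy sketch, not Jetty code: the idle timer resets on every byte received, while the total timer runs from the start of the request, so if the server stays silent longer than the smaller of the two, that timer fires first:

```java
public class TimeoutModel
{
    // Returns which client-side timer fires first when the server is
    // completely silent for `serverSilence` seconds before responding.
    static String outcome(long idleTimeout, long totalTimeout, long serverSilence)
    {
        long firstTimer = Math.min(idleTimeout, totalTimeout);
        if (serverSilence <= firstTimer)
            return "response after " + serverSilence + "s";
        return (idleTimeout <= totalTimeout ? "idle" : "total")
                + " timeout after " + firstTimer + "s";
    }

    public static void main(String[] args)
    {
        // idleTimeout=75s, totalTimeout=210s, server computes silently for
        // 150s: the idle timeout fires first, as described in this issue.
        System.out.println(outcome(75, 210, 150)); // prints: idle timeout after 75s
    }
}
```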
But IMHO you have to solve the issue of a single request taking 200 seconds, as that seems an incredibly large amount of time.
If you have such long computations, you typically don't use HTTP, but instead a messaging system that notifies you when a computation is finished.
This issue has been automatically marked as stale because it has been a full year without activity. It will be closed if no further activity occurs. Thank you for your contributions.
Closing, not a Jetty issue.
It seems that Jetty does not remove the connection from the pool when the connection is closed by the server side. Jetty reuses a connection closed by the server side for a new request, which then causes the EOF error and removes the connection from the pool. Why does Jetty not maintain the TCP connection state?
@wei8jie what you said in the comment above is not accurate: Jetty does remove the connection from the pool when closed by the server. We have tests for that specific behavior.
This issue is closed, and apparently not related to your comment.
Please open a new issue, attach a reproducer and as much information as you can.
Jetty version 9.4.19.v20190610
Java version
OS type/version
Description We use the Jetty client API to send requests. The connection is created quickly (at the TCP level) and then pauses for a long time in the DuplexConnectionPool after creation. The request is sent to nginx (reverse proxy). Nginx has a limit, "client_header_timeout" (for example 90s). The Jetty client and nginx complete the TCP handshake quickly, but the Jetty client says "client hello" 90s after the TCP handshake, so nginx gives up the connection. Finally, the Jetty client results in an "EOF". Here are the debug logs below: