@mkluzacek You have to configure a proper max idle timeout for the connections in the connection pool. Please take a look at this documentation: https://projectreactor.io/docs/netty/release/reference/index.html#connection-pool-timeout
Thanks for the response. I forgot to mention that I already tried setting reactor.netty.pool.maxLifeTime via JAVA_OPTS, i.e. -Dreactor.netty.pool.maxLifeTime=45000, but I still got the error. I think it should be equivalent to the settings mentioned in the docs?
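For reference, the programmatic equivalent of that system property would be something like this (a sketch, assuming the reactor-netty 1.x ConnectionProvider API; the pool name "custom" is illustrative):

```java
import java.time.Duration;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

// Equivalent of -Dreactor.netty.pool.maxLifeTime=45000: a pooled
// connection older than 45s is closed instead of being reused.
ConnectionProvider provider = ConnectionProvider.builder("custom")
        .maxLifeTime(Duration.ofMillis(45000))
        .build();
HttpClient client = HttpClient.create(provider);
```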
@mkluzacek You need to set maxIdleTime. This relates to idle connections, the same thing you have on Tomcat with keepAliveTimeout ("The number of milliseconds this Connector will wait for another HTTP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute. Use a value of -1 to indicate no (i.e. infinite) timeout."), which represents the time that the connection is idle.
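A minimal sketch of that configuration (assuming reactor-netty 1.x; the 50s value is illustrative, chosen to stay below Tomcat's default 60s keep-alive timeout):

```java
import java.time.Duration;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

// Evict idle connections from the pool before the server's keep-alive
// timeout (60s by default on Tomcat) closes them on the remote side.
ConnectionProvider provider = ConnectionProvider.builder("custom")
        .maxIdleTime(Duration.ofSeconds(50))
        .build();
HttpClient client = HttpClient.create(provider);
```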
OK, I see now. I'll try setting maxIdleTime. But shouldn't maxLifeTime work as well? It is a bit excessive, as it could close a connection that is completely fine, but it should solve this problem nonetheless, or am I missing something?
Still got the error even with maxIdleTime set to 45 secs. I double-checked that the configuration is set correctly via JAVA_OPTS, and it sets the value correctly.
@mkluzacek Can you prepare an example project that we can run in order to reproduce the issue?
Unfortunately I am not able to reproduce it with a simple example project. Must be some combination of dependencies and other factors that are causing it. Still thanks for the help.
For anyone who stumbles upon this: I ran into this issue specifically with Azure load balancers, because the default load balancer settings do not send a close notification when the idle timeout is hit. See more information here. Ultimately I resolved this by configuring the maxIdleTime mentioned here. I wanted to use TCP keep-alive, but it is not supported by Java on Windows.
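For completeness, requesting TCP keep-alive on the client sockets looks roughly like this (a sketch; as noted above, Java on Windows does not support it, and the probe intervals are tuned at the OS level):

```java
import io.netty.channel.ChannelOption;
import reactor.netty.http.client.HttpClient;

// Ask the OS to probe idle connections, so one silently dropped by a
// load balancer is detected instead of failing on the next request.
HttpClient client = HttpClient.create()
        .option(ChannelOption.SO_KEEPALIVE, true);
```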
Hi @mkluzacek / @violetagg, I have come across the same issue, which is happening intermittently.
We have a WebFlux app which makes an HTTP call to a backend server hosted on-prem using WebClient. It works most of the time, but in a few scenarios we see the error "Connection prematurely closed BEFORE response" intermittently.
However, we are unable to narrow down the specific use case in which this happens, so we put a timer on the API execution time and noticed that the time taken for all the premature failures is around 40-60 milliseconds. I assume this 40-60 ms is the network time from my app to the load balancer.
@mkluzacek Were you able to resolve the issue mentioned above? If YES, could you share what it is?
@KarteekGodavarthi Did you check this FAQ page https://projectreactor.io/docs/netty/release/reference/index.html#faq.connection-closed
Hi @violetagg,
We ran into the same issue, where it says PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms. We are using a WebClient POST call without any timeout config. Looking at the solution above, it seems we need to set maxIdleTime, which we are trying to set like this:
```java
import java.time.Duration;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

HttpClient client = HttpClient.create(ConnectionProvider.builder("custom")
        .maxIdleTime(Duration.ofSeconds(120))
        .build());

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(client))
        .build();
```
We tried the debugging possibilities mentioned in https://projectreactor.io/docs/netty/release/reference/index.html#faq.connection-closed but no luck.
But looking at the Reactor documentation, they also mention `pendingAcquireTimeout` to be set, and I'm a bit confused whether I need to set this as well. Since we are not able to reproduce it locally, we need to test/try it in prod; can someone suggest if this is the right approach?
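For reference, pendingAcquireTimeout sits on the same builder as maxIdleTime (a sketch; the 45s value simply mirrors the default reported in the exception message):

```java
import java.time.Duration;
import reactor.netty.resources.ConnectionProvider;

// pendingAcquireTimeout bounds how long a caller may wait for a pooled
// connection; the PoolAcquireTimeoutException above fires when it elapses.
ConnectionProvider provider = ConnectionProvider.builder("custom")
        .maxIdleTime(Duration.ofSeconds(120))
        .pendingAcquireTimeout(Duration.ofSeconds(45))
        .build();
```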
@kkajdd I think there is some misunderstanding here. As described in the documentation:
maxIdleTime - The maximum time (resolution: ms) that this connection stays idle in the connection pool.
So this applies to connections that are idle in the connection pool.
If you want to specify a timeout for the request/response, then you need:
HttpClient.responseTimeout(...);
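For example (a sketch; the 30s value is illustrative):

```java
import java.time.Duration;
import reactor.netty.http.client.HttpClient;

// Fail a request if the full response is not received within 30s.
HttpClient client = HttpClient.create()
        .responseTimeout(Duration.ofSeconds(30));
```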
Please ask your questions on Gitter or Stack Overflow, as recommended in the README.
I have the same problem with the latest reactor-netty. Setting maxIdleTime to a large value (2 mins) doesn't help; I get this exception sooner. Has anyone figured out what it is?
Update: It looks like it's happening because SOCKS5 closes the connection in the middle of the HTTP request. https://github.com/net-byte/socks5-server/issues/8
@violetagg On the maxIdleTime: would setting it to 0 (zero) essentially mean that the connection is closed immediately? In effect, one would not be using the pool if one sets it to zero?
yes
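That is, a zero idle time effectively disables reuse (a sketch illustrating the effect, not a recommended setting):

```java
import java.time.Duration;
import reactor.netty.resources.ConnectionProvider;

// With maxIdleTime of zero, a connection returned to the pool is
// immediately eligible for eviction, so effectively nothing is reused.
ConnectionProvider provider = ConnectionProvider.builder("no-reuse")
        .maxIdleTime(Duration.ZERO)
        .build();
```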
When using WebClient to call a REST API on another server (running Tomcat), the WebClient sometimes doesn't acknowledge the connection close from the server and later tries to reuse the already closed connection. The connection is closed by the Tomcat server after 60s (default keep-alive settings). Most of the time the connection gets closed correctly on the client side, but sometimes the client just sends [ACK] and no [ACK, FIN] and keeps the connection open.
Expected Behavior
The client should close the connection after receiving [FIN] from the server.
Actual Behavior
The connection is not closed by the client and is later reused, resulting in:
reactor.netty.http.client.PrematureCloseException: Connection prematurely closed BEFORE response
Full stacktrace:
Steps to Reproduce
Use WebClient to call a REST API on a Tomcat server. We are sending around 700 requests/s, 300,000 requests in total, and we get 1 occurrence of the error mentioned before.
There are around 7,000 correctly closed connections and a single incorrect one, so it's really uncommon, but we can reproduce it in 100% of cases.
This is the WebClient, with no special configuration:
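(The original snippet was not included above; a WebClient with no special configuration would look roughly like this:)

```java
import org.springframework.web.reactive.function.client.WebClient;

// Default WebClient: Reactor Netty connector with the default
// connection pool and no explicit timeouts.
WebClient webClient = WebClient.builder().build();
```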
Possible Solution
Your Environment
Spring Boot Starter Webflux: 2.4.5
Spring Boot Starter Reactor Netty: 2.4.5
Reactor Core: 3.4.5
Java version: openjdk version "11.0.6" 2020-01-14
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.6+10)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.6+10, mixed mode)
Running in docker: Linux 27e260f5943f 4.19.0-0.bpo.8-amd64 #1 SMP Debian 4.19.98-1~bpo9+1 (2020-03-09) x86_64 GNU/Linux
Here are the relevant logs: logs_closed_before.csv
And here is the TCP dump: tcp_dump.csv
Client is: 172.18.0.18
Server is: 172.18.0.13