Closed glassfishrobot closed 8 years ago
Reported by norbert.korodi
@rlubke said: Do you have the type of the close listeners within the Queue?
@rlubke said: Any update? Thanks!
norbert.korodi said: Hello Ryan, sorry for the late response: I found com.ning.http.client.providers.grizzly.GrizzlyConnectionsPool in the heap dump. Thanks for looking into the problem.
Norbert
norbert.korodi said: May I give you any other information? Is there anything else I can do to help?
@rlubke said: Hope to be able to investigate further this week. Will follow up if more information is needed.
@rlubke said: Norbert,
Would you mind trying this updated version [1] of AHC from the head of the 1.8.x branch?
[1] https://www.dropbox.com/s/xuob9q3lfh4oycl/async-http-client-1.8.17-SNAPSHOT.jar?dl=0
norbert.korodi said: Thank you Ryan! I will try it at the end of this month, because I am on holiday for a couple of weeks.
norbert.korodi said: I investigated the problem and the jar file, and I noticed that I had no async-http-client*.jar in my war, only grizzly-http-client-1.8.jar. I tried with both of them inside the archive and with only yours; both cases resulted in:
SEVERE: Socket accept failed
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:793)
    at java.lang.Thread.run(Thread.java:745)
So I increased the available number of sockets and decreased their cooldown time, but neither helped. Was I doing something wrong?
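For context, the tuning described above ("increased the available number of sockets and decreased the cooldown time") typically looks something like the following on Linux. The exact values are illustrative, and as the rest of the thread shows, this only delays descriptor exhaustion rather than fixing the underlying leak:

```shell
# Raise the soft limit on open file descriptors for the current shell;
# the Tomcat start script needs the same before launching the JVM.
ulimit -n 65536

# Shorten how long closed sockets linger before their descriptors are
# reusable (Linux only, needs root; the usual default is 60 seconds):
# sysctl -w net.ipv4.tcp_fin_timeout=15

ulimit -n   # verify the new limit
```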
northbright said: Today I managed to get it working, and the same misbehaviour came back. I removed WEB-INF\lib\grizzly-http-client-1.8.jar and added yours (WEB-INF\lib\async-http-client-1.8.17-SNAPSHOT.jar).
@rlubke said: Sorry for the delay, Norbert. We've been on break until this week.
Disappointed to hear the patch didn't help. I'm curious though, is it possible for you to try AHC 1.9?
norbert.korodi said: No problem Ryan, and thanks for the help.
I will try AHC 1.9 today.
norbert.korodi said: Thank you Ryan, version 1.9 worked fine. All my tests (including stability) ran fine. Specific versions and dependencies:
grizzly-websockets-2.3.22.jar
grizzly-http-client-1.9.jar
grizzly-http-2.3.22.jar
grizzly-framework-2.3.22.jar
@rlubke said: Good to hear! Do you still want to pursue a resolution for 1.8 or should we close this one out?
norbert.korodi said: I would like to, but I don't have any more time to burn on this pursuit. Please close this one out.
@rlubke said: Migrating to AHC 1.9 resolved the issue. Unsure of the root cause in 1.8.
This issue was imported from java.net JIRA GRIZZLY-1813
Marked as complete on Thursday, January 7th 2016, 8:54:12 am
In this infrastructure (described under Environment; sorry, I cannot be more precise) we used the Jersey 2 client from the type X computers to reach the type Y computers, with the connector provider set to Grizzly. Luckily, all the connections we needed reused the same TCP sessions that had been created and set up for them, but after a while (15-30 min) performance started to drop dramatically. I investigated the problem, and the heap dumps showed that the NIOConnection class has a field called closeListeners of type ConcurrentLinkedQueue. This is the field that caused the problem by growing too large.
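The failure mode described above is a classic unbounded-listener leak: when a pooled connection is reused for many requests and each request registers a close listener that is only drained on close, the queue grows for as long as the connection lives. The `NIOConnection`/`closeListeners` names come from the report; the demo class below is hypothetical and only illustrates the growth pattern:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the leak pattern reported against NIOConnection:
// a long-lived, reused connection accumulates one close listener per
// request, and nothing removes them until the connection is closed.
public class CloseListenerLeakDemo {
    interface CloseListener { void onClosed(); }

    // Mirrors the closeListeners field seen in the heap dump.
    static final ConcurrentLinkedQueue<CloseListener> closeListeners =
            new ConcurrentLinkedQueue<>();

    static void addCloseListener(CloseListener l) {
        closeListeners.add(l);
    }

    public static void main(String[] args) {
        // Simulate 10,000 requests reusing the same pooled connection,
        // each registering a listener that is never removed.
        for (int i = 0; i < 10_000; i++) {
            addCloseListener(() -> { /* release per-request resources */ });
        }
        // The queue now holds one entry per request ever made on this
        // connection; over 15-30 minutes of traffic this dominates the heap.
        System.out.println("closeListeners size: " + closeListeners.size());
    }
}
```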
Environment
Infrastructure: HTTP request -> HA proxy -> N computers of type X (running Tomcat on OpenJDK) -> HA proxy -> M computers of type Y (running Tomcat on OpenJDK).
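Selecting Grizzly as the connector provider for the Jersey 2 client, as described above, is normally done through `ClientConfig`. This is a configuration sketch only; it assumes the `jersey-grizzly-connector` module is on the classpath, and the target URL is a placeholder:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.grizzly.connector.GrizzlyConnectorProvider;

public class GrizzlyClientSetup {
    public static Client newClient() {
        // Route all Jersey 2 client traffic through the Grizzly connector
        // (backed by the Grizzly async-http-client provider).
        ClientConfig config = new ClientConfig();
        config.connectorProvider(new GrizzlyConnectorProvider());
        return ClientBuilder.newClient(config);
    }
}
```

With this setup, each type X machine's outgoing requests to the type Y machines go through Grizzly's connection pool, which is where the reported queue growth was observed.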