eclipse-ee4j / jersey


"Connection pool shut down" using ApacheConnectorProvider and PoolingHttpClientConnectionManager #4449

Open namedgraph opened 4 years ago

namedgraph commented 4 years ago

After just a dozen or so requests, the client starts failing with this exception:

Exception

javax.servlet.ServletException: javax.ws.rs.ProcessingException:
java.lang.IllegalStateException: Connection pool shut down
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:410)
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)

Root Cause

javax.ws.rs.ProcessingException: java.lang.IllegalStateException:
Connection pool shut down
org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:528)
org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:296)
org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:609)
org.glassfish.jersey.internal.Errors.process(Errors.java:292)
org.glassfish.jersey.internal.Errors.process(Errors.java:274)
org.glassfish.jersey.internal.Errors.process(Errors.java:205)
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:390)
org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:608)
org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:395)
org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:295)
com.atomgraph.core.client.ClientBase.get(ClientBase.java:101)
com.atomgraph.core.client.SPARQLClient.query(SPARQLClient.java:107)
com.atomgraph.linkeddatahub.server.model.impl.ResourceBase.describe(ResourceBase.java:869)
com.atomgraph.core.model.impl.QueriedResourceBase.get(QueriedResourceBase.java:120)
com.atomgraph.server.model.impl.ResourceBase.get(ResourceBase.java:249)
com.atomgraph.linkeddatahub.server.model.impl.ResourceBase.get(ResourceBase.java:228)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
org.glassfish.jersey.internal.Errors.process(Errors.java:292)
org.glassfish.jersey.internal.Errors.process(Errors.java:274)
org.glassfish.jersey.internal.Errors.process(Errors.java:244)
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)

I've traced the cause to this invocation from the JDK's Finalizer:

at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.shutdown(PoolingHttpClientConnectionManager.java:408)
at org.apache.http.pool.AbstractConnPool.shutdown(AbstractConnPool.java:144)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.shutdown(PoolingHttpClientConnectionManager.java:411)
at org.apache.http.impl.client.HttpClientBuilder$2.close(HttpClientBuilder.java:1244)
at org.apache.http.impl.client.InternalHttpClient.close(InternalHttpClient.java:201)
at org.glassfish.jersey.apache.connector.ApacheConnector.close(ApacheConnector.java:554)
at org.glassfish.jersey.client.ClientRuntime.close(ClientRuntime.java:364)
at org.glassfish.jersey.client.ClientRuntime.finalize(ClientRuntime.java:341)
at java.lang.System$2.invokeFinalize(System.java:1275)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)

Seems to be the same issue as https://github.com/spotify/docker-client/issues/446

I've also found this problem (and maybe workaround) mentioned here: https://github.com/13567370952/docker-java-test/blob/master/src/main/java/com/github/dockerjava/jaxrs/JerseyDockerCmdExecFactory.java#L206

connManager = new PoolingHttpClientConnectionManager(getSchemeRegistry(
        originalUri, sslContext)) {

    @Override
    public void close() {
        super.shutdown();
    }

    @Override
    public void shutdown() {
        // Disable shutdown of the pool. This will be done later, when this factory is closed
        // This is a workaround for finalize method on jerseys ClientRuntime which
        // closes the client and shuts down the connection pool when it is garbage collected
    }
};
jansupol commented 4 years ago

Is it possible that you share the PoolingHttpClientConnectionManager among multiple Clients?

namedgraph commented 4 years ago

@jansupol not wittingly. I do have 2 Clients:

    register(new AbstractBinder()
    {
        @Override
        protected void configure()
        {
            bind(client).to(Client.class);
        }
    });
    register(new AbstractBinder()
    {
        @Override
        protected void configure()
        {
            bind(noCertClient).to(Client.class).named("NoCertClient");
        }
    });

but they are built from separate ApacheConnectorProvider/ClientConfig/PoolingHttpClientConnectionManager instances. Can they still be sharing the same connection pool?

I thought I had tried adding config.property(ApacheClientProperties.CONNECTION_MANAGER_SHARED, true); on both clients without any effect -- but now I tried again and so far the exception hasn't occurred 🤷‍♂️
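For context, the setup in question looks roughly like this (a sketch, not the actual application code; the tuning values are illustrative, the property names are from Jersey's ApacheClientProperties):

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.glassfish.jersey.apache.connector.ApacheClientProperties;
import org.glassfish.jersey.apache.connector.ApacheConnectorProvider;
import org.glassfish.jersey.client.ClientConfig;

public class ClientSetup
{
    public static Client newClient()
    {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);          // illustrative values
        cm.setDefaultMaxPerRoute(20);

        ClientConfig config = new ClientConfig();
        config.connectorProvider(new ApacheConnectorProvider());
        config.property(ApacheClientProperties.CONNECTION_MANAGER, cm);
        // the property discussed above -- tells the connector the manager is
        // shared, so closing a client (or its finalized runtime) does not
        // shut the pool down
        config.property(ApacheClientProperties.CONNECTION_MANAGER_SHARED, true);

        return ClientBuilder.newClient(config);
    }
}
```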

namedgraph commented 4 years ago

@jansupol I think I managed to find the culprit: WebTarget.register(). The blog post How To Use Jersey Client Efficiently helped me get on the right track. One of its comments said:

I was trying to figure out why i was having new clients created every invocation. It looked related to my use of target.property(), but it wasn't clear what the alternatives were.

I had some WebTarget.register() usages, and after I removed them the Connection pool shut down problem went away. My guess is that they were creating new ClientRuntime instances which were closed by the Finalizer and in turn were closing my Client instance as well (not sure why that should be necessary, but anyway...). This was while not using config.property(ApacheClientProperties.CONNECTION_MANAGER_SHARED, true) (which led to other problems).
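To make the failure mode concrete, the difference is roughly this (a sketch; the URL and filter class are placeholders, not from the actual code):

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Configuration;

public class RegisterScope
{
    // Problematic pattern: register() on a WebTarget forks the configuration,
    // which can produce a new ClientRuntime whose finalizer later closes the
    // shared Apache connection manager.
    static WebTarget perTargetRegistration(Client client, Class<?> filter)
    {
        return client.target("https://example.org/service") // placeholder URL
                .register(filter);                          // placeholder filter
    }

    // Pattern that avoided the problem here: register everything once, on the Client.
    static WebTarget perClientRegistration(Configuration config, Class<?> filter)
    {
        Client client = ClientBuilder.newClient(config).register(filter);
        return client.target("https://example.org/service");
    }
}
```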

I think I managed to create a reproducible JerseyTest here: https://github.com/namedgraph/jersey-test/blob/master/src/test/java/jersey/test/release_conn/ReleaseConnTest.java#L81

To cite the same comment (from 2015), this should be mentioned in the documentation:

I couldn't find any of these explicit tips in the documentation, perhaps i missed it... but if they aren't there, i would think it would be of great benefit for others if they were included in performance related section.

namedgraph commented 4 years ago

@jansupol can you confirm re. WebTarget.register()?

namedgraph commented 4 years ago

@jansupol how is this a documentation issue? The issue is with finalization when WebTarget.register() is used.

jansupol commented 4 years ago

The ConnectorProvider#getConnector says:

Get a Jersey client connector instance for a given {@link Client client} instance and Jersey client runtime {@link Configuration configuration}.

And:

Based on the supplied client and runtime configuration data, it is up to each connector provider implementation to decide whether a new dedicated connector instance is required or if the existing, previously created connector instance can be reused.

I think this basically says: For each request, a new connector instance could be created.

The issue for you is that you want to share the PoolingHttpClientConnectionManager among all the Connector instances. So you basically need to look after the PoolingHttpClientConnectionManager lifecycle, i.e. create something like:

    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager() {
        @Override
        public void shutdown() {
            // do nothing
        }

        public void realShutDown() {
            super.shutdown();
        }
    };
    config.property(ApacheClientProperties.CONNECTION_MANAGER, cm);

and make sure to call realShutDown at the end.
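The essence of that guard, shown with a plain stand-in class so it runs without the HttpClient dependency (names are illustrative):

```java
// Stand-in for a connection pool whose shutdown() may be called by a finalizer.
class Pool
{
    private boolean shutDown = false;

    public void shutdown()
    {
        shutDown = true;
    }

    public boolean isShutDown()
    {
        return shutDown;
    }
}

// Guard: swallow the (finalizer-driven) shutdown() and expose an explicit
// realShutDown() to be called when the application itself terminates.
class GuardedPool extends Pool
{
    @Override
    public void shutdown()
    {
        // intentionally a no-op
    }

    public void realShutDown()
    {
        super.shutdown();
    }
}
```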

namedgraph commented 4 years ago

@jansupol where is the sharing of the PoolingHttpClientConnectionManager happening in my test code? https://github.com/namedgraph/jersey-test/blob/master/src/test/java/jersey/test/release_conn/ReleaseConnTest.java

In any case, I think what I tried is a common use case and something other people will surely stumble upon.

jansupol commented 4 years ago

@namedgraph Try to override the shutdown in your example, and you will see.

But I agree, this may be a common thing that users can hit and it ought to be properly documented.

namedgraph commented 4 years ago

@jansupol this makes client-side filters effectively useless: I cannot use an HttpAuthenticationFeature with Client::register when it should only have WebTarget scope, and WebTarget::register cannot be used due to this issue. I have to revert to building auth headers myself 👎
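For the record, the manual fallback for Basic auth is at least short (a sketch; the helper class and the commented Jersey usage line are illustrative, not from the actual code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth
{
    // Builds the value of the HTTP Authorization header for Basic auth.
    public static String header(String user, String password)
    {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    // Jersey usage would then be per-request, e.g.:
    // target.request().header(HttpHeaders.AUTHORIZATION, header(user, password)).get();
}
```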

namedgraph commented 4 years ago

@jansupol I still don't understand your comment about sharing the PoolingHttpClientConnectionManager.

I'm creating a single Client during Application initialization and configuring it with ApacheConnectorProvider and PoolingHttpClientConnectionManager. How should I be doing it otherwise in order to avoid this issue?

I don't care how many Connector instances are being created as long as this does not impact performance negatively, and more importantly, does not crash my application.

This approach was working fine with ThreadSafeClientConnManager in Jersey 1.19.

The actual code: https://github.com/AtomGraph/LinkedDataHub/blob/develop/src/main/java/com/atomgraph/linkeddatahub/Application.java#L849