openshift / openshift-restclient-java


Connection leak: [ ConnectionPool] okhttp3.OkHttpClient: A connection to https://<url>/ was leaked. Did you forget to close a response body? #453

Closed germanparente closed 3 years ago

germanparente commented 4 years ago

exception found in:

java.lang.Throwable: response.body().close()
    at okhttp3.internal.platform.Platform.getStackTraceForCloseable(Platform.kt:143) ~[okhttp-4.1.1.jar!/:na]
    at okhttp3.internal.connection.Transmitter.callStart(Transmitter.kt:111) ~[okhttp-4.1.1.jar!/:na]
    at okhttp3.RealCall.enqueue(RealCall.kt:77) ~[okhttp-4.1.1.jar!/:na]
    at okhttp3.internal.ws.RealWebSocket.connect(RealWebSocket.kt:162) ~[okhttp-4.1.1.jar!/:na]
    at okhttp3.OkHttpClient.newWebSocket(OkHttpClient.kt:248) ~[okhttp-4.1.1.jar!/:na]
    at com.openshift.internal.restclient.api.capabilities.PodExec.start(PodExec.java:111) ~[openshift-restclient-java-9.0.0.Final.jar!/:9.0.0.Final]

It seems this exception originates in PodExec.java, at the call to okhttp3.OkHttpClient.newWebSocket (just before the "return adapter" statement).
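
For reference, the okhttp3 pattern involved looks roughly like this (a minimal, self-contained sketch of the API, not the actual PodExec source; the URL and class name are placeholders):

    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;
    import okhttp3.WebSocket;
    import okhttp3.WebSocketListener;

    public class ExecWebSocketSketch {

        public static void main(String[] args) {
            OkHttpClient okClient = new OkHttpClient();
            Request request = new Request.Builder()
                    .url("wss://example.invalid/api/v1/namespaces/demo/pods/demo/exec") // placeholder URL
                    .build();

            // newWebSocket() takes a connection from the client's pool; the warning in the
            // issue title is okhttp's leak detection firing when that connection is never released.
            WebSocket socket = okClient.newWebSocket(request, new WebSocketListener() {
                @Override
                public void onOpen(WebSocket webSocket, Response response) {
                    // In PodExec the adapter forwards frames to the exec output listener here.
                }
            });

            // Whoever holds the WebSocket reference (the adapter in PodExec) has to end it.
            socket.close(1000, "done");
            okClient.dispatcher().executorService().shutdown();
        }
    }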

A developer has shared these notes:

I have my own MyPodExec.java based on PodExec.java. I changed the stop method to call this.call.close(1000, "Client asking to stop."); instead of the original this.call.cancel(); from PodExec.java:

    @Override
    public void stop() {
        if (call != null) {
            // Avoid the connection leak: close the websocket instead of cancelling it
            this.call.close(1000, "Client asking to stop.");
        } else {
            shouldStop = true;
        }
    }

I based this change on the similar existing adapter PodLogListenerAdapter from PodLogRetrievalAsync.java:

    public void stop() {
        try {
            if (this.open.get()) {
                this.wsClient.close(1000, "Client asking to stop.");
            }
        } catch (Exception var5) {
            PodLogRetrievalAsync.LOG.debug("Unable to stop the watch client", var5);
        } finally {
            this.wsClient = null;
        }
    }

Now, after executing a command on the pods and making sure the pod is closed before executing it, I call iStoppable.stop() so the websocket is fully closed. So it was not really a missing response.body().close() but the websocket that was causing the leak.

That could be a way to fix this issue.
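
To make the proposed change concrete: in okhttp3, WebSocket.cancel() releases the connection immediately without a closing handshake, whereas WebSocket.close(1000, reason) initiates a graceful shutdown. Below is a minimal sketch of an adapter exposing a stop() along those lines; the Stoppable interface and class names are local stand-ins, not the library's actual IStoppable/PodExec types, and whether cancel() alone explains the leak warning is exactly what this issue is discussing.

    import okhttp3.WebSocket;

    public class ExecAdapterSketch {

        // Local stand-in for the library's IStoppable; not the actual interface.
        interface Stoppable {
            void stop();
        }

        static Stoppable adapt(final WebSocket socket) {
            return new Stoppable() {
                @Override
                public void stop() {
                    // close() initiates the websocket closing handshake (1000 = normal closure)
                    // and lets okhttp release the underlying connection back to the pool.
                    socket.close(1000, "Client asking to stop.");
                }
            };
        }
    }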

adietish commented 4 years ago

Hi @germanparente

Sorry for the late response, I was off on PTO. Great catch. I don't understand, though, why okClient.newWebSocket(request, adapter) would cause the obviously existing websocket to close. I'm missing the context. My guess is that you're executing on different pods and closing each pod before you execute on the next one? Can you please enlighten me?

openshift-bot commented 4 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 4 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 3 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci-robot commented 3 years ago

@openshift-bot: Closing this issue.

In response to [this](https://github.com/openshift/openshift-restclient-java/issues/453#issuecomment-751792171):

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting `/reopen`.
> Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
> Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.