ktorio / ktor

Framework for quickly creating connected applications in Kotlin with minimal effort
https://ktor.io
Apache License 2.0

Occasionally Ktor utilizes 100% CPU without any load after a failed request #1433

Closed LoneEngineer closed 3 years ago

LoneEngineer commented 5 years ago

Ktor Version and Engine Used (client or server and name)
Our application uses both the Ktor server and client.
ktor: 1.2.3, kotlin: 1.3.50, server engine: Netty, client engine: CIO

        // http server
        implementation "io.ktor:ktor-server-core:$ktor_version"
        implementation "io.ktor:ktor-server-netty:$ktor_version"
        implementation "io.ktor:ktor-server-host-common:$ktor_version"
        implementation "io.ktor:ktor-server-sessions:$ktor_version"
        implementation "io.ktor:ktor-auth:$ktor_version"
        implementation "io.ktor:ktor-jackson:$ktor_version"
        implementation "io.ktor:ktor-auth-jwt:$ktor_version"
        implementation "io.ktor:ktor-locations:$ktor_version"
        // http client
        implementation "io.ktor:ktor-client-core:$ktor_version"
        implementation "io.ktor:ktor-client-core-jvm:$ktor_version"
        implementation "io.ktor:ktor-client-apache:$ktor_version"
        implementation "io.ktor:ktor-client-cio:$ktor_version"
        implementation "io.ktor:ktor-client-json:$ktor_version"
        implementation "io.ktor:ktor-client-json-jvm:$ktor_version"
        implementation "io.ktor:ktor-client-jackson:$ktor_version"
        implementation "io.ktor:ktor-client-logging:$ktor_version"
        implementation "io.ktor:ktor-client-logging-jvm:$ktor_version"
        implementation "io.ktor:ktor-client-auth:$ktor_version"
        implementation "io.ktor:ktor-client-auth-jvm:$ktor_version"

Run on a VPC: Linux Debian 9.11

Describe the bug
Monitoring reported that our server consumes 100% CPU. The server is doing nothing (it is a test environment and the issue happened late in the evening) but shows high CPU usage. In the jstack output, I found only one suspicious thread:

"Thread-8@16328" daemon prio=10 tid=0x6f nid=NA runnable
  java.lang.Thread.State: RUNNABLE
      at sun.nio.ch.EPoll.wait(EPoll.java:-1)
      at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
      at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
      - locked <0x3ff5> (a sun.nio.ch.EPollSelectorImpl)
      - locked <0x403d> (a sun.nio.ch.Util$2)
      at sun.nio.ch.SelectorImpl.selectNow(SelectorImpl.java:146)
      at io.ktor.network.selector.ActorSelectorManager.process(ActorSelectorManager.kt:81)
      at io.ktor.network.selector.ActorSelectorManager$process$1.invokeSuspend(ActorSelectorManager.kt:-1)
      at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
      at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
      at java.lang.Thread.run(Thread.java:834)

strace shows that one thread is performing a large number of epoll_wait calls (all on the same fd), each returning immediately with no events:

[pid  5938] 23:07:27.637413 epoll_wait(63, [], 1024, 0) = 0 <0.000014>
[pid  5938] 23:07:27.637473 epoll_wait(63, [], 1024, 0) = 0 <0.000016>
[pid  5938] 23:07:27.637538 epoll_wait(63, [], 1024, 0) = 0 <0.000015>
[pid  5938] 23:07:27.637598 epoll_wait(63, [], 1024, 0) = 0 <0.000043>
[pid  5938] 23:07:27.637687 epoll_wait(63, [], 1024, 0) = 0 <0.000016>
[pid  5938] 23:07:27.637739 epoll_wait(63, [], 1024, 0) = 0 <0.000015>
[pid  5938] 23:07:27.637799 epoll_wait(63, [], 1024, 0) = 0 <0.000015>
[pid  5938] 23:07:27.637860 epoll_wait(63, [], 1024, 0) = 0 <0.000016>

lsof confirms that fd 63 is an epoll instance:

java 3405 root 63u a_inode 0,13 0 17139 [eventpoll]
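The pattern above, epoll_wait called with a zero timeout in a tight loop and always returning 0, is what a selector busy-spin looks like from user space. A minimal standalone sketch (plain NIO, not Ktor's ActorSelectorManager) that produces the same syscall pattern:

```kotlin
import java.nio.channels.Selector

fun main() {
    val selector = Selector.open()
    // selectNow() issues epoll_wait with a zero timeout; with no channels
    // registered (or none ready) it returns 0 immediately. Calling it in a
    // loop without ever falling back to a blocking select() reproduces the
    // strace pattern above and pins a core at 100% CPU.
    var emptyPolls = 0
    repeat(5) {
        if (selector.selectNow() == 0) emptyPolls++
    }
    println(emptyPolls)  // prints 5
    selector.close()
}
```

Under normal operation the selector loop should block in select() when there is no work; the traces here suggest the manager somehow got stuck on the non-blocking path.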

To Reproduce
According to the monitoring system, the CPU usage spike happened at the same time as a user tried to log out with an expired session. In our system this throws an exception, which is handled with StatusPages:

    install(StatusPages) {
        ...
        Exceptions.apply {
            httpExceptions(testing)
        }
    }

    @KtorExperimentalAPI
    fun StatusPages.Configuration.httpExceptions(testing: Boolean) {
        exception<HttpError> {
            // `?: ""` keeps "null" out of the message when there is no cause
            logger.warn("${this.context.request.uri} - failed due to ${it.message}${it.cause?.let { cause -> " (caused by $cause)" } ?: ""}", it)
            if (it.code == HttpStatusCode.Unauthorized) {
                call.unauthorized(it.body)
            } else {
                it.body?.let { body ->
                    call.respond(it.code, body)
                } ?: call.respond(it.code)
            }
        }
        ...

@KtorExperimentalAPI
suspend fun ApplicationCall.unauthorized(maybeError: HttpErrorBody? = null): Unit {
    // set WWW-Authenticate (as per RFC required for 401 status)
    val realm = application.environment.config.property("authentication.realm").getString()
    val header = HttpAuthHeader.Parameterized(AuthenticationScheme, mapOf(HttpAuthHeader.Parameters.Realm to realm))
    response.headers.append(HttpHeaders.WWWAuthenticate, header.toString())
    // clear session
    sessions.clear<MySessionCookie>()
    // return status code and body
    maybeError?.let { body ->
        respond(HttpStatusCode.Unauthorized, body)
    } ?: respond(HttpStatusCode.Unauthorized)
}
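For reference, the comment in `unauthorized` refers to the RFC 7235 requirement that a 401 response carry a `WWW-Authenticate` challenge. A minimal standalone sketch of the kind of value that `HttpAuthHeader.Parameterized` renders (the scheme and realm here are hypothetical):

```kotlin
// Builds a WWW-Authenticate challenge like `Bearer realm="my-realm"`.
// This is a simplified stand-in for Ktor's HttpAuthHeader.Parameterized;
// the real renderer also handles multiple parameters and escaping.
fun wwwAuthenticate(scheme: String, realm: String): String =
    "$scheme realm=\"$realm\""

fun main() {
    println(wwwAuthenticate("Bearer", "my-realm"))  // Bearer realm="my-realm"
}
```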

In log, we have:

2019-11-08 19:31:58,277 DEBUG [nioEventLoopGroup-4-1][172.26.0.8][REQ-496] auth - session 8670142f-97a2-4035-9f17-e62774e0a7c5 has expired
2019-11-08 19:31:58,277 INFO  [nioEventLoopGroup-4-1][172.26.0.8][REQ-496] application - finished POST /web/v1/logout with null
2019-11-08 19:31:58,278 WARN  [nioEventLoopGroup-4-1][172.26.0.8][REQ-496] application - /web/v1/logout - failed due to SessionExpired com.example.http.HttpError$UnauthorizedAccess: SessionExpired
        at com.example.ApplicationKt$module$12$$special$$inlined$session$lambda$1.invokeSuspend(Application.kt:297)
        at com.example.ApplicationKt$module$12$$special$$inlined$session$lambda$1.invoke(Application.kt)
        at com.example.ApplicationKt$module$12$$special$$inlined$session$1.invokeSuspend(SessionAuth.kt:156)
        at com.example.ApplicationKt$module$12$$special$$inlined$session$1.invoke(SessionAuth.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
        at io.ktor.util.pipeline.SuspendFunctionGun.execute(PipelineContext.kt:161)
        at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:27)
        at io.ktor.auth.Authentication.processAuthentication(Authentication.kt:228)
        at io.ktor.auth.Authentication$interceptPipeline$2.invokeSuspend(Authentication.kt:123)
        at io.ktor.auth.Authentication$interceptPipeline$2.invoke(Authentication.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
        at io.ktor.util.pipeline.SuspendFunctionGun.execute(PipelineContext.kt:161)
        at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:27)
        at io.ktor.routing.Routing.executeResult(Routing.kt:147)
        at io.ktor.routing.Routing.interceptor(Routing.kt:34)
        at io.ktor.routing.Routing$Feature$install$1.invokeSuspend(Routing.kt:99)
        at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
        at io.ktor.features.ContentNegotiation$Feature$install$1.invokeSuspend(ContentNegotiation.kt:106)
        at io.ktor.features.ContentNegotiation$Feature$install$1.invoke(ContentNegotiation.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
        at io.ktor.features.StatusPages$interceptCall$2.invokeSuspend(StatusPages.kt:98)
        at io.ktor.features.StatusPages$interceptCall$2.invoke(StatusPages.kt)
        at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:91)
        at kotlinx.coroutines.CoroutineScopeKt.coroutineScope(CoroutineScope.kt:180)
        at io.ktor.features.StatusPages.interceptCall(StatusPages.kt:97)
        at io.ktor.features.StatusPages$Feature$install$2.invokeSuspend(StatusPages.kt:137)
        at io.ktor.features.StatusPages$Feature$install$2.invoke(StatusPages.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
         at io.ktor.features.CallLogging$Feature$install$1$invokeSuspend$$inlined$withMDC$1.invokeSuspend(CallLogging.kt:226)
        at io.ktor.features.CallLogging$Feature$install$1$invokeSuspend$$inlined$withMDC$1.invoke(CallLogging.kt)
        at kotlinx.coroutines.intrinsics.UndispatchedKt.startUndispatchedOrReturn(Undispatched.kt:91)
        at kotlinx.coroutines.BuildersKt__Builders_commonKt.withContext(Builders.common.kt:156)
        at kotlinx.coroutines.BuildersKt.withContext(Unknown Source)
        at io.ktor.features.CallLogging$Feature$install$1.invokeSuspend(CallLogging.kt:230)
        at io.ktor.features.CallLogging$Feature$install$1.invoke(CallLogging.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
        at io.ktor.util.pipeline.SuspendFunctionGun.execute(PipelineContext.kt:161)
        at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:27)
        at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$2.invokeSuspend(DefaultEnginePipeline.kt:118)
        at io.ktor.server.engine.DefaultEnginePipelineKt$defaultEnginePipeline$2.invoke(DefaultEnginePipeline.kt)
        at io.ktor.util.pipeline.SuspendFunctionGun.loop(PipelineContext.kt:268)
        at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(PipelineContext.kt:67)
        at io.ktor.util.pipeline.SuspendFunctionGun.proceed(PipelineContext.kt:141)
        at io.ktor.util.pipeline.SuspendFunctionGun.execute(PipelineContext.kt:161)
        at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:27)
        at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invokeSuspend(NettyApplicationCallHandler.kt:36)
        at io.ktor.server.netty.NettyApplicationCallHandler$handleRequest$1.invoke(NettyApplicationCallHandler.kt)
        at kotlinx.coroutines.intrinsics.UndispatchedKt.startCoroutineUndispatched(Undispatched.kt:55)
        at kotlinx.coroutines.CoroutineStart.invoke(CoroutineStart.kt:111)
        at kotlinx.coroutines.AbstractCoroutine.start(AbstractCoroutine.kt:154)
        at kotlinx.coroutines.BuildersKt__Builders_commonKt.launch(Builders.common.kt:54)
        at kotlinx.coroutines.BuildersKt.launch(Unknown Source)
        at io.ktor.server.netty.NettyApplicationCallHandler.handleRequest(NettyApplicationCallHandler.kt:26)
        at io.ktor.server.netty.NettyApplicationCallHandler.channelRead(NettyApplicationCallHandler.kt:20)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:56)
        at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:365)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:515)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:834)

To me, it looks very similar to https://github.com/ktorio/ktor/issues/1041, but that one was about the client, while our issue is with the server.

Expected behavior
The server should not consume CPU when there is no load.

e5l commented 5 years ago

Hi @LoneEngineer, thanks for the report!

larslorenzen commented 4 years ago

This also happens with ktor 1.3.2, kotlin 1.3.70, and the CIO engine.

We switched to the Netty engine because of this.

The only exception I see in the logs is:

Exception in thread "ktor-cio-dispatcher-worker-3" java.lang.IllegalStateException: Unable to stop writing in state Terminated
    at io.ktor.utils.io.internal.ReadWriteBufferState.stopWriting$ktor_io(ReadWriteBufferState.kt:22)
    at io.ktor.utils.io.ByteBufferChannel.restoreStateAfterWrite$ktor_io(ByteBufferChannel.kt:252)
    at io.ktor.utils.io.internal.WriteSessionImpl.complete(WriteSessionImpl.kt:33)
    at io.ktor.utils.io.ByteBufferChannel.writeSuspendSession$suspendImpl(ByteBufferChannel.kt:1911)
    at io.ktor.utils.io.ByteBufferChannel$writeSuspendSession$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedContinuation.resumeWith(DispatchedContinuation.kt:172)
    at io.ktor.utils.io.ByteBufferChannel.resumeWriteOp(ByteBufferChannel.kt:2213)
    at io.ktor.utils.io.ByteBufferChannel.tryCompleteJoining(ByteBufferChannel.kt:352)
    at io.ktor.utils.io.ByteBufferChannel.copyDirect$ktor_io(ByteBufferChannel.kt:1374)
    at io.ktor.utils.io.ByteBufferChannel$copyDirect$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
Exception in thread "ktor-cio-dispatcher-worker-2" java.lang.IllegalStateException: Unable to stop writing in state Terminated
    at io.ktor.utils.io.internal.ReadWriteBufferState.stopWriting$ktor_io(ReadWriteBufferState.kt:22)
    at io.ktor.utils.io.ByteBufferChannel.restoreStateAfterWrite$ktor_io(ByteBufferChannel.kt:252)
    at io.ktor.utils.io.internal.WriteSessionImpl.complete(WriteSessionImpl.kt:33)
    at io.ktor.utils.io.ByteBufferChannel.writeSuspendSession$suspendImpl(ByteBufferChannel.kt:1911)
    at io.ktor.utils.io.ByteBufferChannel$writeSuspendSession$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedContinuation.resumeWith(DispatchedContinuation.kt:172)
    at io.ktor.utils.io.ByteBufferChannel.resumeWriteOp(ByteBufferChannel.kt:2213)
    at io.ktor.utils.io.ByteBufferChannel.tryCompleteJoining(ByteBufferChannel.kt:352)
    at io.ktor.utils.io.ByteBufferChannel.copyDirect$ktor_io(ByteBufferChannel.kt:1374)
    at io.ktor.utils.io.ByteBufferChannel$copyDirect$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
Exception in thread "ktor-cio-dispatcher-worker-3" java.lang.IllegalStateException: Unable to stop writing in state Terminated
    at io.ktor.utils.io.internal.ReadWriteBufferState.stopWriting$ktor_io(ReadWriteBufferState.kt:22)
    at io.ktor.utils.io.ByteBufferChannel.restoreStateAfterWrite$ktor_io(ByteBufferChannel.kt:252)
    at io.ktor.utils.io.internal.WriteSessionImpl.complete(WriteSessionImpl.kt:33)
    at io.ktor.utils.io.ByteBufferChannel.writeSuspendSession$suspendImpl(ByteBufferChannel.kt:1911)
    at io.ktor.utils.io.ByteBufferChannel$writeSuspendSession$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedContinuation.resumeWith(DispatchedContinuation.kt:172)
    at io.ktor.utils.io.ByteBufferChannel.resumeWriteOp(ByteBufferChannel.kt:2213)
    at io.ktor.utils.io.ByteBufferChannel.tryTerminate$ktor_io(ByteBufferChannel.kt:365)
    at io.ktor.utils.io.ByteBufferChannel.close(ByteBufferChannel.kt:126)
    at io.ktor.utils.io.ByteBufferChannel.cancel(ByteBufferChannel.kt:153)
    at io.ktor.utils.io.ByteBufferChannel$attachJob$1.invoke(ByteBufferChannel.kt:73)
    at io.ktor.utils.io.ByteBufferChannel$attachJob$1.invoke(ByteBufferChannel.kt:24)
    at kotlinx.coroutines.InvokeOnCancelling.invoke(JobSupport.kt:1464)
    at kotlinx.coroutines.JobSupport.notifyCancelling(JobSupport.kt:1510)
    at kotlinx.coroutines.JobSupport.tryMakeCancelling(JobSupport.kt:792)
    at kotlinx.coroutines.JobSupport.makeCancelling(JobSupport.kt:752)
    at kotlinx.coroutines.JobSupport.cancelImpl$kotlinx_coroutines_core(JobSupport.kt:668)
    at kotlinx.coroutines.JobSupport.cancelInternal(JobSupport.kt:629)
    at kotlinx.coroutines.JobSupport.cancel(JobSupport.kt:614)
    at io.ktor.utils.io.ChannelJob.cancel(Coroutines.kt)
    at kotlinx.coroutines.Job$DefaultImpls.cancel$default(Job.kt:164)
    at io.ktor.network.sockets.NIOSocketImpl.close(NIOSocket.kt:63)
    at io.ktor.client.engine.cio.Endpoint$makeDedicatedRequest$1$1.invoke(Endpoint.kt:103)
    at io.ktor.client.engine.cio.Endpoint$makeDedicatedRequest$1$1.invoke(Endpoint.kt:22)
    at kotlinx.coroutines.InvokeOnCompletion.invoke(JobSupport.kt:1386)
    at kotlinx.coroutines.JobSupport.notifyCompletion(JobSupport.kt:1529)
    at kotlinx.coroutines.JobSupport.completeStateFinalization(JobSupport.kt:323)
    at kotlinx.coroutines.JobSupport.finalizeFinishingState(JobSupport.kt:240)
    at kotlinx.coroutines.JobSupport.continueCompleting(JobSupport.kt:932)
    at kotlinx.coroutines.JobSupport.access$continueCompleting(JobSupport.kt:28)
    at kotlinx.coroutines.JobSupport$ChildCompletion.invoke(JobSupport.kt:1152)
    at kotlinx.coroutines.JobSupport.notifyCompletion(JobSupport.kt:1529)
    at kotlinx.coroutines.JobSupport.completeStateFinalization(JobSupport.kt:323)
    at kotlinx.coroutines.JobSupport.finalizeFinishingState(JobSupport.kt:240)
    at kotlinx.coroutines.JobSupport.tryMakeCompletingSlowPath(JobSupport.kt:903)
    at kotlinx.coroutines.JobSupport.tryMakeCompleting(JobSupport.kt:860)
    at kotlinx.coroutines.JobSupport.makeCompletingOnce$kotlinx_coroutines_core(JobSupport.kt:825)
    at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:111)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
Exception in thread "ktor-cio-dispatcher-worker-3" java.lang.IllegalStateException: Unable to stop writing in state Terminated
    at io.ktor.utils.io.internal.ReadWriteBufferState.stopWriting$ktor_io(ReadWriteBufferState.kt:22)
    at io.ktor.utils.io.ByteBufferChannel.restoreStateAfterWrite$ktor_io(ByteBufferChannel.kt:252)
    at io.ktor.utils.io.internal.WriteSessionImpl.complete(WriteSessionImpl.kt:33)
    at io.ktor.utils.io.ByteBufferChannel.writeSuspendSession$suspendImpl(ByteBufferChannel.kt:1911)
    at io.ktor.utils.io.ByteBufferChannel$writeSuspendSession$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedContinuation.resumeWith(DispatchedContinuation.kt:172)
    at io.ktor.utils.io.ByteBufferChannel.resumeWriteOp(ByteBufferChannel.kt:2213)
    at io.ktor.utils.io.ByteBufferChannel.tryCompleteJoining(ByteBufferChannel.kt:352)
    at io.ktor.utils.io.ByteBufferChannel.copyDirect$ktor_io(ByteBufferChannel.kt:1374)
    at io.ktor.utils.io.ByteBufferChannel$copyDirect$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
Exception in thread "ktor-cio-dispatcher-worker-1" java.lang.IllegalStateException: Unable to stop writing in state Terminated
    at io.ktor.utils.io.internal.ReadWriteBufferState.stopWriting$ktor_io(ReadWriteBufferState.kt:22)
    at io.ktor.utils.io.ByteBufferChannel.restoreStateAfterWrite$ktor_io(ByteBufferChannel.kt:252)
    at io.ktor.utils.io.internal.WriteSessionImpl.complete(WriteSessionImpl.kt:33)
    at io.ktor.utils.io.ByteBufferChannel.writeSuspendSession$suspendImpl(ByteBufferChannel.kt:1911)
    at io.ktor.utils.io.ByteBufferChannel$writeSuspendSession$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedContinuation.resumeWith(DispatchedContinuation.kt:172)
    at io.ktor.utils.io.ByteBufferChannel.resumeWriteOp(ByteBufferChannel.kt:2213)
    at io.ktor.utils.io.ByteBufferChannel.tryCompleteJoining(ByteBufferChannel.kt:352)
    at io.ktor.utils.io.ByteBufferChannel.copyDirect$ktor_io(ByteBufferChannel.kt:1374)
    at io.ktor.utils.io.ByteBufferChannel$copyDirect$1.invokeSuspend(ByteBufferChannel.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

I don't know if it is related.

thread dump:

2020-03-13 16:30:09
Full thread dump OpenJDK 64-Bit Server VM (11.0.6+9-jvmci-20.0-b02 mixed mode, sharing):

Threads class SMR info:
_java_thread_list=0x00007fe5ec001d90, length=40, elements={
0x00007fe618025800, 0x00007fe618060800, 0x00007fe618062800, 0x00007fe618068800,
0x00007fe61806a800, 0x00007fe61806d000, 0x00007fe61806f000, 0x00007fe61808f800,
0x00007fe6180d6000, 0x00007fe618a57000, 0x00007fe618a58000, 0x00007fe5c4003800,
0x00007fe5c41cd800, 0x00007fe5c41db000, 0x00007fe5c4229800, 0x00007fe5c422b000,
0x00007fe59c006800, 0x00007fe5ac025800, 0x00007fe59c014800, 0x00007fe5ac027000,
0x00007fe59c016000, 0x00007fe5ac029000, 0x00007fe5b4001000, 0x00007fe5b000a000,
0x00007fe58c004800, 0x00007fe59000d800, 0x00007fe5c46c7000, 0x00007fe5780dd000,
0x00007fe5c4c99000, 0x00007fe5c4c9a000, 0x00007fe5c4c2f800, 0x00007fe5c0010800,
0x00007fe5c0012000, 0x00007fe58807a000, 0x00007fe54c005800, 0x00007fe5c4d10000,
0x00007fe568022800, 0x00007fe548009000, 0x00007fe57812f000, 0x00007fe5ec001000
}

"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=22.55ms elapsed=17014.40s tid=0x00007fe618060800 nid=0x13 waiting on condition  [0x00007fe61cf42000]
   java.lang.Thread.State: RUNNABLE
    at java.lang.ref.Reference.waitForReferencePendingList(java.base@11.0.6/Native Method)
    at java.lang.ref.Reference.processPendingReferences(java.base@11.0.6/Reference.java:241)
    at java.lang.ref.Reference$ReferenceHandler.run(java.base@11.0.6/Reference.java:213)

"Finalizer" #3 daemon prio=8 os_prio=0 cpu=0.91ms elapsed=17014.40s tid=0x00007fe618062800 nid=0x14 in Object.wait()  [0x00007fe61ce41000]
   java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(java.base@11.0.6/Native Method)
    - waiting on <0x00000000e22b9af0> (a java.lang.ref.ReferenceQueue$Lock)
    at java.lang.ref.ReferenceQueue.remove(java.base@11.0.6/ReferenceQueue.java:155)
    - waiting to re-lock in wait() <0x00000000e22b9af0> (a java.lang.ref.ReferenceQueue$Lock)
    at java.lang.ref.ReferenceQueue.remove(java.base@11.0.6/ReferenceQueue.java:176)
    at java.lang.ref.Finalizer$FinalizerThread.run(java.base@11.0.6/Finalizer.java:170)

"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.27ms elapsed=17014.40s tid=0x00007fe618068800 nid=0x15 runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"JVMCI-native CompilerThread0" #5 daemon prio=9 os_prio=0 cpu=33321.75ms elapsed=17014.40s tid=0x00007fe61806a800 nid=0x16 waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   No compile task

"C1 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=6243.83ms elapsed=17014.40s tid=0x00007fe61806d000 nid=0x17 waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   No compile task

"Sweeper thread" #7 daemon prio=9 os_prio=0 cpu=5294.44ms elapsed=17014.40s tid=0x00007fe61806f000 nid=0x18 runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"Common-Cleaner" #8 daemon prio=8 os_prio=0 cpu=10.50ms elapsed=17014.38s tid=0x00007fe61808f800 nid=0x19 in Object.wait()  [0x00007fe61c3e8000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(java.base@11.0.6/Native Method)
    - waiting on <no object reference available>
    at java.lang.ref.ReferenceQueue.remove(java.base@11.0.6/ReferenceQueue.java:155)
    - waiting to re-lock in wait() <0x00000000e22ba168> (a java.lang.ref.ReferenceQueue$Lock)
    at jdk.internal.ref.CleanerImpl.run(java.base@11.0.6/CleanerImpl.java:148)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)
    at jdk.internal.misc.InnocuousThread.run(java.base@11.0.6/InnocuousThread.java:134)

"Service Thread" #9 daemon prio=9 os_prio=0 cpu=13781.55ms elapsed=17014.32s tid=0x00007fe6180d6000 nid=0x1a runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"DefaultDispatcher-worker-1" #11 daemon prio=5 os_prio=0 cpu=9194.79ms elapsed=17012.87s tid=0x00007fe618a57000 nid=0x1c waiting on condition  [0x00007fe5c93fe000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:357)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park(CoroutineScheduler.kt:783)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark(CoroutineScheduler.kt:728)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:711)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"DefaultDispatcher-worker-2" #12 daemon prio=5 os_prio=0 cpu=3217.61ms elapsed=17012.87s tid=0x00007fe618a58000 nid=0x1d runnable  [0x00007fe5c92fd000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e3b38550> (a sun.nio.ch.Util$2)
    - locked <0x00000000e3b384f8> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:136)
    at io.ktor.network.selector.ActorSelectorManager.select(ActorSelectorManager.kt:97)
    at io.ktor.network.selector.ActorSelectorManager$select$1.invokeSuspend(ActorSelectorManager.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"DefaultDispatcher-worker-3" #13 daemon prio=5 os_prio=0 cpu=4232.32ms elapsed=17012.86s tid=0x00007fe5c4003800 nid=0x1e waiting on condition  [0x00007fe5c91fc000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:357)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park(CoroutineScheduler.kt:783)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark(CoroutineScheduler.kt:728)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:711)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"async-channel-group-0-selector" #14 daemon prio=5 os_prio=0 cpu=11187.80ms elapsed=17012.19s tid=0x00007fe5c41cd800 nid=0x1f runnable  [0x00007fe5c84fb000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e2ababf0> (a sun.nio.ch.Util$2)
    - locked <0x00000000e2abab98> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:141)
    at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.loop(AsynchronousTlsChannelGroup.java:398)
    at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.access$300(AsynchronousTlsChannelGroup.java:67)
    at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup$2.run(AsynchronousTlsChannelGroup.java:188)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"Thread-4" #15 daemon prio=5 os_prio=0 cpu=9.37ms elapsed=17012.18s tid=0x00007fe5c41db000 nid=0x20 runnable  [0x00007fe5c83fa000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e2abae40> (a sun.nio.ch.Util$2)
    - locked <0x00000000e2abade8> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:141)
    at com.mongodb.connection.TlsChannelStreamFactoryFactory$SelectorMonitor$1.run(TlsChannelStreamFactoryFactory.java:142)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"async-channel-group-1-selector" #17 daemon prio=5 os_prio=0 cpu=1646.66ms elapsed=17012.10s tid=0x00007fe5c4229800 nid=0x22 runnable  [0x00007fe5c81f8000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e2abb3b8> (a sun.nio.ch.Util$2)
    - locked <0x00000000e2abb360> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:141)
    at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.loop(AsynchronousTlsChannelGroup.java:398)
    at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup.access$300(AsynchronousTlsChannelGroup.java:67)
    at com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannelGroup$2.run(AsynchronousTlsChannelGroup.java:188)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"Thread-5" #18 daemon prio=5 os_prio=0 cpu=122.90ms elapsed=17012.10s tid=0x00007fe5c422b000 nid=0x23 runnable  [0x00007fe5abffe000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e2abb608> (a sun.nio.ch.Util$2)
    - locked <0x00000000e2abb5b0> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:141)
    at com.mongodb.connection.TlsChannelStreamFactoryFactory$SelectorMonitor$1.run(TlsChannelStreamFactoryFactory.java:142)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"cluster-ClusterId{value='5e6b729d7c3e3e26f8fbc811', description='null'}-ltc-dev-shard-00-01-aw6le.mongodb.net:27017" #20 daemon prio=5 os_prio=0 cpu=526.44ms elapsed=17012.01s tid=0x00007fe59c006800 nid=0x25 waiting on condition  [0x00007fe5abbfc000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2aa05f8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:229)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:210)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)
    - locked <0x00000000e2aa07f8> (a com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"cluster-ClusterId{value='5e6b729d7c3e3e26f8fbc810', description='null'}-ltc-dev-shard-00-01-aw6le.mongodb.net:27017" #21 daemon prio=5 os_prio=0 cpu=521.80ms elapsed=17012.01s tid=0x00007fe5ac025800 nid=0x26 waiting on condition  [0x00007fe5abafb000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2a9f7c8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:229)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:210)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)
    - locked <0x00000000e2a9f7e0> (a com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"cluster-ClusterId{value='5e6b729d7c3e3e26f8fbc811', description='null'}-ltc-dev-shard-00-00-aw6le.mongodb.net:27017" #22 daemon prio=5 os_prio=0 cpu=491.03ms elapsed=17012.00s tid=0x00007fe59c014800 nid=0x27 waiting on condition  [0x00007fe5ab9fa000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2aa34d0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:229)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:210)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)
    - locked <0x00000000e2aa34e8> (a com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"cluster-ClusterId{value='5e6b729d7c3e3e26f8fbc810', description='null'}-ltc-dev-shard-00-00-aw6le.mongodb.net:27017" #23 daemon prio=5 os_prio=0 cpu=476.36ms elapsed=17012.00s tid=0x00007fe5ac027000 nid=0x28 waiting on condition  [0x00007fe5ab8f9000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2aa4680> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:229)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:210)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)
    - locked <0x00000000e2aa4698> (a com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"cluster-ClusterId{value='5e6b729d7c3e3e26f8fbc811', description='null'}-ltc-dev-shard-00-02-aw6le.mongodb.net:27017" #24 daemon prio=5 os_prio=0 cpu=498.41ms elapsed=17012.00s tid=0x00007fe59c016000 nid=0x29 waiting on condition  [0x00007fe5ab7f8000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2aa6710> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:229)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:210)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)
    - locked <0x00000000e2aa6728> (a com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"cluster-ClusterId{value='5e6b729d7c3e3e26f8fbc810', description='null'}-ltc-dev-shard-00-02-aw6le.mongodb.net:27017" #25 daemon prio=5 os_prio=0 cpu=501.76ms elapsed=17012.00s tid=0x00007fe5ac029000 nid=0x2a waiting on condition  [0x00007fe5ab6f7000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2aa7f48> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:229)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:210)
    at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157)
    - locked <0x00000000e2aa7f60> (a com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"async-channel-group-0-handler-executor" #27 daemon prio=5 os_prio=0 cpu=20835.30ms elapsed=17011.42s tid=0x00007fe5b4001000 nid=0x2b waiting on condition  [0x00007fe5aab90000]
   java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2ae7858> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@11.0.6/LockSupport.java:194)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.6/AbstractQueuedSynchronizer.java:2081)
    at java.util.concurrent.LinkedBlockingQueue.take(java.base@11.0.6/LinkedBlockingQueue.java:433)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"async-channel-group-1-handler-executor" #26 daemon prio=5 os_prio=0 cpu=3912.81ms elapsed=17011.42s tid=0x00007fe5b000a000 nid=0x2c waiting on condition  [0x00007fe5aaa8f000]
   java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2ae7ec0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@11.0.6/LockSupport.java:194)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.6/AbstractQueuedSynchronizer.java:2081)
    at java.util.concurrent.LinkedBlockingQueue.take(java.base@11.0.6/LinkedBlockingQueue.java:433)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"async-channel-group-0-timeout-thread" #28 daemon prio=5 os_prio=0 cpu=178.71ms elapsed=17010.62s tid=0x00007fe58c004800 nid=0x2d waiting on condition  [0x00007fe5aa3fe000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2ae7948> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:1182)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:899)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"async-channel-group-1-timeout-thread" #29 daemon prio=5 os_prio=0 cpu=181.28ms elapsed=17010.61s tid=0x00007fe59000d800 nid=0x2e waiting on condition  [0x00007fe5aa2fd000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e2ae7fb0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:1182)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:899)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"pulsar-client-io-1-1" #30 daemon prio=5 os_prio=0 cpu=4595.10ms elapsed=17008.22s tid=0x00007fe5c46c7000 nid=0x2f runnable  [0x00007fe5a97fe000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e337f8c8> (a org.apache.pulsar.shade.io.netty.channel.nio.SelectedSelectionKeySet)
    - locked <0x00000000e337f870> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:136)
    at org.apache.pulsar.shade.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    at org.apache.pulsar.shade.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:824)
    at org.apache.pulsar.shade.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
    at org.apache.pulsar.shade.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)
    at org.apache.pulsar.shade.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"pulsar-timer-4-1" #32 daemon prio=5 os_prio=0 cpu=94610.20ms elapsed=17007.12s tid=0x00007fe5780dd000 nid=0x30 waiting on condition  [0x00007fe5a84ce000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
    at java.lang.Thread.sleep(java.base@11.0.6/Native Method)
    at org.apache.pulsar.shade.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:577)
    at org.apache.pulsar.shade.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:476)
    at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"ktor-cio-dispatcher-worker-1" #41 daemon prio=5 os_prio=0 cpu=2287.04ms elapsed=17001.58s tid=0x00007fe5c4c99000 nid=0x39 waiting on condition  [0x00007fe5a81cd000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:357)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park(CoroutineScheduler.kt:783)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark(CoroutineScheduler.kt:728)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:711)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"ktor-cio-dispatcher-worker-2" #42 daemon prio=5 os_prio=0 cpu=2331.67ms elapsed=17001.58s tid=0x00007fe5c4c9a000 nid=0x3a waiting on condition  [0x00007fe571ffc000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:357)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park(CoroutineScheduler.kt:783)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark(CoroutineScheduler.kt:728)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:711)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"pool-1-thread-1" #43 prio=5 os_prio=0 cpu=13210.17ms elapsed=17001.50s tid=0x00007fe5c4c2f800 nid=0x3b waiting on condition  [0x00007fe571efb000]
   java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e3650558> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@11.0.6/LockSupport.java:194)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.6/AbstractQueuedSynchronizer.java:2081)
    at java.util.concurrent.LinkedBlockingQueue.take(java.base@11.0.6/LinkedBlockingQueue.java:433)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"DefaultDispatcher-worker-4" #44 daemon prio=5 os_prio=0 cpu=1583.28ms elapsed=17001.50s tid=0x00007fe5c0010800 nid=0x3c runnable  [0x00007fe571dfa000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e3b37410> (a sun.nio.ch.Util$2)
    - locked <0x00000000e3b373b8> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:136)
    at io.ktor.network.selector.ActorSelectorManager.select(ActorSelectorManager.kt:97)
    at io.ktor.network.selector.ActorSelectorManager$select$1.invokeSuspend(ActorSelectorManager.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"DefaultDispatcher-worker-5" #45 daemon prio=5 os_prio=0 cpu=6764.31ms elapsed=17001.50s tid=0x00007fe5c0012000 nid=0x3d runnable  [0x00007fe571cf9000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e3b37510> (a sun.nio.ch.Util$2)
    - locked <0x00000000e3b374b8> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:136)
    at io.ktor.network.selector.ActorSelectorManager.select(ActorSelectorManager.kt:97)
    at io.ktor.network.selector.ActorSelectorManager$select$1.invokeSuspend(ActorSelectorManager.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"pool-1-thread-2" #46 prio=5 os_prio=0 cpu=2555.67ms elapsed=17001.42s tid=0x00007fe58807a000 nid=0x3e waiting on condition  [0x00007fe571bf8000]
   java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e3650558> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@11.0.6/LockSupport.java:194)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.6/AbstractQueuedSynchronizer.java:2081)
    at java.util.concurrent.LinkedBlockingQueue.take(java.base@11.0.6/LinkedBlockingQueue.java:433)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"kotlinx.coroutines.DefaultExecutor" #47 daemon prio=5 os_prio=0 cpu=3683.72ms elapsed=17001.42s tid=0x00007fe54c005800 nid=0x3f waiting on condition  [0x00007fe571af7000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e3b38320> (a kotlinx.coroutines.DefaultExecutor)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
    at kotlinx.coroutines.DefaultExecutor.run(DefaultExecutor.kt:83)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"DefaultDispatcher-worker-6" #48 daemon prio=5 os_prio=0 cpu=6233.02ms elapsed=17001.11s tid=0x00007fe5c4d10000 nid=0x40 runnable  [0x00007fe5717f6000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPoll.wait(java.base@11.0.6/Native Method)
    at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11.0.6/EPollSelectorImpl.java:120)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11.0.6/SelectorImpl.java:124)
    - locked <0x00000000e3b37d80> (a sun.nio.ch.Util$2)
    - locked <0x00000000e3b37d28> (a sun.nio.ch.EPollSelectorImpl)
    at sun.nio.ch.SelectorImpl.select(java.base@11.0.6/SelectorImpl.java:136)
    at io.ktor.network.selector.ActorSelectorManager.select(ActorSelectorManager.kt:97)
    at io.ktor.network.selector.ActorSelectorManager$select$1.invokeSuspend(ActorSelectorManager.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"ktor-cio-dispatcher-worker-3" #49 daemon prio=5 os_prio=0 cpu=1366683.77ms elapsed=17000.92s tid=0x00007fe568022800 nid=0x41 runnable  [0x00007fe5716f4000]
   java.lang.Thread.State: RUNNABLE
    at io.ktor.network.sockets.CIOReaderKt$attachForReadingDirectImpl$1$1$1.invokeSuspend(CIOReader.kt:82)
    at io.ktor.network.sockets.CIOReaderKt$attachForReadingDirectImpl$1$1$1.invoke(CIOReader.kt)
    at io.ktor.network.util.UtilsKt.withSocketTimeout(Utils.kt:20)
    at io.ktor.network.sockets.CIOReaderKt$attachForReadingDirectImpl$1$1.invokeSuspend(CIOReader.kt:80)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:175)
    at kotlinx.coroutines.DispatchedTaskKt.resumeUnconfined(DispatchedTask.kt:137)
    at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:108)
    at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:306)
    at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl(CancellableContinuationImpl.kt:316)
    at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:248)
    at io.ktor.network.selector.SelectorManagerSupport.handleSelectedKey(SelectorManagerSupport.kt:84)
    at io.ktor.network.selector.SelectorManagerSupport.handleSelectedKeys(SelectorManagerSupport.kt:64)
    at io.ktor.network.selector.ActorSelectorManager.process(ActorSelectorManager.kt:73)
    at io.ktor.network.selector.ActorSelectorManager$process$1.invokeSuspend(ActorSelectorManager.kt)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
    at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"ktor-cio-dispatcher-worker-4" #50 daemon prio=5 os_prio=0 cpu=2381.66ms elapsed=17000.42s tid=0x00007fe548009000 nid=0x42 waiting on condition  [0x00007fe5711f4000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:357)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park(CoroutineScheduler.kt:783)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark(CoroutineScheduler.kt:728)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:711)
    at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)

"pulsar-external-listener-3-1" #51 daemon prio=5 os_prio=0 cpu=48.52ms elapsed=11197.89s tid=0x00007fe57812f000 nid=0x43 waiting on condition  [0x00007fe5abefd000]
   java.lang.Thread.State: WAITING (parking)
    at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
    - parking to wait for  <0x00000000e33c3790> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(java.base@11.0.6/LockSupport.java:194)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.6/AbstractQueuedSynchronizer.java:2081)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:1170)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:899)
    at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
    at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

"Attach Listener" #52 daemon prio=9 os_prio=0 cpu=0.34ms elapsed=0.10s tid=0x00007fe5ec001000 nid=0x72 waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"VM Thread" os_prio=0 cpu=71257.50ms elapsed=17014.41s tid=0x00007fe61805d800 nid=0x12 runnable

"VM Periodic Task Thread" os_prio=0 cpu=4539.33ms elapsed=17014.32s tid=0x00007fe6180d8800 nid=0x1b waiting on condition

JNI global refs: 18, weak refs: 0
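
The outlier in the dump above is `ktor-cio-dispatcher-worker-3`: it reports `cpu=1366683.77ms` (roughly 23 minutes of CPU over the ~17000 s uptime) while `RUNNABLE` inside `io.ktor.network.sockets.CIOReaderKt`, which is consistent with a busy spin rather than a blocked selector. One quick way to triage a dump like this is to rank the thread headers by their `cpu=` field. The helper below is a hypothetical sketch for that, not part of the original report:

```python
import re

# jstack thread headers look like:
#   "thread-name" #49 daemon prio=5 os_prio=0 cpu=1366683.77ms elapsed=17000.92s ...
# Capture the quoted name and the cpu= value in milliseconds.
HEADER = re.compile(r'^"(?P<name>[^"]+)".*?cpu=(?P<cpu>[\d.]+)ms')

def hottest_threads(dump: str, top: int = 3):
    """Return (name, cpu_ms) pairs from a jstack dump, highest CPU first."""
    hits = []
    for line in dump.splitlines():
        m = HEADER.match(line)
        if m:
            hits.append((m.group("name"), float(m.group("cpu"))))
    return sorted(hits, key=lambda t: t[1], reverse=True)[:top]

sample = (
    '"ktor-cio-dispatcher-worker-3" #49 daemon prio=5 os_prio=0 '
    'cpu=1366683.77ms elapsed=17000.92s tid=0x00007fe568022800 nid=0x41 runnable\n'
    '"pulsar-timer-4-1" #32 daemon prio=5 os_prio=0 '
    'cpu=94610.20ms elapsed=17007.12s tid=0x00007fe5780dd000 nid=0x30 waiting'
)

print(hottest_threads(sample))
# → [('ktor-cio-dispatcher-worker-3', 1366683.77), ('pulsar-timer-4-1', 94610.2)]
```

To confirm which thread is burning CPU live (rather than cumulatively), `top -H -p <pid>` shows per-thread usage, and the TID it prints, converted to hex, matches the `nid=0x…` field in the jstack header.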
oleg-larshin commented 4 years ago

Please check the following ticket on YouTrack for follow-ups to this issue. GitHub issues will be closed in the coming weeks.

dimartiro-py commented 4 years ago

Any news on this issue? We are seeing the same problem at my company, in a service we run in production.

Could it be related to #1018?

e5l commented 3 years ago

The bug was in ktor-client-apache and should be fixed in Ktor 1.5.3.
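
Given that fix, the remedy for a build like the one in the issue body would presumably be bumping the Ktor version past 1.5.3. A sketch against the reporter's Gradle dependency list, assuming the same `ktor_version` property:

```groovy
// Sketch: raise the ktor_version used throughout the dependency list above.
// 1.5.3 is the release e5l cites as containing the ktor-client-apache fix.
ext.ktor_version = "1.5.3"

dependencies {
    implementation "io.ktor:ktor-client-apache:$ktor_version"
}
```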