yanns opened this issue 2 years ago
Thanks for the detailed write-up! Just repeating a couple of notes from Discord for anyone reading the issue here:

1. The `SynchronizedMap` here is used for tracking fibers for the new "fiber dump" feature introduced in 3.3.0. So the easiest way to remove this overhead is simply to disable diagnostics by passing the [`-Dcats.effect.tracing.mode=off`](https://typelevel.org/cats-effect/docs/tracing#configuration) argument to the JVM.
2. Cats Effect's own `WorkStealingThreadPool` is far better optimized for these diagnostics. So if possible, using the `WorkStealingThreadPool` as the shared `ExecutionContext` for your application is better than using an external `ExecutionContext` for the `IORuntime`.
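For instance (a minimal sketch, assuming the default global runtime is acceptable for the application), the compute pool of the default `IORuntime` can itself serve as the application-wide `ExecutionContext` for `Future`-based code:

```scala
import cats.effect.unsafe.IORuntime
import scala.concurrent.{ExecutionContext, Future}

// The default IORuntime's compute pool is the WorkStealingThreadPool;
// exposing it as the implicit EC lets Future-based code share it,
// instead of wiring an external ExecutionContext into the IORuntime.
implicit val ec: ExecutionContext = IORuntime.global.compute

val f: Future[Int] = Future(1 + 1)
```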
> 1. The `SynchronizedMap` here is used for tracking fibers for the new "fiber dump" feature introduced in 3.3.0. So, the easiest way to remove this overhead is simply to disable diagnostics with the [`-Dcats.effect.tracing.mode=off`](https://typelevel.org/cats-effect/docs/tracing#configuration) argument to the JVM.
This is such a good feature that it's a difficult decision to disable it.
> 2. Cats Effect's own `WorkStealingThreadPool` is far better optimized for these diagnostics. So if possible, using the `WorkStealingThreadPool` as the shared `ExecutionContext` for your application is better than using an external `ExecutionContext` for the `IORuntime`.
OK, I see.

Our application is heavily based on `Future`, and only a small portion of it uses `IO`, to enable a progressive migration. As the main execution context, mainly used to run `Future`s, I don't see how the `WorkStealingThreadPool` can work here.
`scala.concurrent.impl.ExecutionContextImpl.DefaultThreadFactory` is able to detect `scala.concurrent.blocking` code blocks to add new threads if necessary, a feature that we cannot live without.

And we also have some metrics based on the `ForkJoinPool`, like `getQueuedSubmissionCount`:
```scala
val ThreadPoolMetrics: Map[MetricName, ForkJoinPool => Long] = {
  import ThreadPoolMetricsNames._
  Map(
    active -> (_.getActiveThreadCount),
    poolSize -> (_.getPoolSize),
    task -> (_.getActiveThreadCount),
    completedTask -> (_.getRunningThreadCount),
    queuedTasks -> (_.getQueuedSubmissionCount),
    stealCount -> (_.getStealCount)
  )
}
```
I could not find any possibility to get such metrics from the `WorkStealingThreadPool`.

In the end, either we keep the current execution context also for the `IORuntime`, with the thread contention, or we have to use two execution contexts: one for the `Future` part and one for the `IO` part.
> `scala.concurrent.impl.ExecutionContextImpl.DefaultThreadFactory` is able to detect `scala.concurrent.blocking` code blocks to add new threads if necessary, a feature that we cannot live without.

Good news, `WorkStealingThreadPool` supports this 👍
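To illustrate (a hedged sketch; the file path is made up): because the pool's worker threads implement `BlockContext`, code wrapped in `scala.concurrent.blocking` is detected and compensated for, and `IO.blocking` is the idiomatic cats-effect way to mark such regions:

```scala
import cats.effect.IO
import scala.concurrent.blocking

// The WorkStealingThreadPool's worker threads implement BlockContext,
// so a scala.concurrent.blocking region is detected and compensated for:
def readAll(path: String): IO[String] =
  IO(blocking(scala.io.Source.fromFile(path).mkString))

// cats-effect also offers IO.blocking, which shifts the work onto the
// runtime's dedicated blocking pool instead:
def readAllShifted(path: String): IO[String] =
  IO.blocking(scala.io.Source.fromFile(path).mkString)
```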
> And we also have some metrics based on the `ForkJoinPool`, like `getQueuedSubmissionCount`:

`WorkStealingThreadPool` exposes similar metrics as MBeans, would that work for you? See "Fiber Runtime Observability" in https://github.com/typelevel/cats-effect/releases/tag/v3.3.0
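For example, one could discover the exposed samplers over JMX like this (a sketch; the `cats.effect.unsafe.metrics` domain name is an assumption on my part, so verify the exact `ObjectName`s against the 3.3.0 release notes):

```scala
import java.lang.management.ManagementFactory
import javax.management.ObjectName
import scala.jdk.CollectionConverters._

// Query the platform MBean server for cats-effect's runtime samplers.
// (The domain name below is assumed; check the release notes linked above.)
val server = ManagementFactory.getPlatformMBeanServer
val names  = server.queryNames(new ObjectName("cats.effect.unsafe.metrics:*"), null)
names.asScala.foreach(println)
```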
> > `scala.concurrent.impl.ExecutionContextImpl.DefaultThreadFactory` is able to detect `scala.concurrent.blocking` code blocks to add new threads if necessary, a feature that we cannot live without.
>
> Good news, `WorkStealingThreadPool` supports this 👍

I see now that `cats.effect.unsafe.WorkerThread` extends `scala.concurrent.BlockContext` 👍
> > And we also have some metrics based on the `ForkJoinPool`, like `getQueuedSubmissionCount`:
>
> `WorkStealingThreadPool` exposes similar metrics as MBeans, would that work for you? See "Fiber Runtime Observability" in https://github.com/typelevel/cats-effect/releases/tag/v3.3.0

Oh yes, this can be a way to achieve that. I'll have a look. Thanks!
From https://discord.com/channels/632277896739946517/632278585700384799/921743315169325087

In a project based on `Future`, we introduce `IO` step by step. We use `IO.unsafeToFuture()` for interoperability. I can observe the following locking:

The whole application is running with one main `ExecutionContext` using a `ForkJoinPool` built very similarly to `scala.concurrent.ExecutionContext.opportunistic`.

We build our own `IORuntime` to re-use the main `ExecutionContext` like this:

This runtime can be instantiated many times. For the observed contention, it is instantiated once and re-used.
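(The code snippet referenced above did not survive the copy into this issue. Purely as a hypothetical sketch of the pattern being described, not the original code, building an `IORuntime` on top of an existing `ExecutionContext` can look like this — all names here are my own:)

```scala
import cats.effect.unsafe.{IORuntime, IORuntimeConfig, Scheduler}
import scala.concurrent.ExecutionContext

// Hypothetical sketch: build an IORuntime that re-uses an existing
// ExecutionContext (the application's main ForkJoinPool-backed one)
// for both compute and blocking work.
def runtimeFrom(ec: ExecutionContext): IORuntime = {
  val (scheduler, shutdownScheduler) = Scheduler.createDefaultScheduler()
  IORuntime(
    compute = ec,
    blocking = ec,
    scheduler = scheduler,
    shutdown = shutdownScheduler,
    config = IORuntimeConfig()
  )
}
```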