If I run the reproduction sample with the following HOCON configuration:
```hocon
akka {
  actor.provider = remote
  remote {
    dot-netty.tcp {
      enable-pooling = false
      hostname = 127.0.0.1
      port = 0
    }
  }
}
```
This disables buffer pooling. If I use this setting, the memory leak doesn't occur:
Only 100mb allocated to the process - 99% of it doesn't linger by the time all of the ActorSystems are terminated.
So I scaled up my reproduction sample from 30 ActorSystems to 70. The amount of memory allocated was pretty consistent with my earlier results - about 545mb allocated.
Based on the default rules used in the PooledByteBufferAllocator, on my 12-core development laptop (12 logical CPUs) I should get back 24 x 16mb buffers, roughly ~400mb. That's what I see.
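To sanity-check those numbers, here's a rough sketch of the arithmetic. The page size (8 KB), maxOrder (11, giving 16mb chunks), and two-arenas-per-logical-processor defaults are assumptions on my part, carried over from Netty's documented defaults, but they line up with the 24 x 16mb figure above:

```csharp
using System;

class PoolMath
{
    static void Main()
    {
        // Assumed PooledByteBufferAllocator defaults (mirroring Netty):
        const int pageSize = 8192;   // 8 KB pages
        const int maxOrder = 11;     // chunk = pageSize << maxOrder = 16 MB

        long chunkSize = (long)pageSize << maxOrder;      // 16 MB per chunk
        int arenas = Environment.ProcessorCount * 2;      // 24 arenas on a 12-logical-CPU box

        long totalBytes = arenas * chunkSize;             // ~384 MB
        Console.WriteLine($"{arenas} arenas x {chunkSize / (1024 * 1024)} MB = " +
                          $"{totalBytes / (1024 * 1024)} MB of pooled buffers");
    }
}
```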
So the allocation size appears to be scaled to the hardware, but what's annoying is there's no way to de-allocate any of these buffers even when DotNetty is no longer running.
https://github.com/Azure/DotNetty/blob/9e3a84189f149fe3aa149851a1f083f2333ace7c/src/DotNetty.Common/FastThreadLocal.cs#L50 - I might be able to call this when the ActorSystem gets destroyed, but unfortunately, if there are multiple ActorSystems running inside the same process, this will have consequences.
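To make the idea concrete, a sketch of what I had in mind is below. Note the assumptions: that `FastThreadLocal.Destroy()` is the static method the link above points at, and that it lives in the `DotNetty.Common` namespace - and, as noted, the cleanup is process-wide rather than per-ActorSystem:

```csharp
using Akka.Actor;
using DotNetty.Common; // assumed namespace for FastThreadLocal, per the linked file

var system = ActorSystem.Create("churn");

// Sketch only: tear down DotNetty's thread-local state when this system shuts down.
// FastThreadLocal.Destroy() clears the thread-local storage for *every*
// FastThreadLocal in the process, so any other ActorSystem still running its
// own DotNetty transport in the same process would lose its pooled state too -
// which is why this isn't a safe general-purpose fix.
system.RegisterOnTermination(() => FastThreadLocal.Destroy());
```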
So this doesn't appear to be a critical issue after all. The memory is capped based on the size of your hardware. If this is an issue for you, you can disable memory pooling by setting `akka.remote.dot-netty.tcp.enable-pooling = false`.
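For anyone hitting this in a test suite, here's a minimal sketch of applying that setting programmatically when creating an ActorSystem (the system name is arbitrary):

```csharp
using Akka.Actor;
using Akka.Configuration;

var config = ConfigurationFactory.ParseString(@"
    akka {
        actor.provider = remote
        remote.dot-netty.tcp {
            enable-pooling = false
            hostname = 127.0.0.1
            port = 0
        }
    }");

// Pooling is disabled for this system's DotNetty transport, trading some
// throughput for memory that is actually released when the system terminates.
var system = ActorSystem.Create("no-pooling", config);
```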
Version: v1.3.16 and all prior versions.
Reproduction: https://github.com/Aaronontheweb/DotNettyPoolAreaLeak - run this sample, using .NET Core 3.0 or .NET 4.6.1.
Expected behavior: all system-specific resources are released once the ActorSystem is terminated.

Actual behavior: in this sample we spawn 31 ActorSystems and successively churn connections to the original target ActorSystem. Even after all ActorSystem instances are terminated, there's still ~500mb of memory allocated - and as it turns out, the majority of this is leaked resources from previous DotNetty connections, specifically the byte buffer pooling.

In most applications this won't be a problem, but it's a big issue for our test suite and for applications that make temporary use of ActorSystems (i.e. thick client software). We should change this behavior such that when DotNetty is terminated, so are all of its resources for that ActorSystem.