jkotas opened 5 years ago
Actively setting the hard-limit config isn't the main scenario; the hard limit's main usage is the container scenario. We are also very close to the end of 3.0, so realistically this will not make it. We can try to get it in for 3.1, though (if the policy allows it).
Repro:
```csharp
using System;
using System.Buffers;

class Program
{
    static void UsePool<T>(int blocks)
    {
        var pool = ArrayPool<T>.Shared;
        T[][] a = new T[blocks][];
        for (int size = 128; size <= 1024 * 1024; size *= 2)
        {
            for (int i = 0; i < a.Length; i++) (a[i] = pool.Rent(size)).AsSpan().Clear();
            for (int i = 0; i < a.Length; i++) pool.Return(a[i]);
            GC.Collect();
            System.Threading.Thread.Sleep(10);
        }
        var info = GC.GetGCMemoryInfo();
        Console.WriteLine($"Heap size: {info.HeapSizeBytes} Available Memory: {info.TotalAvailableMemoryBytes}");
    }

    static void Main(string[] args)
    {
        UsePool<byte>(40);
        UsePool<sbyte>(40);
        UsePool<char>(20);
        UsePool<int>(10);
        UsePool<uint>(10);
        UsePool<long>(5);
        UsePool<ulong>(5);
    }
}
```
This program keeps no more than ~40 MB alive at any given point. Run it with COMPlus_GCHeapHardLimit=0x10000000 (i.e. 256 MB). It will fail halfway through with an OutOfMemoryException because the ArrayPool caches keep too much memory alive.
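For convenience, the repro above can be run from a project directory like this (assuming the standard `dotnet` CLI; the variable name is the one from the report):

```shell
# Apply a 256 MB GC hard limit (0x10000000 bytes) and run the repro.
COMPlus_GCHeapHardLimit=0x10000000 dotnet run -c Release
```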
CC @sergiy-k
Is this work going to be completed for 6.0.0? If not, we should adjust the milestone.
Interesting issue. @jkotas I saw that you already merged a PR. What would the next steps be? Are you open to a community contribution here, or does this change need a lot of GC experience?
I am not sure what PR you are talking about.
Fixing this likely requires a lot of experience. The first step is to propose a design for the fix (in English, no code).
> I am not sure what PR you are talking about.
https://github.com/dotnet/coreclr/pull/25437
> Fixing this likely requires a lot of experience. The first step is to propose a design for the fix (in English, no code).
Okay. Then this is the wrong issue for me :D
Isn’t this a big problem when container limits are specified?
> Isn't this a big problem when container limits are specified?
Is there any way to access the limits via the Docker API or somehow more directly via containerd?
> Isn't this a big problem when container limits are specified?
If your workload uses different ArrayPools a lot, you may need to reserve more memory for the container than what would be strictly required. Is this specific issue a big problem? I am not sure.
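One way to bound pool retention today, rather than over-reserving container memory, is to use a custom pool with explicit limits instead of `ArrayPool<T>.Shared`. `ArrayPool<T>.Create` caps both the largest pooled array and the number of arrays cached per size bucket, so the cache's worst-case footprint is bounded (this is a workaround sketch, not a fix for the issue itself):

```csharp
using System.Buffers;

// A bounded pool: arrays larger than 1 MB are not pooled at all,
// and at most 4 arrays are cached per size bucket.
ArrayPool<byte> pool = ArrayPool<byte>.Create(
    maxArrayLength: 1024 * 1024,
    maxArraysPerBucket: 4);

byte[] buffer = pool.Rent(64 * 1024); // returns an array of at least 64 KB
// ... use buffer ...
pool.Return(buffer);
```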
> Is there any way to access the limits via the Docker API or somehow more directly via containerd?
The runtime reads the limits and configures the GC based on them today. The problem is not in reading the limits. The problem is in how the limits are applied across the system.
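To illustrate that no Docker/containerd API call is needed: inside a container started with e.g. `docker run -m 512m`, the runtime reads the cgroup memory limit itself and exposes the resulting GC configuration through `GC.GetGCMemoryInfo` (a small sketch; the exact values depend on runtime version and settings):

```csharp
using System;

class ContainerLimits
{
    static void Main()
    {
        // The runtime has already read the container/cgroup limits (or
        // GCHeapHardLimit) and configured the GC from them at startup.
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        Console.WriteLine($"Total available to the GC: {info.TotalAvailableMemoryBytes} bytes");
        Console.WriteLine($"High-load threshold:       {info.HighMemoryLoadThresholdBytes} bytes");
    }
}
```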
Putting this in Future. This doesn't seem to be a trivial problem to resolve, and it likely needs to happen early in a release when it does.
From https://github.com/dotnet/coreclr/pull/25437#discussion_r299797776 :
Let's say I have 32 GB of physical memory with 16 GB free, set COMPlus_GCHeapHardLimit to 200 MB, and start using the array pool heavily. The array pool is going to see HighMemoryLoadThresholdBytes=~30GB and MemoryLoadBytes=~16GB, so it will not make any attempt to trim, and it can cache way over 200 MB.
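The failure mode above can be sketched in a few lines. This is a simplified, hypothetical version of the kind of pressure check the shared pool's trimming performs (the real logic lives in the runtime and differs in detail); the point is that both inputs reflect machine-wide memory, not heap usage against the hard limit:

```csharp
using System;

class TrimHeuristicSketch
{
    // Hypothetical sketch of a pressure check driven by GC.GetGCMemoryInfo.
    // With COMPlus_GCHeapHardLimit=200MB on the 32 GB machine described above,
    // MemoryLoadBytes is ~16 GB and HighMemoryLoadThresholdBytes is ~30 GB,
    // so this never reports high pressure and trimming never kicks in,
    // even as the pool's caches blow past the 200 MB hard limit.
    public static bool IsHighMemoryPressure()
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        return info.MemoryLoadBytes >= info.HighMemoryLoadThresholdBytes;
    }

    static void Main()
    {
        Console.WriteLine($"High pressure: {IsHighMemoryPressure()}");
    }
}
```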