dotnet / aspnetcore

ASP.NET Core is a cross-platform .NET framework for building modern cloud-based web applications on Windows, Mac, or Linux.
https://asp.net
MIT License

Memory grows with each request, PinnedBlockMemoryPool never release memory #55490

Open depler opened 6 months ago

depler commented 6 months ago

Is there an existing issue for this?

Describe the bug

Here is code sample:

[Route("api/[controller]")]
[ApiController]
public class TestsController : ControllerBase
{
    static readonly byte[] Data = new byte[1_000_000_000]; //1gb

    [HttpGet(nameof(Download))]
    public ActionResult<byte[]> Download()
    {
        GC.Collect();

        var memory = System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize64;
        Console.WriteLine($"{memory / 1_000_000} MB");

        return Data;
    }
}

I am calling this method via curl 10 times, as follows:

curl --insecure https://localhost:5000/api/Tests/Download > NUL

And here is output of the sample:

67 MB
5216 MB
6626 MB
8014 MB
9400 MB
9464 MB
12195 MB
13592 MB
14975 MB
16356 MB

So memory grows over and over, despite the fact that Data is a static variable and GC.Collect() is called each time. Can someone please explain what is going on, and how to reduce memory consumption?
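One thing worth noting about the measurement itself: PrivateMemorySize64 covers the whole process (native allocations, pinned pool blocks, JIT code), not just the managed heap. A minimal sketch contrasting the two metrics (absolute numbers will vary by machine):

```csharp
using System;
using System.Diagnostics;

class MemoryMetrics
{
    static void Main()
    {
        // Managed-heap size as seen by the GC (forces a full collection first).
        long managed = GC.GetTotalMemory(forceFullCollection: true);

        // Whole-process private memory: includes native allocations, pinned
        // pool blocks, JIT code, etc. -- this is what the repro above measures.
        using var process = Process.GetCurrentProcess();
        long privateBytes = process.PrivateMemorySize64;

        Console.WriteLine($"Managed heap:   {managed / 1_000_000} MB");
        Console.WriteLine($"Private memory: {privateBytes / 1_000_000} MB");
    }
}
```

A large gap between the two values points at native or pooled memory rather than live managed objects, which is exactly the distinction this thread turns on.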

Expected Behavior

Much less memory consumption

Steps To Reproduce

No response

Exceptions (if any)

No response

.NET Version

8.0.204

Anything else?

No response

martincostello commented 6 months ago

Can you still repro with this change? Process implements IDisposable and you aren't disposing of it so there's a possibility you're just observing native resources associated with that process handle leaking.

- var memory = System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize64;
+ long memory;
+ using (var process = System.Diagnostics.Process.GetCurrentProcess())
+ {
+     memory = process.PrivateMemorySize64;
+ }

teoadal commented 6 months ago

I've run it in Docker on mcr.microsoft.com/dotnet/aspnet:8.0. It looks OK; I can't reproduce it.

Processor count: 2
Available memory: 6217 MB

68 MB
2102 MB
3136 MB
3138 MB
3123 MB
3104 MB
3127 MB

Code:

using System.Diagnostics;
using System.Runtime.CompilerServices;

var data = new byte[1_000_000_000];

var app = WebApplication
    .CreateSlimBuilder(args)
    .Build();

Console.WriteLine($"Processor count: {Environment.ProcessorCount}");
Console.WriteLine($"Available memory: {GetAvailableMemory()}");

var downloadApi = app.MapGroup("/download");
downloadApi.MapGet("/", () =>
{
    GC.Collect();
    Console.WriteLine(GetCurrentMemory());
    return Results.File(data, "application/pdf", "test.pdf");
});

await app.RunAsync();

return;

static string GetAvailableMemory()
{
    var memoryInfo = GC.GetGCMemoryInfo();
    return $"{memoryInfo.TotalAvailableMemoryBytes / 1_000_000} MB";
}

[MethodImpl(MethodImplOptions.NoInlining)]
static string GetCurrentMemory()
{
    using var process = Process.GetCurrentProcess();
    return $"{process.WorkingSet64 / 1_000_000} MB";
}

So, I think:

  1. You can run your example in a container to eliminate side effects.
  2. You can read about server garbage collection to understand why memory traffic depends on CPU count and available memory.
  3. Make sure you download only one file at a time in your test. If you download in parallel, the situation will change. You can use a ConcurrencyLimiter in your application's configuration to enforce this rule.
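Point 3 refers to the built-in rate-limiting middleware. A minimal sketch, assuming .NET 7+ and an illustrative policy name and limits, of capping the endpoint to one concurrent download:

```csharp
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateSlimBuilder(args);

// Concurrency limiter: at most one request in flight for this policy;
// further requests queue (up to 10) instead of running in parallel.
builder.Services.AddRateLimiter(options =>
    options.AddConcurrencyLimiter("single-download", o =>
    {
        o.PermitLimit = 1;
        o.QueueLimit = 10;
        o.QueueProcessingOrder = QueueProcessingOrder.OldestFirst;
    }));

var app = builder.Build();
app.UseRateLimiter();

var data = new byte[1_000_000];
app.MapGet("/download", () => Results.File(data, "application/octet-stream"))
   .RequireRateLimiting("single-download");

await app.RunAsync();
```

With this in place, parallel curl invocations are serialized, so per-request buffer pressure stays roughly constant during the test.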
depler commented 6 months ago

Can you still repro with this change? Process implements IDisposable and you aren't disposing of it so there's a possibility you're just observing native resources associated with that process handle leaking.

- var memory = System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize64;
+ long memory;
+ using (var process = System.Diagnostics.Process.GetCurrentProcess())
+ {
+     memory = process.PrivateMemorySize64;
+ }

Yes, I can still reproduce it. There is probably a small leak because of the missing Dispose, but not gigabytes' worth.

depler commented 6 months ago

I've run it in Docker on mcr.microsoft.com/dotnet/aspnet:8.0. It looks OK; I can't reproduce it. […]

return Results.File(data, "application/pdf", "test.pdf"); grows memory up to half of the available amount.

If you replace it with return data, it will grow up to the full available memory. Different GC settings do not change the behaviour.

martincostello commented 6 months ago

There are a number of open issues in dotnet/runtime related to increased memory consumption with containers in .NET 8:

Do any of those correlate with what you're seeing in terms of the runtime environment you use and the work your application typically does?

depler commented 6 months ago

So, memory grows dramatically because of this: https://github.com/dotnet/aspnetcore/blob/8486d31e24f30e3fa1809a95699a0adc16f448d7/src/Shared/Buffers.MemoryPool/PinnedBlockMemoryPool.cs#L35

And this: https://github.com/dotnet/aspnetcore/blob/v8.0.4/src/Shared/Buffers.MemoryPool/MemoryPoolBlock.cs

You see? Starting from a certain response size (megabytes and up), Kestrel uses a memory pool to store the response. The problem is that PinnedBlockMemoryPool never releases memory; it can only grow. As far as I know there is no way to disable this behavior, and Microsoft has been promising to fix it for 3 years 🙁
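The grow-only behavior being described can be illustrated with a simplified, hypothetical sketch (this is NOT the actual Kestrel PinnedBlockMemoryPool source): returned blocks are queued for reuse, but nothing is ever handed back to the GC, so the pool's footprint only ratchets upward.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical grow-only pool: Rent reuses a returned block when one is
// available, otherwise allocates a new one; nothing is ever trimmed.
class GrowOnlyPool
{
    private readonly ConcurrentQueue<byte[]> _blocks = new();

    // High-water mark: total blocks ever allocated by this pool.
    public int TotalAllocated { get; private set; }

    public byte[] Rent(int size = 4096)
    {
        if (_blocks.TryDequeue(out var block))
            return block;          // reuse a previously returned block

        TotalAllocated++;          // no cap and no trimming: only grows
        return new byte[size];
    }

    public void Return(byte[] block) => _blocks.Enqueue(block);
}

class Demo
{
    static void Main()
    {
        var pool = new GrowOnlyPool();

        // A burst of 100 simultaneous rents forces 100 allocations...
        var burst = new byte[100][];
        for (int i = 0; i < 100; i++) burst[i] = pool.Rent();
        for (int i = 0; i < 100; i++) pool.Return(burst[i]);

        // ...and even after every block is returned, the pool keeps all 100
        // alive for the lifetime of the process.
        Console.WriteLine(pool.TotalAllocated); // prints 100
    }
}
```

Under this pattern, one traffic spike permanently sizes the pool to its peak, which matches the accumulation reported in this thread.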

depler commented 6 months ago

I've made a code sample to demonstrate the problem and a way to reduce memory consumption: https://github.com/depler/DotnetMemoryTest

It uses Harmony (https://github.com/pardeike/Harmony) to override the original logic of PinnedBlockMemoryPool. See the method MemoryPatch.Install().

The test steps are as follows:

  1. Run DotnetMemoryTest and wait until the "Tracking memory" text appears in the console
  2. Run the download_loop.bat file for about 1 minute, then kill it
  3. Wait about 1 more minute to let the GC work and the memory amount stabilize

Memory consumption without the patch: (screenshot)

Memory consumption with the patch: (screenshot)

So this problem is definitely related to an incorrect implementation of PinnedBlockMemoryPool. It is not a memory leak by nature, but it leads to accumulating a huge amount of almost-unused byte arrays for as long as the process is alive. In the worst case it will lead to an OutOfMemoryException under heavy load.

davidfowl commented 5 months ago

While this is a problem, the easy way to avoid it is to write to the response in chunks instead of allocating large buffers per request. Though the pinned block memory pool has issues, allocating a large buffer on the large object heap per request will result in similar issues.

https://learn.microsoft.com/en-us/aspnet/core/fundamentals/best-practices?view=aspnetcore-8.0#minimize-large-object-allocations
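The chunked approach can be sketched as follows (the endpoint path and chunk size are illustrative, not from the thread): only a single small buffer is alive per request, instead of a gigabyte-sized array that has to pass through Kestrel's buffering in one piece.

```csharp
var app = WebApplication.CreateSlimBuilder(args).Build();

// Hypothetical endpoint: stream the payload in fixed-size chunks so each
// request holds one 64 KB buffer rather than a 1 GB array.
app.MapGet("/download-chunked", async (HttpContext context) =>
{
    context.Response.ContentType = "application/octet-stream";

    const int chunkSize = 64 * 1024;
    var buffer = new byte[chunkSize];
    long remaining = 1_000_000_000; // same 1 GB payload as the repro

    while (remaining > 0)
    {
        int count = (int)Math.Min(chunkSize, remaining);
        await context.Response.Body.WriteAsync(
            buffer.AsMemory(0, count), context.RequestAborted);
        remaining -= count;
    }
});

await app.RunAsync();
```

Passing context.RequestAborted also lets the write loop stop promptly if the client disconnects mid-download.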