@cshung @davidfowl @mangod9 Create a `runtimeconfig.template.json` file in the project root, then build and run; the app will then use the .NET 6 GC.
Put this inside it:

```json
{
  "configProperties": {
    "System.GC.Name": "clrgc.dll"
  }
}
```

You will see that it's using the .NET 6 GC at the URL http://localhost:47513/api/gcmode, at the bottom of the page.
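(The sample's `/api/gcmode` endpoint isn't reproduced in this thread; below is a minimal sketch of what such a check could look like, assuming the value is read back via `AppContext.GetData`, which surfaces `configProperties` from runtimeconfig.json.)

```csharp
// Hypothetical sketch of an /api/gcmode-style endpoint (not the sample's actual code).
// configProperties from runtimeconfig.json are surfaced via AppContext.GetData, so the
// endpoint can report whether the standalone .NET 6 GC (clrgc.dll) is configured.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/api/gcmode", () =>
{
    var gcName = AppContext.GetData("System.GC.Name") as string ?? "(bundled GC)";
    return Results.Ok(new
    {
        GCName = gcName,
        ServerGC = System.Runtime.GCSettings.IsServerGC
    });
});

app.Run();
```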
This is exactly the .NET 6 GC experiment I did (I used environment variables instead of the runtime config, but it should be the same), and I observed no difference on my end.
@cshung @davidfowl @mangod9 Just to be clear: I observed the difference between the .NET 6 and .NET 7 GC in production (not the sample app) over several hours, as demonstrated three weeks ago. I wasn't actively looking for this comparison; it's just something that stood out as we switched from 7 to 6. I wouldn't get too caught up in the difference between 6 and 7, as it's difficult to spot a pattern over a small period of time. The main issue for us is that both gobble up memory quickly and don't release it, unlike .NET 5. If you can get 6 and 7, or just 7, to release memory like .NET 5 (pre-upgrade to 7), then we are in a good place.
[Image: memory usage graph from 3 weeks ago] [Image: memory usage graph from 2 weeks ago]
We are in the process of investigating the issue. Here are some preliminary findings.
With respect to the locally reproducible working set regression I found between 7 and 8, I have found the reason why heap balancing works better on .NET 8. This is due to the change here.
In particular, we changed:

```cpp
gc_heap::min_gen0_balance_delta = (dd_min_size (gen0_dd) >> 3);
```

to

```cpp
gc_heap::min_gen0_balance_delta = (dd_min_size (gen0_dd) >> 6);
```

As `min_gen0_balance_delta` decreases, the allocator tries harder to balance objects across heaps. That's why:

- the heaps are fuller by the time we perform the first GC (the first allocation budget is the same, but the old implementation trips it faster, because the old heaps are less balanced and the biggest heap therefore reaches the budget sooner), which leads to
- larger survived bytes during the first GC (the survival rate is the same, but fuller heaps mean more survived bytes), which leads to
- a larger budget after the first GC (the next allocation budget is a function of survived bytes), which
- fails to stay within 2 * cache size, so the budget is not capped within a single cache size, which leads to
- decreased GC frequency, and therefore
- increased memory usage.
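To make the arithmetic of that chain concrete, here is a toy model (all numbers and the "next budget = 4 x survived" rule are hypothetical stand-ins, not the actual gc.cpp computation, which also applies cache-size caps):

```csharp
// Toy model of the feedback loop described above. Hypothetical numbers only.
using System;

const int heapCount = 8;
const double perHeapBudgetMB = 32;   // initial gen0 budget per heap
const double survivalRate = 0.30;    // identical in both scenarios

// averageFill = how full the average heap is when the fullest heap trips its
// budget and triggers the first GC (better balancing => closer to 1.0).
static double NextBudgetMB(double averageFill, int heaps, double budgetMB, double survival)
{
    double allocatedMB = heaps * budgetMB * averageFill; // MB allocated across all heaps before the first GC
    double survivedMB  = allocatedMB * survival;         // more allocated => more survived
    return survivedMB * 4;                               // hypothetical growth factor
}

Console.WriteLine($"unbalanced (avg 50% full): next budget ≈ {NextBudgetMB(0.50, heapCount, perHeapBudgetMB, survivalRate):F0} MB");
Console.WriteLine($"balanced   (avg 95% full): next budget ≈ {NextBudgetMB(0.95, heapCount, perHeapBudgetMB, survivalRate):F0} MB");
// A larger next budget means the next GC triggers later, i.e. decreased GC
// frequency and a higher steady-state working set.
```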
At this point, the root cause is known.
@cshung Again, the above is about .NET 5 - 7, not .NET 8. I'm glad that issue is root-caused, but it seems there's another one that wasn't addressed.
@davidfowl, indeed, I am aware of that. With what is given here, the .NET 5 to .NET 7 regression is neither reproducible locally, nor do we have any meaningful data to work with. There is not much I can do here. That's why I focused on the 7 to 8 regression; at least it is actionable.
@cshung @davidfowl @mangod9 If you need to move the provided sample application onto .NET 5, you could just set the target framework! Are you close to resolving this on .NET 7?
I had the same problem when stress testing the WeatherForecastController in the default template (returning 1000 items to make memory grow quickly). Managed memory looks good, but unmanaged memory keeps increasing until the machine becomes unresponsive.
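For reference, here is a sketch of what that repro looks like (hypothetical, since the exact code isn't posted here; this is roughly the default template's controller with the item count raised to 1000):

```csharp
// Hypothetical repro sketch: the default template's WeatherForecastController,
// with the item count raised from 5 to 1000 so memory grows quickly under load.
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries =
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot"
    };

    [HttpGet]
    public IEnumerable<WeatherForecast> Get() =>
        Enumerable.Range(1, 1000).Select(index => new WeatherForecast
        {
            Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        });
}

public class WeatherForecast
{
    public DateOnly Date { get; set; }
    public int TemperatureC { get; set; }
    public string? Summary { get; set; }
}
```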
I ran the test on the latest Pop!_OS (based on Ubuntu 22.04, with dotnet-sdk-7.0 from the Microsoft repo), from within Rider 2023.2 with the full memory profiler enabled.
In my production app, the behaviour is the same. After a lot of requests to the web API, total memory usage increases a lot and never goes down, until the server crashes.
In the graph, the memory stops growing only when the app is not receiving any requests, and keeps growing when new requests come in.
Thank you for contacting us. Due to a lack of activity on this discussion issue we're closing it in an effort to keep our backlog clean. If you believe there is a concern related to the ASP.NET Core framework, which hasn't been addressed yet, please file a new issue.
This issue will be locked after 30 more days of inactivity. If you still wish to discuss this subject after then, please create a new issue!
Please do not close this issue.
Such shade @oferze, no need for snark, the issue can be re-opened.
From past experience, closed tickets were not re-opened. Appreciate your cooperation. Deleted my comment.
That happens for multiple reasons, I promise you we don't sit around and close issues for fun (even though we have a lot of them).
I think @cshung made some progress on .NET 7 but it's still unclear if this is unexpected behavior or not in Server GC mode.
Thank you for contacting us. Due to a lack of activity on this discussion issue we're closing it in an effort to keep our backlog clean. If you believe there is a concern related to the ASP.NET Core framework, which hasn't been addressed yet, please file a new issue.
This issue will be locked after 30 more days of inactivity. If you still wish to discuss this subject after then, please create a new issue!
We recently upgraded from .NET 5 to .NET 7 and straight away we could see memory going up on all our servers and never being released, exactly the same as originally posted by @oferze in https://github.com/dotnet/aspnetcore/issues/45098. I don't know why that post is closed; it is not fixed in the Visual Studio version suggested there (17.4.1), and I have 17.5.5. It is not related to Visual Studio; it's .NET 7 for sure, as we never had this problem on .NET 5.
I agree with @oferze that this is a regression in .NET 7, since .NET 5 and 6 somehow do it better.
Our code hits the controller every 5 seconds, but regardless of what our code does or how big the objects are (which may or may not end up on the large object heap or in gen 1/2), the memory is not released by the w3wp.exe worker process, even for a simple endpoint.
@oferze demonstrates this well with the code below, but you will see the same thing even if you return an empty string and hit the endpoint continuously.
Simply create an endpoint, constantly refresh it (F5) once published or in debug mode, and watch the memory grow and not get released.
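As a minimal illustration of that "empty string" case (a hypothetical sketch, not @oferze's code referenced above):

```csharp
// Hypothetical minimal repro: a controller action that returns an empty string.
// Per the reports above, hitting it continuously is enough to watch the worker
// process's memory climb without being released under Server GC.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class PingController : ControllerBase
{
    [HttpGet]
    public string Get() => string.Empty;
}
```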