seanamorosoamtote opened this issue 1 week ago
Tagging subscribers to this area: @mangod9 See info in area-owners.md if you want to be subscribed.
To check whether this is related to the new GC in 8.0, can you try running with `DOTNET_GCName=libclrgc.so` to see whether memory usage stays closer to the .NET 6 numbers?
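For reference, a minimal sketch of how that variable might be set for a comparison run (assuming the app is launched with `dotnet MyApp.dll`; `MyApp.dll` is a placeholder name):

```shell
# Opt back into the segment-based GC that .NET 6 used; .NET 8 ships it as
# libclrgc.so alongside the new default regions-based GC.
export DOTNET_GCName=libclrgc.so
echo "DOTNET_GCName=$DOTNET_GCName"
# Then start the app as usual and watch RSS, e.g.:
#   dotnet MyApp.dll   # "MyApp.dll" is a placeholder
```

In a Kubernetes deployment the same variable can go in the container's `env:` section so the comparison runs under the same memory limit.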
Description
The issue is very similar to https://github.com/dotnet/runtime/issues/95922, except we have upgraded all the way to the current release of 8.0 as of 11/18/2024, which is 8.0.10. We noticed that we started getting `OOMKilled` on our deployments with `linux/amd64` using the base image `mcr.microsoft.com/dotnet/sdk:8.0.11`. We had defaulted our memory limits to `512Mi`, which was sufficient for these services on .NET 6. Now we have to raise the memory limit to `1Gi`, which seems to be sufficient, but we still can't understand where the memory is going. Analysis seems to show that a large amount of unmanaged memory is being reserved and never garbage collected or freed.
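One knob worth checking alongside the limit bump: in a container, .NET derives its GC heap hard limit from the container memory limit (75% of it by default), and that fraction can be tuned explicitly. A sketch using the documented `DOTNET_GCHeapHardLimitPercent` setting (note that numeric GC settings passed as environment variables are interpreted as hexadecimal):

```shell
# Cap the managed heap at 50% of the container memory limit instead of the
# default 75%, leaving more headroom for native allocations.
# GC numeric env-var values are parsed as hex: 0x32 == 50.
export DOTNET_GCHeapHardLimitPercent=0x32
echo "DOTNET_GCHeapHardLimitPercent=$DOTNET_GCHeapHardLimitPercent"
```

This only bounds the managed heap, so if the growth really is unmanaged memory it won't prevent the OOMKill by itself, but it can help isolate which side is growing.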
Configuration
Deployed in Google Cloud Kubernetes using the base image `mcr.microsoft.com/dotnet/sdk:8.0.11`.
Regression?
Maybe? If nothing else, our application requires more baseline memory than it did on .NET 6, and being able to understand why would be helpful.
Data
Prior to analyzing, we would deploy with `512Mi`, and once our application started the integration test phase where it would be "live", it would get OOMKilled. Again, raising the limit to `1Gi` seems to resolve the issue, but it seems odd that we would have to increase it just for .NET 8.

Heaptrack analysis (notice the peak RSS is 1.1GB):

There is some leakage here, but nothing that seems to account for 1GB:

I'm having an issue attaching the heaptrack log itself; I'll try again after this issue is created.
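For anyone trying to reproduce the measurement, this is roughly how a heaptrack capture of a .NET app is taken (a sketch; `MyApp.dll` and the profile file name are placeholders, and heaptrack must be installed in the container):

```shell
# Record native (malloc/mmap) allocations while the app runs; heaptrack writes
# a compressed profile named heaptrack.<binary>.<pid>.zst in the working dir.
RECORD="heaptrack dotnet MyApp.dll"
# Summarize it afterwards; the text report includes peak RSS and top allocators.
REPORT="heaptrack_print heaptrack.dotnet.12345.zst"
printf '%s\n%s\n' "$RECORD" "$REPORT"
```

Running the capture during the integration-test phase, where the OOMKill actually happens, should give the most representative profile.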
WinDbg shows a large amount of memory in the `PAGE_READWRITE` state. In WinDbg I also looked at the largest items on the managed heap, so this seems to point more toward a native memory issue?
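Since both heaptrack and WinDbg point at native memory, a quick cross-check on the Linux side is to sum the kernel's per-mapping accounting from `/proc/<pid>/smaps`. A sketch (it uses the current shell's own PID just so it runs as-is; substitute the dotnet process's PID in practice):

```shell
# Sum resident (Rss) and anonymous (Anonymous) memory across all mappings.
# Writable anonymous mappings are what show up as PAGE_READWRITE in WinDbg.
PID=$$   # placeholder: use the dotnet process id in practice
awk '/^Rss:/ { rss += $2 } /^Anonymous:/ { anon += $2 }
     END { printf "RSS: %d kB, anonymous: %d kB\n", rss, anon }' "/proc/$PID/smaps"
```

Comparing that total against the GC heap size reported by `dotnet-counters monitor` on the same process would show directly how much of the RSS is unmanaged.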