abhijitparkhi1983 opened this issue 6 years ago
@abhijitparkhi1983 Could you share the repro project that contains this code? (clean it, zip it and attach it to this issue) Thanks
Sorry @jcouv, I won't be able to share a repro project because it's proprietary (IP) code. It would be helpful if I could get a few pointers.
We are encountering the same problem, at large scale!
Looking at the stack trace, I noticed that the allocated memory is unmanaged, so I wonder whether the GC is correctly aware of this memory pressure.
Looking at the source code: https://github.com/dotnet/runtime/blob/4f9ae42d861fcb4be2fcd5d3d55d5f227d30e723/src/libraries/System.Reflection.Metadata/src/System/Reflection/Internal/MemoryBlocks/NativeHeapMemoryBlock.cs
It looks like the allocation does not call GC.AddMemoryPressure(). I believe that's why the call can run out of memory, even though plenty of memory would have been available if the GC had run.
@DustinCampbell ?
This is the same situation we encountered here:
Note that we are using Roslyn to compile autogenerated C# code inside our application.
System.OutOfMemoryException: Insufficient memory to continue the execution of the program.
at System.Runtime.InteropServices.Marshal.AllocHGlobal(IntPtr cb)
Full stack trace:
at DPFM.QTG.Config.ConfigApplication.TerminateOnNastyExceptionHandler(Object sender, FirstChanceExceptionEventArgs args)
at System.Runtime.InteropServices.Marshal.AllocHGlobal(IntPtr cb)
at System.Reflection.Internal.NativeHeapMemoryBlock.DisposableData..ctor(Int32 size)
at System.Reflection.Internal.StreamMemoryBlockProvider.ReadMemoryBlockNoLock(Stream stream, Boolean isFileStream, Int64 start, Int32 size)
at System.Reflection.PortableExecutable.PEReader..ctor(Stream peStream, PEStreamOptions options, Int32 size)
at Microsoft.CodeAnalysis.ModuleMetadata.CreateFromStream(Stream peStream, PEStreamOptions options)
at Microsoft.CodeAnalysis.MetadataReference.CreateFromFile(String path, MetadataReferenceProperties properties, DocumentationProvider documentation)
Thanks @sharwell. Your workaround worked for us!
I still think that the proper fix is for NativeHeapMemoryBlock to call GC.AddMemoryPressure() and GC.RemoveMemoryPressure():
https://docs.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure?view=netcore-3.1
In fact, I cannot see any reason why one would ever call Marshal.AllocHGlobal() and not call GC.AddMemoryPressure(), to the point that one might wonder why AllocHGlobal() doesn't do that by default.
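For illustration, a minimal sketch of the pairing I mean; the NativeAllocation wrapper here is hypothetical, not Roslyn's actual NativeHeapMemoryBlock:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper showing the suggested pattern: every native allocation
// tells the GC how much unmanaged memory it represents, and the pressure is
// removed when the memory is freed.
internal sealed class NativeAllocation : IDisposable
{
    private readonly int _size;
    public IntPtr Pointer { get; private set; }

    public NativeAllocation(int size)
    {
        _size = size;
        Pointer = Marshal.AllocHGlobal(size);

        // Let the GC factor these unmanaged bytes into its collection scheduling.
        GC.AddMemoryPressure(size);
    }

    public void Dispose()
    {
        if (Pointer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(Pointer);
            GC.RemoveMemoryPressure(_size);
            Pointer = IntPtr.Zero;
        }
    }
}
```

With that bookkeeping, the GC at least knows about the unmanaged spike instead of seeing only the small managed heap.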
Since the code for NativeHeapMemoryBlock now lives in the .NET Core runtime, I have opened https://github.com/dotnet/runtime/issues/33812.
Further discussion on the dotnet/runtime issue seems to imply that the problem is with Roslyn not exposing the Dispose methods when calling MetadataReference.CreateFromStream and relying on finalizers instead, which is not reliable.
Instead I've followed the documentation here: https://docs.microsoft.com/en-us/dotnet/api/microsoft.codeanalysis.metadatareference.createfromstream?view=roslyn-dotnet
In particular:
The method eagerly reads the entire content of peStream into native heap. The native memory block is released when the resulting reference becomes unreachable and GC collects it. To decrease memory footprint of the reference and/or manage the lifetime deterministically use CreateFromStream(Stream, PEStreamOptions) to create an IDisposable metadata object and GetReference(DocumentationProvider, ImmutableArray<String>, Boolean, String, String) to get a reference to it.
For example (where AssemblyMetadataList is a disposable collection):
private static MetadataReference MetadataReferenceCreateFromFile(AssemblyMetadataList list, string filePath)
{
    //return MetadataReference.CreateFromFile(filePath);
    var assemblyMetadata = AssemblyMetadata.CreateFromFile(filePath);
    list.Add(assemblyMetadata);
    return assemblyMetadata.GetReference(filePath: filePath);
}
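AssemblyMetadataList is not a Roslyn type; it is just a small disposable collection owned by the caller. A minimal sketch of what such a collection might look like:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.CodeAnalysis;

// Hypothetical helper: owns the AssemblyMetadata instances backing the
// MetadataReferences so their native memory can be released deterministically.
internal sealed class AssemblyMetadataList : IDisposable
{
    private readonly List<AssemblyMetadata> _items = new List<AssemblyMetadata>();

    public void Add(AssemblyMetadata metadata) => _items.Add(metadata);

    public void Dispose()
    {
        foreach (var metadata in _items)
        {
            metadata.Dispose();
        }
        _items.Clear();
    }
}
```

The important part is to dispose the list only after the Compilation that uses the references is no longer needed, since the references are only valid while the metadata behind them is alive.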
We are getting similar issues with rising memory on a web API; we are using CSharpCompilation.CompileScript and Script.CreateDelegate. I did see the problem of new assemblies being created for each script, but using the new AssemblyLoadContext.Unload solves that.
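A rough sketch of that collectible-context approach (the Script APIs mentioned above handle loading internally, so the exact wiring may differ; the names and the emit/invoke details here are placeholders, not the exact code from this comment):

```csharp
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

static class ScriptHost
{
    // assemblyBytes is assumed to be the PE image produced by the Roslyn
    // compilation (e.g. emitted into a MemoryStream).
    public static void RunInCollectibleContext(byte[] assemblyBytes)
    {
        var context = new AssemblyLoadContext("script", isCollectible: true);
        try
        {
            using var stream = new MemoryStream(assemblyBytes);
            Assembly assembly = context.LoadFromStream(stream);
            // ... resolve and invoke the script entry point here ...
        }
        finally
        {
            // Unload is cooperative: it completes only once nothing references
            // the context, its assemblies, or objects created from them.
            context.Unload();
        }
    }
}
```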
But testing memory usage by calling the endpoint repeatedly, it seems the GC is having a hard time. This is probably due to missing calls in Roslyn to GC.AddMemoryPressure (topping out at 3 GB there):
[memory profiler graph: gray = unmanaged, blue = Gen0, red = Gen1, green = Gen2]
I added GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, true) at the end of the Roslyn section, and it "looks better":
But calling the GC like this is sketchy - it would be better if you could add the appropriate memory pressure to the GC.
I am going to trust the GC - it does eventually collect when the stars align, but I wonder how the GC would behave if it knew the memory was spiking between 0.5 and 5 GB (unmanaged) rather than the 40-200 MB (managed) it is currently seeing.
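Until something like that lands in Roslyn, a heavily hedged sketch of registering the pressure from the calling side, using the summed size of the referenced files as a rough stand-in for the native blocks Roslyn will allocate (the estimate, and the point where the pressure is removed, are both approximations, since the real release happens later in finalizers the caller cannot observe):

```csharp
using System;
using System.IO;
using System.Linq;

static class PressureHelper
{
    // Rough stopgap: bracket the Roslyn section with a memory-pressure hint
    // sized to the metadata it will load. referencePaths is the set of
    // assemblies about to become MetadataReferences (an assumption here).
    public static void RunRoslynSectionWithPressure(string[] referencePaths, Action roslynSection)
    {
        long estimatedNativeBytes = referencePaths.Sum(p => new FileInfo(p).Length);
        if (estimatedNativeBytes <= 0)
        {
            roslynSection();
            return;
        }

        GC.AddMemoryPressure(estimatedNativeBytes);
        try
        {
            roslynSection(); // compile, create delegates, run scripts, etc.
        }
        finally
        {
            // Approximate: the native blocks are really freed later by
            // finalizers, but keeping the pressure registered for the
            // section's duration at least lets the GC see the spike.
            GC.RemoveMemoryPressure(estimatedNativeBytes);
        }
    }
}
```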
Hitting the same issue... once the modules are loaded into memory, there is no direct way to call Dispose on the assembly metadata, which leaves a lot of unmanaged memory sitting around for a long time.
I believe the solution here is for Roslyn to call https://learn.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure with an amount corresponding to the unmanaged memory it allocates, because otherwise it doesn't look like a GC gets triggered.
OK, as a temporary measure I used a dirty hack (not recommended) to get the AssemblyMetadata from the compilation unit and dispose it explicitly, hoping to see a better solution inside Roslyn.
@birojnayak you may want to check whether you are holding any references to the AssemblyLoadContext, which will keep it alive even after Unload is called. I had a list of the assemblies it had loaded in there, which wasn't released when Unload was called; see here: https://github.com/dotnet/runtime/issues/44679
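A sketch of the check that follows from this advice, along the lines of the documented unloadability pattern: create and use the context inside a non-inlined method, hand back only a WeakReference, and verify it actually dies (the method and names here are illustrative). If it keeps reporting alive, something like that assembly list is still rooting the context.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Runtime.Loader;

static class UnloadCheck
{
    // Create and use the collectible context inside a non-inlined method so no
    // local in the caller keeps it reachable; return only a WeakReference.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static WeakReference LoadRunAndUnload(string assemblyPath)
    {
        var context = new AssemblyLoadContext("check", isCollectible: true);
        var assembly = context.LoadFromAssemblyPath(assemblyPath);
        // ... use the assembly here, without storing it or its objects anywhere
        //     that outlives this method (no long-lived lists of assemblies) ...
        context.Unload();
        return new WeakReference(context);
    }

    public static bool WasCollected(WeakReference weakContext)
    {
        // Unloading only completes after the GC sees the context as unreachable.
        for (int i = 0; weakContext.IsAlive && i < 10; i++)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
        return !weakContext.IsAlive;
    }
}
```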
Version Used: 2.6.1
Steps to Reproduce:
Expected Behavior: No exception should be thrown, and the project should open via the workspace.OpenProjectAsync(projectPath).Result; code.
Actual Behavior: Insufficient memory to continue the execution of the program.