Closed yyjdelete closed 3 years ago
In the source code of `TaskCompletionSource` and `Task`, I've found only one handle creation (for `ManualResetEvent`), which is only used for the legacy `IAsyncResult` interface, not `await`.
Are the handles related to the thread pool? If so, changing the thread pool implementation would produce different behavior for them.
`Task.Delay` uses `TimerQueueTimer`. Since it's an internal implementation detail and may have changed since .NET Framework, it can behave differently.
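For context, `Task.Delay` is conceptually a `TaskCompletionSource` completed by a timer callback. Below is a minimal sketch of that idea using the public `System.Threading.Timer`; the real implementation uses the internal `TimerQueueTimer` directly, so this is an illustration, not the actual code:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

static class DelaySketch
{
    // Sketch only: a timer fires once after the requested interval and
    // completes a TaskCompletionSource, producing an awaitable delay.
    public static Task Delay(int milliseconds)
    {
        var tcs = new TaskCompletionSource<bool>();
        Timer timer = null;
        timer = new Timer(_ =>
        {
            tcs.TrySetResult(true);
            timer.Dispose(); // one-shot timer; clean up after firing
        }, null, milliseconds, Timeout.Infinite);
        return tcs.Task;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Delay(50).Wait();
        Console.WriteLine($"waited ~{sw.ElapsedMilliseconds} ms");
    }
}
```

Whether such a timer consumes an OS handle is exactly the implementation detail that differs between runtimes.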
Yes. After a brief comparison between source.dot.net and referencesource.microsoft.com:

- The .NET Framework implementation of `Task.Delay` creates a native handle every time, and disposes it right after completion.
- The .NET Core implementation calls `TimerQueue`, which interacts with the thread pool. You may be observing handles used by the thread pool; I don't see any handle creation directly inside `Delay` or `TimerQueue`.
I don't see wait handle creation in either of them.
This should be the really relevant part. It says that no handle should be created at all.
Tagging subscribers to this area: @tarekgh See info in area-owners.md if you want to be subscribed.
| Author | yyjdelete |
|---|---|
| Assignees | - |
| Labels | `area-System.Threading.Tasks`, `untriaged` |
| Milestone | - |
I don't see the cited behavior. I copy/pasted your code into a 64-bit .NET 5 app, ran it on Windows 10, and it's holding steady at ~180 handles per Task Manager.
~~Sorry, my mistake; the behavior is the same between netfx and netcore~~, and it may not be related to `System.Threading.Tasks` (see the code below, which uses only `Thread` and `Monitor`; the `Thread.Sleep` can be removed to make it run faster).

Maybe there is some difference after all: the original case, `while (true) Task.Delay(10).Wait();`, never (or only slowly) increases on net48 for me, but it does increase on net5.0. Both increase for the `Monitor` version.

It seems a GC is always needed to free the handles, but on net48 the maximum count is much smaller (up to 1 to 3k on net48 versus 30 to 50k on net5.0 on the same PC, after which the handles are collected and the count grows from ~1xx again). Is the GC in netcore less eager to free objects, keeping them alive for a longer time?
@stephentoub
I see the same behavior as yours when trying to reproduce it on another PC (Intel 4790K, 4 physical cores with HT, 8 logical cores) instead of the first one (Intel 4590, 4 physical cores without HT, 4 logical cores).
Maybe you can try changing `ProcessorAffinity` (see the code below) to 1 or 3 (0x0F does not work for me on the PC with 8 logical cores, while 3 works on both PCs), and check whether that reproduces it?
```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Runtime;
using System.Threading;
using System.Threading.Tasks;

namespace TestAnyThing
{
    class Program
    {
        static volatile object sObj = new object();

        static void Main()
        {
            using (var proc = Process.GetCurrentProcess())
            {
                Console.WriteLine(proc.ProcessorAffinity.ToString("X8"));
                Console.WriteLine(GCSettings.IsServerGC);   // False
                Console.WriteLine(GCSettings.LatencyMode);  // Interactive
                if (Environment.ProcessorCount >= 2)
                    proc.ProcessorAffinity = (IntPtr)0x03;
            }

            // Pulser thread: wakes up whichever object the main loop is waiting on.
            new Thread(() =>
            {
                while (true)
                {
                    var oldObj = sObj;
                    if (oldObj != null)
                    {
                        lock (oldObj)
                        {
                            Monitor.PulseAll(oldObj);
                        }
                        oldObj = null;
                    }
                    Thread.Sleep(10);
                }
            }).Start();

            int i = 0;
            var sw = Stopwatch.StartNew();
            while (true)
            {
                //Task.Delay(10).Wait();
                var obj = new object();
                sObj = obj;
                lock (obj)
                {
                    Monitor.Wait(obj);
                }
                sObj = obj = null;
                ++i;
                if ((i & 0x3F) == 0)
                {
                    //GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);
                    using (var proc = Process.GetCurrentProcess())
                    {
                        // net35: 4xx??, net48: 1k~3k, net5.0: 30~50k
                        Console.WriteLine($"{sw.ElapsedMilliseconds} ms used, {proc.HandleCount} handles active.");
                        sw.Reset();
                        sw.Start();
                    }
                }
            }
        }
    }
}
```
> Is the GC logic in netcore be less positive to free objects and will keep them for longer time?
There are plenty of improvements and tweaks that have gone into the GC, and that can include changes in budgets and what causes a GC to be invoked. Other changes in the stack can also influence what's being allocated and thus pressure on the GC. Net effect is GC timings are not constant across releases.
> the below code with only Thread and Monitor
Yes, every object you wait on will end up with its own sync block and associated event, which won't be reclaimable until that object is no longer referenced and can be collected. While there are complicated processes involved in determining which actual handle to use and how to perform the wait, at the end of the day waiting on a managed object results in waiting on a handle.
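This sync-block behavior can be observed directly: repeatedly waiting on fresh objects drives the process handle count up, and a forced GC lets the runtime scavenge the sync blocks (and their event handles) of the now-unreachable objects. A minimal sketch; exact counts are OS- and runtime-specific, and handle inflation is an implementation detail, so no particular numbers are guaranteed:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SyncBlockHandleDemo
{
    public static int Handles()
    {
        using (var proc = Process.GetCurrentProcess())
            return proc.HandleCount;
    }

    public static void Main()
    {
        int before = Handles();

        // Waiting on a plain object forces its sync block to be inflated;
        // on Windows that typically allocates an OS event that lives until
        // the object is collected.
        for (int i = 0; i < 2000; i++)
        {
            var obj = new object();
            lock (obj)
            {
                Monitor.Wait(obj, 0); // a timed-out wait is enough to inflate
            }
        }
        int peak = Handles();

        // A full blocking GC lets the runtime reclaim the sync blocks of the
        // unreachable objects, releasing their handles some time afterwards.
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, blocking: true);
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Console.WriteLine($"before={before}, peak={peak}, afterGc={Handles()}");
    }
}
```

On the machines discussed in this thread, `peak` climbs well above `before` and drops back after the forced collection, matching the "freed after GC, but maybe much later" behavior described here.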
I think the question has been answered so I'll close this. Please feel free to re-open if there's still an issue.
### Description
Execute the below program on net5.0/netcoreapp3.1/2.1.
Open taskmgr, procexp, or Process Hacker (with `show unnamed handles = true`), or any other tool that can monitor the number of HANDLEs used by the program, and watch the count (type=event) keep growing at about 20/s. It's not a leak, since the HANDLEs will still be freed after a GC, but that may be a long time later if no other allocation happens.
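The program referenced here isn't included in this excerpt. Based on the `while (true) Task.Delay(10).Wait();` loop quoted earlier in the thread, a minimal repro presumably looked something like the following (the iteration bound and output format are my assumptions; the original ran indefinitely):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // The original repro looped forever; this sketch stops after a few
        // hundred iterations so that it terminates.
        for (int i = 1; i <= 320; i++)
        {
            Task.Delay(10).Wait();
            if ((i & 0x3F) == 0)
            {
                using (var proc = Process.GetCurrentProcess())
                {
                    // On the affected runtimes this number keeps growing
                    // (~20/s) until a GC reclaims the event handles.
                    Console.WriteLine($"{proc.HandleCount} handles active.");
                }
            }
        }
    }
}
```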
### Configuration
### Regression?
~~Maybe, since it does not happen with net48.~~ No, a GC is always needed to free the handles, but on net48 the maximum count seems smaller (1 to 3k on net48 versus 30 to 50k on net5.0 on the same PC).
### Other information
I know a sync block in a task looks strange, but it's used by some old libraries to simulate `Thread.Sleep` on netstandard1.x before that API was available: https://github.com/Azure/DotNetty/blob/dev/src/DotNetty.Common/Concurrency/XThread.cs#L93