Azure / hpcpack

The repo to track public issues for Microsoft HPC Pack product.
MIT License

HPC SDK's Scheduler leaks connections even after disposal (beta SDK) #38

Closed: mwelsh1118 closed this issue 5 months ago

mwelsh1118 commented 6 months ago

Problem Description

In a server application, we connect to the HPC head node via the HPC SDK on-demand (so we open and close connections frequently). This worked fine in .NET 4.8 & HPC SDK 6.2.7756. After upgrading to .NET 8 & HPC SDK 6.3.8022-beta, we are seeing the number of active connections to the head node increase dramatically.

Steps to Reproduce

I replicated the problem via a unit test:

```csharp
foreach (var i in Enumerable.Range(0, 100))
{
    var context = new StoreConnectionContext("headnode", CancellationToken.None);
    using var store = await SchedulerStore.ConnectAsync(context, CancellationToken.None, ConnectMethod.WCF).ConfigureAwait(false);
    await Task.Delay(100).ConfigureAwait(false);
}
```

Expected Results

The number of connections to the head node should remain constant, because we are connecting and then disposing the connection (via the using statement).
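For reference, this expectation rests on the standard dispose contract: Dispose should close the underlying channel so each connect/dispose cycle releases its TCP connection. A minimal sketch of that contract, using hypothetical names (not the SDK's actual internals):

```csharp
using System;
using System.Net.Sockets;

// Sketch of the disposal contract the test relies on. HeadNodeConnection
// is a hypothetical illustration, not a type from the HPC SDK.
public sealed class HeadNodeConnection : IDisposable
{
    private readonly TcpClient _client;

    public HeadNodeConnection(string host, int port)
    {
        _client = new TcpClient(host, port); // opens one TCP connection
    }

    public void Dispose()
    {
        // Closing the client releases the socket. If Dispose skipped this
        // (or cached the channel without ever closing it), each
        // connect/dispose cycle would leak one connection -- the symptom
        // reported above.
        _client.Dispose();
    }
}
```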

Actual Results

When running netstat on the head node, we see the number of open connections increase as the unit test runs. This same unit test does not cause increased connections when run against .NET 4.8 & the last production SDK release.

Additional Comments

In our application, we're using the Scheduler object instead of the SchedulerStore, but the behavior is the same (since the Scheduler is creating/disposing the SchedulerStore internally).
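For completeness, the equivalent on-demand pattern with the higher-level API looks roughly like this. This is a sketch, assuming the `Microsoft.Hpc.Scheduler` namespace where `Scheduler` implements `IDisposable` and `Connect` opens the underlying store connection:

```csharp
using Microsoft.Hpc.Scheduler;

// Rough equivalent of the repro using the Scheduler object instead of
// SchedulerStore directly (assumes Scheduler implements IDisposable).
for (int i = 0; i < 100; i++)
{
    using (var scheduler = new Scheduler())
    {
        scheduler.Connect("headnode");
        // ... issue scheduler calls ...
    } // Dispose should tear down the internal SchedulerStore connection
}
```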

YutongSun commented 6 months ago

Thanks @mwelsh1118. We are working on a fix for it.

YutongSun commented 6 months ago

@mwelsh1118, we've fixed the port leak bug in the .NET SDK and will release a new beta version in the coming week.

mwelsh1118 commented 5 months ago

Seems to be working now. Thanks!