mgravell / Pipelines.Sockets.Unofficial

.NET managed sockets wrapper using the new "Pipelines" API

Unnecessary _queue.Count check in DedicatedThreadPoolPipeScheduler.RunWorkLoop #66

Closed vonzshik closed 2 years ago

vonzshik commented 2 years ago

Specifically, if (_queue.Count == 0):

https://github.com/mgravell/Pipelines.Sockets.Unofficial/blob/65a5d5c94efced298cdbe7e7979467c33211592d/src/Pipelines.Sockets.Unofficial/DedicatedThreadPoolPipeScheduler.cs#L176-L187

The only way it can be true is if the previous check (while (_queue.Count == 0)) exited due to _disposed = true. The if (_queue.Count == 0) check can be removed by changing if (_disposed) break; to if (_disposed) return;.
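
To make the shape concrete, here is a minimal sketch of the loop being discussed. This is a reconstruction, not the actual source; the member names (_sync, _queue, _disposed) and the pulse-on-dispose are assumptions mirroring the linked file.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Simplified stand-in for the DedicatedThreadPoolPipeScheduler worker loop;
// the member names mirror the linked source, the bodies are reconstructed.
class SchedulerSketch
{
    private readonly object _sync = new object();
    private readonly Queue<Action> _queue = new Queue<Action>();
    private bool _disposed;

    public void Dispose()
    {
        lock (_sync)
        {
            _disposed = true;
            Monitor.PulseAll(_sync); // wake the worker so it can observe _disposed
        }
    }

    private void RunWorkLoop()
    {
        while (true)
        {
            Action next;
            lock (_sync)
            {
                while (_queue.Count == 0)
                {
                    if (_disposed) break;   // current code: break out of the wait-loop
                    Monitor.Wait(_sync);    // surrender the lock until work is pulsed in
                }
                // the check in question: it can only be true when the wait-loop
                // above exited via the _disposed break
                if (_queue.Count == 0) return;
                next = _queue.Dequeue();
            }
            next(); // execute the work item
        }
    }
}
```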

mgravell commented 2 years ago

seems valid, but I doubt it is going to be super impactful; feel free to submit a PR!

mgravell commented 2 years ago

hmmm; that does actually change the semantics, though; right now: it will drain the queue even if it is disposed; with this change, it would exit immediately when disposed, without draining - at least that's how I read it; now, a conversation could be had about which is more correct, but that's definitely a change

vonzshik commented 2 years ago

> hmmm; that does actually change the semantics, though; right now: it will drain the queue even if it is disposed; with this change, it would exit immediately when disposed, without draining - at least that's how I read it;

It should work the exact same way.

  1. Scheduling and draining are both done under the same lock (sketched below):

https://github.com/mgravell/Pipelines.Sockets.Unofficial/blob/65a5d5c94efced298cdbe7e7979467c33211592d/src/Pipelines.Sockets.Unofficial/DedicatedThreadPoolPipeScheduler.cs#L119-L136

https://github.com/mgravell/Pipelines.Sockets.Unofficial/blob/65a5d5c94efced298cdbe7e7979467c33211592d/src/Pipelines.Sockets.Unofficial/DedicatedThreadPoolPipeScheduler.cs#L166-L176

  2. The only way to get out of the while loop is either a) _queue.Count > 0, or b) _disposed = true:

https://github.com/mgravell/Pipelines.Sockets.Unofficial/blob/65a5d5c94efced298cdbe7e7979467c33211592d/src/Pipelines.Sockets.Unofficial/DedicatedThreadPoolPipeScheduler.cs#L176-L182

In the case of _queue.Count > 0, the queue keeps being drained for as long as that holds. In the case of _disposed = true (which is only checked when _queue.Count == 0), the queue is already drained and no one can add new jobs to it (since scheduling is done under the same lock and also checks _disposed).
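
To illustrate the first point with the same sketch as above: the producing side takes the same lock and checks _disposed, so once the worker observes an empty queue with _disposed set under the lock, no new item can ever appear. (Again a reconstruction; the real Schedule may handle the disposed case differently.)

```csharp
// Added to the SchedulerSketch above: scheduling takes the same lock as the
// worker loop, and refuses new work once _disposed is set.
public void Schedule(Action callback)
{
    lock (_sync)
    {
        if (_disposed) return;  // simplification: just drop the work after dispose
        _queue.Enqueue(callback);
        Monitor.Pulse(_sync);   // wake the waiting worker
    }
}
```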

mgravell commented 2 years ago

The whole point of the Monitor.Wait is to surrender the lock, with the expectation that things will be added; as such, it is entirely possible that we could enter the while loop (with the queue empty), Wait, have an item added, and exit the while loop; with the current code, we would then proceed to Dequeue and Execute the item that got added regardless of when dispose happens, and we would only exit the code in the event that it is disposed and empty.

With the change, that would no longer be the case; in that same scenario, we would not process any final items.

What I perhaps could get behind is: reversing the order of the tests; for example:

if (_disposed && _queue.Count == 0) break; // or return, either would work

This would probably have the effect you're after (removing the _queue.Count check each iteration), but would not change the semantics.

mgravell commented 2 years ago

oh! right; ignore that; I thought you were talking about checking _disposed after the loop; let me re-read; that may indeed be fine

mgravell commented 2 years ago

apologies; my comment above related to the original text in the top post:

> The if (_queue.Count == 0) check can be removed by changing if (_disposed) break; to if (_disposed) return;

which is talking about the state after the loop; I see that the PR changes this to inside the loop, and importantly before the .Wait() - this simplifies things greatly
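
Concretely, the resulting shape (continuing the sketch from the top post; the actual PR may differ in detail):

```csharp
private void RunWorkLoop()
{
    while (true)
    {
        Action next;
        lock (_sync)
        {
            while (_queue.Count == 0)
            {
                if (_disposed) return;  // empty + disposed: nothing left to drain
                Monitor.Wait(_sync);    // items added while waiting still get drained
            }
            // reaching here guarantees _queue.Count > 0, so the old
            // post-loop if (_queue.Count == 0) check is no longer needed
            next = _queue.Dequeue();
        }
        next(); // execute the work item
    }
}
```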

vonzshik commented 2 years ago

> apologies; my comment above related to the original text in the top post:

Ah, my apologies then: I should have been more specific with the exact change I'm proposing 😄