Closed — MarkCiliaVincenti closed this 1 year ago
Overall: what problem are we trying to solve? Opserver is generally monitoring a fairly finite number of things, and the lack of cleanup (so that we ensure a lock is available as often as possible) is by design (a sketch of that pattern follows this comment). Cache population is very often spiky/highly concurrent, based on when user-defined polling intervals for hierarchies of things fire, whereas the implementation being proposed here hard-caps at "you can only poll 20 things at a time".
This isn't usable for Opserver. Some people are monitoring 5 things, some are monitoring tens of thousands of things. In short: the blocking collection would make us fall over very quickly :) I do appreciate the effort here, but I can't see switching to something like this for our use case.
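For context, here is a minimal sketch of the per-key locking pattern described above. This is hypothetical code, not Opserver's actual implementation: entries are created on first use and intentionally never removed, so a lock is always available for a given key.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical illustration of the "no cleanup by design" pattern discussed above;
// names and structure are assumptions, not Opserver's actual code.
public static class KeyedLocks
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();

    public static async Task<IDisposable> AcquireAsync(string key)
    {
        // GetOrAdd creates the semaphore on first use; it is never removed afterwards.
        var semaphore = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await semaphore.WaitAsync().ConfigureAwait(false);
        return new Releaser(semaphore);
    }

    private sealed class Releaser : IDisposable
    {
        private readonly SemaphoreSlim _semaphore;
        public Releaser(SemaphoreSlim semaphore) => _semaphore = semaphore;
        public void Dispose() => _semaphore.Release();
    }
}
```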
Hi, the pool size is not a limit on concurrency. It exists simply to reduce the allocations from creating and disposing SemaphoreSlim instances. Should the pool find itself empty, it creates a new SemaphoreSlim instance rather than waiting for one to become available (a rough sketch follows this comment).
If some people are monitoring tens of thousands of things, that means quite a lot of memory consumed and never cleaned up.
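A rough sketch of the pooling behaviour described above, illustrative only and not the proposed library's actual code (the pool size of 20 is taken from the earlier comment, purely as an assumption): renting from an empty pool allocates a fresh SemaphoreSlim instead of blocking, so the pool size bounds reuse, not concurrency.

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Illustrative pool; not the actual implementation of the proposed library.
public sealed class SemaphoreSlimPool
{
    private readonly ConcurrentBag<SemaphoreSlim> _pool = new();
    private readonly int _maxPooled;

    public SemaphoreSlimPool(int maxPooled = 20) => _maxPooled = maxPooled;

    // Take a pooled semaphore, or allocate a new one if the pool is empty.
    // Nothing ever waits on the pool itself, so this does not cap concurrency.
    public SemaphoreSlim Rent() =>
        _pool.TryTake(out var semaphore) ? semaphore : new SemaphoreSlim(1, 1);

    // Callers should only return semaphores that are no longer held.
    // Keep the instance for reuse if there is room; otherwise dispose it.
    public void Return(SemaphoreSlim semaphore)
    {
        if (_pool.Count < _maxPooled)
        {
            _pool.Add(semaphore);
        }
        else
        {
            semaphore.Dispose();
        }
    }
}
```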
@MarkCiliaVincenti A single SemaphoreSlim is a very, very tiny piece of cache in play at those levels compared to everything else about the thing being monitored, which is also in memory. And we're going to allocate when we poll it again, as with everything in play here. There isn't a single thing that actually goes away indefinitely. I do not want to add the overhead, complication, or blocking this library change would bring. I'm sure it's great when the problems you're aiming to solve are in play, but that's not the behavior here. Again, I do appreciate the thought, but it's not the right fit for Opserver.
This PR fixes an issue whereby ConcurrentDictionaries are being populated and never cleaned up.
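For readers landing here, this is a minimal sketch of one way the cleanup described above can work: reference-counting the keyed locks so the dictionary entry is removed when the last holder releases it. It illustrates the general technique only and is not necessarily how this PR implements it; all names here are invented for the example.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Illustrative reference-counted keyed locker; not necessarily the PR's implementation.
public sealed class CleanedUpKeyedLocker
{
    private sealed class Entry
    {
        public readonly SemaphoreSlim Semaphore = new(1, 1);
        public int RefCount;
    }

    private readonly ConcurrentDictionary<string, Entry> _entries = new();
    private readonly object _sync = new();

    public async Task<IDisposable> LockAsync(string key)
    {
        Entry entry;
        // Take or create the entry and bump its ref count atomically with respect
        // to removal, so a concurrent release can't delete an entry we just found.
        lock (_sync)
        {
            entry = _entries.GetOrAdd(key, _ => new Entry());
            entry.RefCount++;
        }
        await entry.Semaphore.WaitAsync().ConfigureAwait(false);
        return new Releaser(this, key, entry);
    }

    private void Release(string key, Entry entry)
    {
        entry.Semaphore.Release();
        lock (_sync)
        {
            // Once the last interested caller is done, the dictionary entry is
            // removed, which is the cleanup this PR is describing.
            if (--entry.RefCount == 0)
            {
                _entries.TryRemove(key, out _);
            }
        }
    }

    private sealed class Releaser : IDisposable
    {
        private readonly CleanedUpKeyedLocker _owner;
        private readonly string _key;
        private readonly Entry _entry;

        public Releaser(CleanedUpKeyedLocker owner, string key, Entry entry) =>
            (_owner, _key, _entry) = (owner, key, entry);

        public void Dispose() => _owner.Release(_key, _entry);
    }
}
```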