Open madelson opened 4 years ago
Seems sensible. To go one step further, before pushing to the distributed lock layer, it might also be valuable to add a SystemDistributedLock layer in front of that as well, depending on the circumstances.
Question: is it a desire of this package to provide any kind of common IDistributedLock abstraction that all concrete providers will implement at a minimum? Or will it just provide different concrete lock implementations with no common interface, and leave it to application developers to build their own common abstractions around them if needed? If it's the latter, then I can quickly see myself wanting to create my own common IDistributedLock abstraction, plus some form of CompositeDistributedLock that implements the same interface but can wrap an inner, ordered list of lock providers (in-memory, system, SQL, etc.) that it must Acquire() a lock from first, before proceeding to the next one in the list: ultimately either timing out waiting for a lock at some level, or returning a final CompositeDisposable which wraps each of the IDisposable locks obtained.
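A rough sketch of what that composite wrapper could look like. Note that the `IDistributedLock` interface and `CompositeDistributedLock` type below are hypothetical illustrations of the idea, not part of the library's actual API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical minimal common abstraction (not the library's actual API).
public interface IDistributedLock
{
    // Returns a handle that releases the lock on Dispose, or null on timeout.
    IDisposable? TryAcquire(TimeSpan timeout);
}

// Acquires each inner lock in order (e.g. in-memory, then system, then SQL);
// if a later acquisition times out, releases everything already held.
public sealed class CompositeDistributedLock : IDistributedLock
{
    private readonly IReadOnlyList<IDistributedLock> _locks;

    public CompositeDistributedLock(params IDistributedLock[] locks) => _locks = locks;

    public IDisposable? TryAcquire(TimeSpan timeout)
    {
        var held = new List<IDisposable>();
        var deadline = DateTime.UtcNow + timeout;
        foreach (var @lock in _locks)
        {
            var remaining = deadline - DateTime.UtcNow;
            var handle = remaining > TimeSpan.Zero ? @lock.TryAcquire(remaining) : null;
            if (handle is null)
            {
                // Back out: release in reverse acquisition order.
                for (var i = held.Count - 1; i >= 0; --i) held[i].Dispose();
                return null;
            }
            held.Add(handle);
        }
        return new CompositeHandle(held);
    }

    private sealed class CompositeHandle : IDisposable
    {
        private readonly List<IDisposable> _handles;
        public CompositeHandle(List<IDisposable> handles) => _handles = handles;
        public void Dispose()
        {
            // Release in reverse order: outermost (e.g. SQL) last acquired, first released.
            for (var i = _handles.Count - 1; i >= 0; --i) _handles[i].Dispose();
        }
    }
}
```

Releasing in reverse acquisition order mirrors nested `lock` blocks and ensures the cheap local lock is held for the full duration of the expensive remote one.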
Question: is it a desire of this package to provide any kind of common IDistributedLock abstraction that all concrete providers will implement at a minimum?
This is something I'm honestly a bit unsure about and definitely eager to hear feedback. In my use-cases, distributed locks aren't easily substituted because the semantics are just different (for System vs. SQL, for example). On the other hand, I could see a world where someone wanted to plug in a different implementation for deploying to a cloud environment vs. local.
Yeah, I think it's valuable. When debugging locally I like to keep dependencies minimal, so using in-memory locks is perfect. When deployed to staging and production we'd want a proper distributed lock using SQL or Azure blob leases, etc. The only way to do that without horrible branching all over the place in the code is to create a common abstraction over the two, and make the concrete implementation configurable or chosen via DI.
I think one basic scenario that many people might look for is basically a replacement / substitute for this code, one that makes it global:

```csharp
lock (_lock)
{
    // blah
}
```
The next might be variations of Monitor.Enter, i.e. so you can do the same as above but supply timeouts, etc. I'd hope there would be a way to achieve that basic level of lock acquisition across all distributed lock implementations, but I haven't looked closely. If there is, then that could be the basis of a common interface between them.
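For reference, the local pattern being described is `Monitor.TryEnter` with a timeout, which is what `lock` compiles down to plus a timed wait. A common distributed interface would presumably mirror this timed try-acquire shape (the `DoWork` method here is just an illustration):

```csharp
using System;
using System.Threading;

class Example
{
    private static readonly object _lock = new object();

    static void DoWork()
    {
        // lock (_lock) { ... } compiles down to Monitor.Enter/Monitor.Exit.
        // The timeout variant that a distributed equivalent would mirror:
        if (Monitor.TryEnter(_lock, TimeSpan.FromSeconds(5)))
        {
            try
            {
                // critical section
            }
            finally
            {
                Monitor.Exit(_lock);
            }
        }
        else
        {
            // timed out waiting for the lock; handle contention
        }
    }
}
```

A distributed analogue could expose the same semantics as, say, a `TryAcquire(TimeSpan)` method returning an `IDisposable` handle (or null on timeout), so the `try/finally` collapses into a `using` block.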
Any progress? To be honest, I need an InMemoryDistributedLock.
Hi @darkflame0 I am not actively working on this feature, just considering it. Can you help me understand your use-case? There are a few different ones I've considered:
- In-memory named lock implementation where the named locks simplify coordination between different parts of the same application
- In-memory named lock implementation to stand in for a truly-distributed version while testing
- In-memory locks layered in front of another implementation to reduce resource usage (e.g. with threads on several machines all waiting on the same SqlDistributedLock, that's 100 database connections. However, if each thread first waited for its local in-memory lock and only could claim the remote lock after acquiring that, then we'd only need 10 database connections).

What are you trying to do?
@madelson
- In-memory named lock implementation where the named locks simplify coordination between different parts of the same application
- In-memory named lock implementation to stand in for a truly-distributed version while testing
Mainly because of these two points, I need an infrastructure that's in-memory at the beginning but can be made distributed in the future.
@darkflame0 have you considered using FileDistributedLock or (if you are on Windows) EventWaitHandleDistributedLock? These are both lightweight single-machine options.
@madelson Maybe FileDistributedLock is a proper alternative; I will try it. Thanks.
@madelson I found that FileDistributedLock does not support ReaderWriterLock or Semaphore, and EventWaitHandleDistributedLock does not support ReaderWriterLock. They are all incomplete.
@darkflame0 yeah different technologies offer different capabilities; I only implement when I think I can offer something robust and performant on top of the particular technology.
Looking into it, I think we can build a reasonable reader-writer implementation with wait handles based on this technique. Would that be useful to you?
It's useful to me. I'm on Windows.
But wait handles are unavailable on Linux. I still think there should be an in-memory implementation; it is beneficial both for testing and for single-application use.
@darkflame0 I'm working on some prototypes for these. If I put out a prerelease version would you be interested in trying it out?
@darkflame0 ok prerelease is out (https://www.nuget.org/packages/DistributedLock.ProcessScoped/1.0.0-alpha01). Let me know if you get a chance to give it a try. This doesn't support composite locking yet, just the process-scoped named lock types you were asking about.
Right now, if multiple threads in the same process try to claim a lock, we push all of that out to the distributed locking layer (e.g. SQL). We could reduce resource usage by first checking an internal lock (e.g. a SemaphoreSlim). We have to be very careful about how this interops with modes, though.

Current thinking on this: rather than build this into multiplexing, we could offer a wrapper lock for any IDistributedLock that would add an in-process synchronization layer.
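A minimal sketch of that wrapper idea, gating remote acquisition behind a local SemaphoreSlim so that only one thread per process holds a remote resource (e.g. a SQL connection) at a time. The `IDistributedLock` interface and `InProcessGatedLock` type here are hypothetical stand-ins, not the library's API:

```csharp
using System;
using System.Threading;

// Hypothetical minimal abstraction (not the library's actual API).
public interface IDistributedLock
{
    IDisposable? TryAcquire(TimeSpan timeout);
}

// Wraps any distributed lock with an in-process gate: threads contend
// locally first, and only the local winner talks to the remote system.
public sealed class InProcessGatedLock : IDistributedLock
{
    private readonly IDistributedLock _inner;
    private readonly SemaphoreSlim _localGate = new SemaphoreSlim(1, 1);

    public InProcessGatedLock(IDistributedLock inner) => _inner = inner;

    public IDisposable? TryAcquire(TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        if (!_localGate.Wait(timeout)) return null; // lost locally; no remote call made

        var remaining = deadline - DateTime.UtcNow;
        var remote = remaining > TimeSpan.Zero ? _inner.TryAcquire(remaining) : null;
        if (remote is null)
        {
            _localGate.Release(); // remote acquisition failed; let the next local waiter try
            return null;
        }
        return new Handle(this, remote);
    }

    private sealed class Handle : IDisposable
    {
        private readonly InProcessGatedLock _owner;
        private readonly IDisposable _remote;
        public Handle(InProcessGatedLock owner, IDisposable remote)
        {
            _owner = owner;
            _remote = remote;
        }
        public void Dispose()
        {
            _remote.Dispose();            // release the remote lock first
            _owner._localGate.Release();  // then admit the next local waiter
        }
    }
}
```

This only covers plain mutual exclusion; as noted above, read/write and upgradeable modes would need much more care than a single binary gate.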
Some thoughts on composite locks:
This should be more complex than just "take lock A, then take lock B". For example, let's say we have N local waiters for the lock, and then the first of those acquires the distributed lock. We shouldn't release the distributed lock until all N of those local waiters have either gotten to hold the lock or given up (new local waiters that arrive after we acquired the distributed lock shouldn't get to join in; otherwise we might hog the distributed lock indefinitely). The benefit of this scheme is that it prevents a service using composite locking from always losing out to a service that doesn't, and it also decreases the number of distributed lock operations (more efficient). The downside is that it reduces fine-grained interleaving of lock requests between services. Note that for R/W locks where we have both Write and UpgradeableRead, we have to be careful that the underlying lock is the right type.
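A simplified sketch of the bookkeeping that "drain the current waiters, then release" policy requires. All names here are hypothetical, and edge cases (such as a waiter abandoning between registering and the generation closing) are glossed over:

```csharp
using System;

// Tracks one "generation" of local waiters for a composite lock.
// Waiters register before the distributed lock is taken; once it is held,
// the generation closes and later arrivals must wait for the next cycle.
public sealed class DrainGeneration
{
    private readonly object _sync = new object();
    private int _registered; // waiters in the current (still open) generation
    private int _remaining;  // closed-generation waiters yet to finish
    private bool _closed;

    // Called by each local waiter before the distributed lock is attempted.
    // Returns false if the waiter arrived too late to join this generation.
    public bool TryJoin()
    {
        lock (_sync)
        {
            if (_closed) return false;
            _registered++;
            return true;
        }
    }

    // Called once the distributed lock is acquired: freeze the membership
    // so new local waiters can't extend the distributed hold indefinitely.
    public void Close()
    {
        lock (_sync)
        {
            _closed = true;
            _remaining = _registered;
        }
    }

    // Called as each registered waiter finishes its turn (or gives up).
    // Returns true when the last one is done, i.e. it is now safe to
    // release the distributed lock and start a fresh generation.
    public bool Leave()
    {
        lock (_sync) return --_remaining == 0;
    }
}
```

The generation boundary is what prevents the "hog the distributed lock indefinitely" failure mode described above while still amortizing one remote acquisition across many local waiters.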
We need to be careful with R/W locks if they have writer-jumps-reader behavior. We wouldn't want a scenario where writers are queued up behind a local read lock hold which is then queued up waiting for a distributed write lock hold to be released. We might want to only use the local lock for writes and upgradeable reads in order to prevent this.
We need to be careful with upgradeable reads. If we are holding an upgradeable read lock and try to upgrade, we might succeed locally but fail remotely, leaving ourselves with no way to back out of the local upgrade. To solve this, we can have the upgrade operation be remote-only.