dprez8 opened 3 weeks ago
Hello,
This is tricky: first you ask the mutex whether the key is locked, and the mutex replies "no", but before you attempt to lock, another process could take the lock in between. So that is not something I would rely on.
I think you can just use the `lock/2` function like this:
```elixir
Task.start(fn ->
  case Mutex.lock({:global, MC.Mutex}, key) do
    {:ok, lock} ->
      long_lasting_operation(sleep_time)
      Mutex.release({:global, MC.Mutex}, lock)
      :ok

    {:error, :busy} ->
      :ok
  end
end)
```
The downside is that it will, most of the time, spawn a task that immediately terminates, but if you do not have to copy a huge state into the spawned process, it's fine.
I guess I should have a way to give a lock away, something like this:
```elixir
case Mutex.lock({:global, MC.Mutex}, key) do
  {:ok, lock} ->
    {:ok, pid} =
      Task.start(fn ->
        {:ok, _} = Mutex.inherit({:global, MC.Mutex}, lock)
        long_lasting_operation(sleep_time)
        :ok = Mutex.release({:global, MC.Mutex}, lock)
      end)

    :ok = Mutex.give_away({:global, MC.Mutex}, lock, pid)

  {:error, :busy} ->
    :ok
end
```
As a side note, I am not sure you should use a mutex with several nodes. If you have a netsplit or are deploying nodes in a rolling fashion, you may have duplicate mutexes registered as `{:global, name}` for a couple of seconds or more, leading to inconsistencies.
@lud It would be a good option if a lock could be inherited. That way, you could prevent two processes from entering the if statement simultaneously. However, I still believe it would be helpful to know the state of the GenServer and which keys are locked at any given moment to make more informed decisions.
Regarding the use of a mutex across several nodes: if a netsplit occurs and the nodes reconnect, one of the GenServers would terminate due to a naming conflict, and only one would remain. In my specific case, this wouldn't have a significant impact, as I only need to insert data into a Nebulex cache, and that can be done by any node, but not by two connected nodes at the same time, since before inserting the data in the cache it deletes all previous keys.
Hey! Sorry, but I am not going to implement a function to list locked keys, as it would suggest ways of dealing with the mutex that I consider invalid. There is an inherent concurrency problem here: the list of keys you get can differ from what is on the mutex at the moment you use it. That is not suitable for a library that helps deal with concurrency issues in a concurrent language :)
Now, if you really want that feature, you can call `Map.keys(:sys.get_state({:global, MC.Mutex}).locks)`; it will return the list of currently locked keys at the moment the mutex receives the call.
If what you need is to register processes that are working, and to know whether there is a process for a given key, you may use a Registry instead of a mutex; it could be more suitable.
Or horde/swarm if you want to have that on several nodes.
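To illustrate the Registry idea, here is a minimal sketch using Elixir's built-in `Registry` with unique keys (the `MC.WorkRegistry` name and `:my_key` are made up for the example): registering under a key succeeds only for the first process, and the entry disappears automatically when that process exits.

```elixir
# Start a unique-keys registry (normally placed under a supervision tree).
{:ok, _} = Registry.start_link(keys: :unique, name: MC.WorkRegistry)

# A worker registers itself under the key; registration fails if some
# process is already registered (i.e. already working) for that key.
case Registry.register(MC.WorkRegistry, :my_key, nil) do
  {:ok, _owner} ->
    # We are the only process registered for :my_key. Do the work;
    # the entry is removed automatically if this process dies.
    :ok

  {:error, {:already_registered, _pid}} ->
    # Someone else is already working on this key.
    :busy
end

# Any process can check whether a worker currently exists for a key:
Registry.lookup(MC.WorkRegistry, :my_key)
```

Because the registry entry is tied to the worker's lifetime, a cancelled request that kills the worker frees the key automatically, much like the mutex does.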
I'm going to implement the handover mechanism, it looks like fun!
Well @lud, I think `Map.keys(:sys.get_state({:global, MC.Mutex}).locks)` could be helpful. Thanks for your response and for keeping me on track!
Hi! I was dealing with an issue using the library:
Let's suppose this scenario:
If I make an HTTP request using Phoenix and Cowboy and the request process holds a lock on a key, then cancelling the request kills the process and the lock is freed. If I use Task.start instead, and that task is the one holding the key, it doesn't die when I cancel the request, but incoming processes will then be waiting to execute the same function inside the mutex (let's suppose I only want to execute it once, with the first arriving process).
I would like to know the state of the mutex, and whether a key is locked, to avoid executing `long_lasting_operation()`.
I want to do something like this:
The await function just keeps waiting until the key is freed without a sync notification.
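For what it's worth, one shape such a helper could take, built only on the public `lock/2` and `release/2`, is a polling loop (a hypothetical sketch, not part of the library; `AwaitSketch` and `await_free/3` are made-up names, and the approach suffers from the same check-then-act race discussed above):

```elixir
defmodule AwaitSketch do
  # Hypothetical helper: poll lock/2 until the key is free, without
  # keeping the lock. Between release/2 returning and the caller acting,
  # another process may take the lock again, so this only says the key
  # was free at some recent instant.
  def await_free(mutex, key, interval \\ 50) do
    case Mutex.lock(mutex, key) do
      {:ok, lock} ->
        # The key was free; release immediately and report it.
        Mutex.release(mutex, lock)
        :ok

      {:error, :busy} ->
        Process.sleep(interval)
        await_free(mutex, key, interval)
    end
  end
end
```

Whether polling is acceptable depends on how long the key stays locked; a blocking call that acquires the lock (and releases it right away) avoids busy-waiting on the mutex itself.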
I would add these functions to the mut.ex file:
Thanks for reading! I'll be attentive.