samsondav / rihanna

Rihanna is a high performance postgres-backed job queue for Elixir
MIT License

Clusterwide lock mode #47

Open samsondav opened 5 years ago

samsondav commented 5 years ago

See https://github.com/samphilipd/rihanna/issues/46 for discussion.

This would improve performance when Rihanna runs on a single Erlang cluster.

lpil commented 5 years ago

This architecture would need to perform some kind of durable state management that can handle nodes and workers going down, including the master. I'm sceptical that this would be orders of magnitude faster than the same thing implemented in Postgres, though I have no data to back up that feeling.

How do you intend the state management and failure detection to work with this design?

samsondav commented 5 years ago

Nope, no durable state management is required. If a node goes down, the dispatcher receives a DOWN message and simply retries the job on a new node.
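
Roughly, I imagine something like this (a minimal sketch with hypothetical module names, DispatcherSketch and WorkerSup, and a simplified job shape, not actual Rihanna code):

```elixir
defmodule DispatcherSketch do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  def init(state), do: {:ok, state}

  def dispatch(job), do: GenServer.cast(__MODULE__, {:dispatch, job})

  def handle_cast({:dispatch, job}, refs) do
    # Run the job on any connected node; assumes a Task.Supervisor named
    # WorkerSup is running on every node.
    node = Enum.random([Node.self() | Node.list()])
    {:ok, pid} = Task.Supervisor.start_child({WorkerSup, node}, fn -> perform(job) end)

    # Monitor the worker; if it or its node dies we get a :DOWN message.
    ref = Process.monitor(pid)
    {:noreply, Map.put(refs, ref, job)}
  end

  def handle_info({:DOWN, ref, :process, _pid, reason}, refs) do
    {job, refs} = Map.pop(refs, ref)

    # Any abnormal exit (a crash, or :noconnection when the node disconnects)
    # means the job may not have completed, so retry it elsewhere.
    if job && reason != :normal, do: dispatch(job)

    {:noreply, refs}
  end

  defp perform(%{mod: m, fun: f, args: a}), do: apply(m, f, a)
end
```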

lpil commented 5 years ago

What happens when the global lock process goes down?

What happens when a node is isolated from the global lock process by a network partition?

samsondav commented 5 years ago

First scenario:

A new singleton will be booted, which will re-acquire the global lock and start reading jobs again. Some jobs may be executed twice.

Second scenario:

Erlang's built-in monitoring will detect the network partition, treat that node as down, and assume none of the jobs it was running have completed. The global lock process will re-dispatch those jobs to a node that is alive. Some jobs may be executed twice.
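
To be concrete: the per-worker monitors in the earlier sketch already see this, because a monitor on a remote pid fires with reason :noconnection when its node is lost. An explicit node watcher would look roughly like this (illustrative only; assume running is a list of {job, node} pairs the dispatcher keeps up to date, and DispatcherSketch.dispatch/1 is the re-dispatch entry point from the earlier sketch):

```elixir
defmodule NodeWatchSketch do
  use GenServer

  def start_link(running), do: GenServer.start_link(__MODULE__, running, name: __MODULE__)

  def init(running) do
    # Subscribe to node up/down notifications from the distribution layer.
    :net_kernel.monitor_nodes(true)
    {:ok, running}
  end

  def handle_info({:nodedown, node}, running) do
    {lost, rest} = Enum.split_with(running, fn {_job, n} -> n == node end)

    # The partitioned node may still be running these jobs, so re-dispatching
    # them is what makes execution at-least-once rather than exactly-once.
    Enum.each(lost, fn {job, _node} -> DispatcherSketch.dispatch(job) end)
    {:noreply, rest}
  end

  def handle_info({:nodeup, _node}, running), do: {:noreply, running}
end
```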

lpil commented 5 years ago

What happens to the workers? Are they all killed by the exit from the global lock process? In cloud environments network partitions are common (Erlang was designed for more reliable networks), so this may cause some disruption. I'm not sure how fast global links are; it would be cool to test this.

In the network partition situation, if we're using global processes we'll end up with at least two nodes running the global lock process. Would this be safe? If we're still running the same SQL query against the database it would be, but I'm unsure whether that was the intention.

All sounds fun so far :) I'd suggest that (at some point) it'd be worth doing some preliminary benchmarking so we can get a better understanding.

samsondav commented 5 years ago

Workers on the partitioned node may continue to run, which is why some jobs may execute twice. Guaranteeing at-least-once execution was always a conscious design choice in Rihanna, hence this failure mode.

We will never have two nodes running the global lock process, because postgres will only ever grant the advisory lock once. In the event of a netsplit and two master nodes occurring, one of them will fail to take the lock and simply do nothing.
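
That guarantee falls straight out of Postgres advisory locks. A sketch (assuming a Postgrex connection and an arbitrary lock key, not the key Rihanna actually uses):

```elixir
# Only one session in the whole database can hold a given advisory lock at a
# time; every other caller gets false back immediately.
{:ok, conn} = Postgrex.start_link(database: "rihanna_dev")

lock_key = 42

case Postgrex.query!(conn, "SELECT pg_try_advisory_lock($1::bigint)", [lock_key]) do
  %Postgrex.Result{rows: [[true]]} ->
    # We are the singleton dispatcher: start reading and dispatching jobs.
    :acquired

  %Postgrex.Result{rows: [[false]]} ->
    # Another node holds the lock. Do nothing; Postgres releases the lock
    # automatically when that session dies, so we can simply try again later.
    :stand_by
end
```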

lpil commented 5 years ago

It's the same guarantee, but the likelihood of multiple delivery would increase substantially; that's something to document well.

We will never have two nodes running the global lock process, because postgres will only ever grant the advisory lock once. In the event of a netsplit and two master nodes occurring, one of them will fail to take the lock and simply do nothing.

Would there be an additional database lock then?

If that's the case, we wouldn't even need the same iterative query. I feel like there wouldn't actually be that much code shared with the current Rihanna.

samsondav commented 5 years ago

Yes, the probability of multiple executions will be slightly higher, but not unmanageably so. Unexpected netsplits and/or node deaths are not that common, especially if a graceful exit with job draining is implemented. I think this is probably an unavoidable cost of increased throughput.

As for the lock, I'm not sure you have fully understood my original proposal. In this new scenario, there will be one and exactly one advisory lock taken by one global dispatcher. Workers will not be required to take any locks at all.

It will not need the same query; it can work with a simple lockless SELECT ... LIMIT, which would be hyper fast. There will be some shared code around enqueuing and deleting jobs. I imagine it being implemented as a separate dispatcher module, so the user can choose which one they boot in their supervision tree.

e.g. MultiDispatcher or SingletonDispatcher
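
For illustration, the read path of the SingletonDispatcher could be as simple as this (table and column names follow the existing rihanna_jobs table but should be treated as illustrative; conn and batch_size are assumed to be in scope):

```elixir
# No FOR UPDATE, no SKIP LOCKED, no per-job advisory locks: only the
# lock-holding dispatcher ever reads, so a plain SELECT ... LIMIT is enough.
%Postgrex.Result{rows: jobs} =
  Postgrex.query!(
    conn,
    """
    SELECT id, term
    FROM rihanna_jobs
    ORDER BY id ASC
    LIMIT $1
    """,
    [batch_size]
  )
```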

lpil commented 5 years ago

especially if a graceful exit with job draining is implemented.

I think that in the event of the dispatcher/lock process dying we want to brutally kill workers rather than shutting them down gracefully; otherwise multiple delivery is guaranteed.
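
Something like this in the supervision tree would express that policy (illustrative names, reusing the sketch modules from above): start the dispatcher before the worker supervisor under :rest_for_one so the dispatcher's death takes the workers down with it, and start each worker with shutdown: :brutal_kill so it is killed outright rather than drained.

```elixir
# If DispatcherSketch exits, :rest_for_one also shuts down every child
# started after it, i.e. the worker supervisor and all of its workers.
children = [
  DispatcherSketch,
  {Task.Supervisor, name: WorkerSup}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :rest_for_one)

# Marking each worker :brutal_kill makes shutdown an immediate kill rather
# than a graceful :shutdown followed by a timeout.
Task.Supervisor.start_child(
  {WorkerSup, Node.self()},
  fn -> IO.puts("pretend this is a job") end,
  restart: :temporary,
  shutdown: :brutal_kill
)
```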

As for the lock, I'm not sure you have fully understood my original proposal. In this new scenario, there will be one and exactly one advisory lock taken by one global dispatcher. Workers will not be required to take any locks at all.

I see, much clearer now :)