smpallen99 / coherence

Coherence is a full-featured, configurable authentication system for Phoenix

Clustered Credential Store #288

Open · jesseshieh opened this issue 7 years ago

jesseshieh commented 7 years ago

Hi! Thanks for an awesome library. I just wanted to make a feature suggestion.

I think it's great that there is the in-memory CredentialStore Server and the database persistence option, but what I would really love to have is an in-memory CredentialStore Server that works across a cluster of nodes.

Right now, it looks like each node gets its own CredentialStore.Server, and the session data within each node is not shared with the other nodes. This means that an AJAX request that needs authentication will succeed or fail depending on which node it happens to hit. Database persistence solves this problem but, as you know, has tradeoffs.

I think the gist of it is to change

GenServer.call @name, {:get_user_data, credentials}

to something like

GenServer.call {@name, node}, {:get_user_data, credentials}

where node is chosen by some form of node partitioning, like consistent hashing, or even just a single designated master node.

Anyway, that would be very useful to me! I'll use database persistence for now.

smpallen99 commented 7 years ago

Great suggestion. I'm open to a PR. Otherwise, perhaps someone from the community can help out.

jesseshieh commented 7 years ago

I'll take a stab at this, but I'm not sure exactly when I'll get to it. My plan is to use libring as a dependency and do something like

node = HashRing.Managed.key_to_node(:myring, credentials)
GenServer.call {@name, node}, {:get_user_data, credentials}

which should still work fine in the single-node case.
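libring can also watch the cluster and rebalance the ring as nodes join and leave, so membership wouldn't need manual management. A rough, untested sketch of the whole thing (:myring is just a placeholder ring name):

# config/config.exs -- let libring manage ring membership,
# adding and removing nodes as the cluster changes
config :libring,
  rings: [myring: [monitor_nodes: true]]

# client API of the credential store
def get_user_data(credentials) do
  # route each lookup to the node that owns this key on the ring;
  # with a single node, key_to_node/2 always returns the local node
  node = HashRing.Managed.key_to_node(:myring, credentials)
  GenServer.call({@name, node}, {:get_user_data, credentials})
end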

smpallen99 commented 7 years ago

I prefer not to add additional dependencies, especially for a feature that will not be used by many users.

I think the simplest solution would be to dedicate one node as the master store and RPC to it from the other nodes on each authentication request. You could use config to define the master. However, this would not give you the best performance. A replicated data store would offer the best performance, but would be more difficult to implement.
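Roughly this (the :credential_master config key and the function placement are illustrative, not something Coherence has today):

# config/config.exs -- name the master node explicitly (illustrative key)
config :coherence, credential_master: :"app@master-host"

# every node forwards lookups to the configured master, paying one
# network round trip per authentication request
def get_user_data(credentials) do
  master = Application.get_env(:coherence, :credential_master)
  GenServer.call({@name, master}, {:get_user_data, credentials})
end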

I wonder if we could use Phoenix Presence for the replication, where each login and logout would be a presence change. I say this because Phoenix Presence already handles multi-node sync and fault handling.
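If that pans out, it might look something like this sketch built on Phoenix.Tracker, the replication engine underneath Phoenix Presence (MyApp.PubSub and the "credentials" topic are placeholders):

defmodule Coherence.CredentialTracker do
  use Phoenix.Tracker

  def start_link(opts) do
    opts = Keyword.merge([name: __MODULE__, pubsub_server: MyApp.PubSub], opts)
    Phoenix.Tracker.start_link(__MODULE__, opts, opts)
  end

  def init(opts) do
    {:ok, %{pubsub_server: Keyword.fetch!(opts, :pubsub_server)}}
  end

  # a login becomes a tracked entry that the tracker's CRDT replicates
  # to every node; a logout removes it (user_data must be a map)
  def login(pid, credentials, user_data),
    do: Phoenix.Tracker.track(__MODULE__, pid, "credentials", credentials, user_data)

  def logout(pid, credentials),
    do: Phoenix.Tracker.untrack(__MODULE__, pid, "credentials", credentials)

  # lookups are answered from the local replica -- no network hop
  def get_user_data(credentials) do
    case List.keyfind(Phoenix.Tracker.list(__MODULE__, "credentials"), credentials, 0) do
      {_credentials, user_data} -> user_data
      nil -> nil
    end
  end

  def handle_diff(_diff, state), do: {:ok, state}
end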

jesseshieh commented 6 years ago

Maybe we can just "elect" the first node sorted alphabetically to be the master. That way, if the master goes down, the next node in line will resume its responsibilities. What do you think?
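Concretely, something like this (untested; every node computes the same sorted list locally, so no coordination step is needed):

# when the master dies it drops out of Node.list/0, so the next
# node alphabetically takes over on the very next call
defp master_node do
  [node() | Node.list()]
  |> Enum.sort()
  |> List.first()
end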

smpallen99 commented 6 years ago

This seems to be a very common pattern for a clustered solution. So, I would expect this problem to already be solved in either the Elixir or Erlang community. Can you do a little digging to see if there is already a mechanism?

Perhaps it could be as simple as registering a global name. If the registration fails, then one of the nodes has already grabbed the name. Then, on every RPC call to the master, we check for a failure; if the call fails, the calling node grabs the global name and becomes the new master.

In other words, first to register is the master....
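Something like this with Erlang's built-in :global registry (a sketch only; the module and registered name are illustrative):

defmodule Coherence.CredentialStore.GlobalServer do
  use GenServer

  # the first node whose supervisor starts this wins the name; on the
  # others, start_link returns {:error, {:already_started, pid}} and
  # they simply call the winner
  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: {:global, :credential_master})
  end

  # resolves through the global registry wherever the master lives;
  # if the master node dies, :global drops the name and each node's
  # supervisor restart re-runs the registration race
  def get_user_data(credentials) do
    GenServer.call({:global, :credential_master}, {:get_user_data, credentials})
  end

  def init(state), do: {:ok, state}

  def handle_call({:get_user_data, credentials}, _from, state) do
    {:reply, Map.get(state, credentials), state}
  end
end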

What do you think?