hamiltop / rethinkdb-elixir

Rethinkdb client in pure elixir (JSON protocol)
MIT License

Connection Pooling #32

Open hamiltop opened 9 years ago

hamiltop commented 9 years ago

Poolboy

Poolboy works by providing exclusive access to a given resource in a "transaction".

Pros:

  • Dynamic connection allocation is taken care of for you.

Cons:

  • Connection affinity is required for cursors, which means a transaction may be required for the full length of the cursor. For long-running changefeeds, this will require one connection per changefeed.
  • Pipelining is not utilized, as each connection is checked out until a response is provided.

While my assumption was to use Poolboy, it seems that there are better approaches available. There are too many tradeoffs to be made for not a whole lot of benefit.

Hand rolled

An ideal connection pool will do the following:

  1. Connect to N clients in a cluster.
  2. Load balance queries between connections.
  3. Route query to master replica for the target table.
  4. If a connection dies, remove that connection from the rotation.
  5. If a query fails because of a server connectivity issue, retry query on another host.

Initial design:

A supervisor tree will be over N connections and a coordinator.

Client processes send queries directly to the connection, just like we currently do. In order to know which connection to contact, the client can request a connection from the coordinator. When the client is finished with the query it informs the coordinator (in order for the coordinator to properly balance connections).

This would require zero changes to Exrethinkdb.Connection; the connection would be unaware of pooling. We could provide an Exrethinkdb.ConnectionPool module with run/next/close functions that wrap the logic for interacting with the coordinator, so the end API for the user would be the same. We would also provide a use Exrethinkdb.ConnectionPool macro. Ideally you would replace use Exrethinkdb.Connection with use Exrethinkdb.ConnectionPool and your application would still work perfectly.
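As a rough sketch (not actual code from the repo; the Coordinator module and its checkout/checkin functions are hypothetical), the wrapper's run could look something like this:

defmodule Exrethinkdb.ConnectionPool do
  # Hypothetical sketch: ask the coordinator for a connection, run the query
  # on that connection directly, then tell the coordinator we are done so it
  # can balance load across connections.
  def run(query, pool) do
    conn = Exrethinkdb.ConnectionPool.Coordinator.checkout(pool)

    try do
      Exrethinkdb.Connection.run(query, conn)
    after
      Exrethinkdb.ConnectionPool.Coordinator.checkin(pool, conn)
    end
  end
end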

Routing queries to the proper replicas can be done fairly well in the coordinator by looking at table("foo_table") |> config |> run. The coordinator can do this on a timer (once every minute is probably sufficient) and then use this information to route queries to the right connection. In the event of stale_reads, the coordinator can load balance between non-master connections. We'll have to add table to the Query struct, but that's pretty straightforward.
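For illustration, the periodic refresh could be a simple GenServer timer loop like the following (a sketch only; the Coordinator module, its state layout, and the assumption that the ReQL config term is exposed as Exrethinkdb.Query.config/1 are all hypothetical):

defmodule Exrethinkdb.ConnectionPool.Coordinator do
  use GenServer

  # Sketch: refresh table config roughly once a minute and keep it in the
  # coordinator state so queries can be routed to the master's connection.
  def init(opts) do
    send(self(), :refresh_tables)
    {:ok, %{conn: opts[:conn], tables: %{}}}
  end

  def handle_info(:refresh_tables, state) do
    config =
      Exrethinkdb.Query.table("foo_table")
      |> Exrethinkdb.Query.config()
      |> Exrethinkdb.Connection.run(state.conn)

    Process.send_after(self(), :refresh_tables, 60_000)
    {:noreply, %{state | tables: Map.put(state.tables, "foo_table", config)}}
  end
end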

Exrethinkdb.ConnectionPool.run will report the success or failure of a query back to the coordinator. The coordinator will also be monitoring the Connection processes. There will also be logic to retry on a different server if a failure occurs (in the event of stale_reads; otherwise, retrying won't be useful until a new master is selected).

I think this custom approach will be fairly simple. It will be designed so that, in the worst case, failure leads to requests being routed to a less optimal replica.

jeregrine commented 9 years ago

There is an issue open for Ecto that solves this by "reserving" a worker: https://github.com/elixir-lang/ecto/issues/569. I'd be pretty strongly against hand rolling because this kind of thing is full of pitfalls. We could also ping @fishcakez and see if his sbroker might be a better fit here.

hamiltop commented 9 years ago

I think table affinity is not going to be provided by existing solutions. Table affinity will cut down on network bottlenecks on the server side.

The Ecto solution solves a very different problem of nested transactions. The solution they use (correct me if I'm wrong) is to permanently check out a connection for a long-lived process to use. The alternatives proposed are to provide the worker in the gen_call.

We are dealing with a very different set of constraints. The unit of exclusivity for a postgres connection is a full query/response or even a full transaction. For rethinkdb it's a query or a response. Since we can interleave queries and responses we only need exclusive access in very small chunks and the current implementation does it very well.

The performance bottleneck is purely server side. We will not have nor want multiple connections to the same host.

The hand rolled proposal is built on top of independently useful building blocks. The current connections are already done and work well. The coordinator is going to be necessary for table affinity in any solution we come across.

The ideal drop-in solution would be something that: 1) keeps track of established connections, and 2) reconnects broken connections.

Number 2 is exactly what a supervisor is for; that's already an out-of-the-box solution. The only problem we need to solve is how a client finds out which connections are currently alive. I'm fairly certain which_children will go a long way toward solving that.
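For example, something along these lines (a minimal sketch, assuming the connections live under a supervisor registered as :conn_sup):

# List the pids of currently-alive connection children; entries whose child
# field is :undefined or :restarting are filtered out.
live_conns =
  Supervisor.which_children(:conn_sup)
  |> Enum.filter(fn {_id, child, _type, _modules} -> is_pid(child) end)
  |> Enum.map(fn {_id, pid, _type, _modules} -> pid end)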

I'm going to take a very lightweight pass at this and see how difficult it actually is.

jeregrine commented 9 years ago

@hamiltop after talking with @ericmj and @fishcakez they've convinced me that maybe we don't need connection pooling and that I am probably wrong.

hamiltop commented 9 years ago

Table affinity is what's got me convinced I want to do some level of connection management. I hacked together a lightweight naive pool and the code is dead simple.

I skimmed the conversation in IRC and I think your previous comment here, about this being a smarter client than something like postgres, is accurate and changes the discussion a bit.

I'll hopefully have my code up soon.

hamiltop commented 9 years ago

https://github.com/hamiltop/exrethinkdb/blob/connection_pool/lib/exrethinkdb/connection/pool.ex

used like:

hosts = [[host: "localhost", port: 28015], [host: "localhost", port: 28015]]
{:ok, conn_pool} = Exrethinkdb.Connection.Pool.start_link([hosts: hosts])
conn = Exrethinkdb.Connection.Pool.get_connection(conn_pool)
Exrethinkdb.Query.table("people") |> Exrethinkdb.Connection.run(conn)
# alternatively, run from the conn_pool and it will pick a connection for you
Exrethinkdb.Query.table("people") |> Exrethinkdb.Connection.Pool.run(conn_pool)

It's naive, but accomplishes the basic goals. The optimizations would be in how get_connection works in order to provide table affinity.

As already implemented, cursors store the pid of their connection internally and will use that connection when next is called.

Linicks commented 9 years ago

@hamiltop, I really like the ideas that you've laid out for the hand-rolled solution because it takes into account the unique characteristics of RethinkDB. Poolboy and other technologies are going to make assumptions that are part of a different paradigm. I also like the fact that you were able to build a working version based on the code that's already been written. As @jeregrine mentioned, there are bound to be gremlins lurking in the dark, but that seems to be the case for just about everything in this line of work. At this point, I can't think of a good reason not to run with your solution.

hamiltop commented 9 years ago

Some interesting results:

Three configurations:

  1. Single local connection (single)
  2. A Pool of 4 connections to localhost (pool)
  3. A pool of 4 connections to localhost plus 1 connection to a remote host (cluster). The remote host has significant network latency (this is going from my laptop to my iMac over WiFi).

All of these were run with use_outdated: true. Without allowing outdated reads, the cluster mode has no real benefit if you aren't sharding your data. If you shard your data, the results are similar to use_outdated: true, in that clustering has a benefit.

The first test was a non-indexed read in a table of 10k records (looking for one record), repeated 40 times across 40 Tasks for a total of 1600 queries and 1.6M reads (non-indexed reads count one read per record). This was very much CPU bound in rethinkdb.

Real results now
{:pool, {24016272, :ok}}
{:single, {23215845, :ok}}
{:cluster, {17805260, :ok}}
Again, reversed
{:cluster, {18998542, :ok}}
{:single, {23591027, :ok}}
{:pool, {23255329, :ok}}

The next test was for an indexed get in the same table, but repeated 4000 times across 40 Tasks for a total of 160000 queries and 160000 reads (one read per get). This was CPU bound on the elixir side.

Real results now
{:pool, {19722294, :ok}}
{:single, {20719733, :ok}}
{:cluster, {35041397, :ok}}
Again, reversed
{:cluster, {30579648, :ok}}
{:single, {20782334, :ok}}
{:pool, {20139856, :ok}}

Conclusions

For heavy non-indexed reads (or anything CPU bound in the DB), clustering is a benefit. For heavy indexed reads (or anything light on CPU in the DB), clustering slows things down (or at best doesn't really help much).

Additionally, fine-tuning the pool is important. When I ran the cluster test with 1 local and 1 remote connection, it was much slower than with 4 local and 1 remote. This was a very lopsided network, but I've encountered lopsided clusters aplenty in production environments.

I think the takeaway is that a developer will need to weight hosts in the pool according to their requirements. The configuration should be something like:

[
  [host: "db1", port: 28015, weight: 2],
  [host: "db2", port: 28015, weight: 3],
  [host: "db3", port: 28015, weight: 1]
]

Additionally, table affinity is definitely required. Hitting the connection that isn't the master for the table was costly.

Linicks commented 9 years ago

The results are exactly what I would expect them to be given the parameters you described. In my mind this seems to indicate that the initial design is working properly. This will add the flexibility needed to get the most out of RethinkDB and the application. A couple of things that I was thinking about were pools that had equally weighted hosts. For example, if you have a RethinkDB cluster in data-center A and another RethinkDB cluster in data-center B, it seems logical to have the following configuration:

[ [host: "dca_db1", port: 28015, weight: 0], [host: "dca_db2", port: 28015, weight: 0], [host: "dca_db3", port: 28015, weight: 0] [host: "dcb_db1", port: 28015, weight: 1], [host: "dcb_db2", port: 28015, weight: 1], [host: "dcb_db3", port: 28015, weight: 1] ]

This would allow the application servers in data-center A to utilize the local network for best performance, but fall back to the cluster in data-center B. Given that they are equally weighted in each group, I would expect that the load would be balanced among them.

I have been kicking around the idea of adding tags that could be used to specify things like racks, data-centers, etc, but the weight parameter may cover most cases. At this point I'm not sure if they would help or hurt, but I thought the idea was at least worth mentioning.

--Nick

hamiltop commented 9 years ago

weight: 0 would need to be a special case, as weights are traditionally applied as P(use this host) = (weight of host) / sum(weights), which in the case of weight 0 would be 0% probability. We could probably special-case it so that if sum(weights) is 0 then we just round robin.
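A minimal sketch of that weighting scheme (hypothetical helper, not code from the branch; the zero-weight fallback here is a plain uniform choice rather than strict round robin):

defmodule PoolWeights do
  # Pick a host with probability weight/sum(weights); when the sum of the
  # weights is 0, fall back to a uniform choice instead of dividing by zero.
  def pick_host(hosts) do
    total = hosts |> Enum.map(fn host -> host[:weight] end) |> Enum.sum()

    if total == 0 do
      Enum.random(hosts)
    else
      roll = :rand.uniform(total)

      Enum.reduce_while(hosts, roll, fn host, acc ->
        if acc <= host[:weight], do: {:halt, host}, else: {:cont, acc - host[:weight]}
      end)
    end
  end
end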

I did find a bug in some code that isn't pushed yet: :random.uniform has state local to each process. The current code (my new code has not been pushed yet) resulted in all processes using the same round-robin ordering of connections. The tests were very useful in exposing that.

fishcakez commented 9 years ago

You have a couple of issues regarding failure handling with the proposed pooling method:

The case where all connections are unavailable isn't handled. This could happen if all connections are :restarting, in which case a client will crash when generating the random integer. Perhaps that is desired.

A supervisor blocks while starting a child. In the current implementation connect and handshake occur in the init/1 callback. This means that the supervisor/the pool is blocked and all client processes requesting a worker are blocked too while a single connection process starts.

If one of the rethinkdb servers is down or there is an issue with connecting/sending data, a connection process will fail to connect. This will either result in the supervisor blocking for a significant time while a connection process tries to start (as above) or, more likely, the supervisor will shut down because the max_restarts limit is reached when a connection process keeps failing to start in rapid succession.

Finally, the pooling strategy does not provide any backpressure or load shedding to client processes beyond the connection processes' message queues. Designing around a minimal number of connection processes that pipeline requests makes this a potential bottleneck.

hamiltop commented 9 years ago

Thanks for chiming in @fishcakez

The case for "unable to acquire a connection" has not been addressed at any level yet, but either a "let it crash" philosophy or responding with {:error, msg} should suffice for both the individual connection case and the pool case.

To check my understanding, the init function of a GenServer is run as part of GenServer.start_link and therefore it causes the supervisor to block. Is it a common pattern to send oneself a :connect message to be processed immediately after init? How is this otherwise handled?

How is the max_restarts issue (where one buggy child kills all the other healthy children as it brings down the supervisor) solved elsewhere? By not using supervision?

Backpressure is applied to individual clients through GenServer.call. The client process will block until it receives a response. This does nothing to address an unbounded number of client processes. It also does not deal with load shedding, as stated. Would a simple solution be for an individual connection to keep track of active clients and respond :too_busy? The client should then try another connection, or, in the case of not using a connection pool, just fail. Changefeeds would need to be accounted for differently because they don't have the same resource footprint as active connections.

While I'm exploring the idea of a custom pool with table affinity, I'm committed to making Connection compatible with existing connection pooling solutions like poolboy. Not using pipelining isn't the end of the world, and this is how it is done in other languages. This custom pooling is an optimization. Perhaps there's a hybrid solution that can be found.

Linicks commented 9 years ago

" We could probably special case it so that is sum(weights) is 0 the we just round robin."

-- That is the behavior that I had in mind with my sample configuration. Would it be possible to round robin any weights that are the same? In my example config above, the idea was to round robin the primary cluster, then fail over to another cluster and round robin those connections. If that's not possible, I suppose the config could look like this, as long as 0 was selected before the higher-numbered weights:

[ [host: "dca_db1", port: 28015, weight: 0], [host: "dca_db2", port: 28015, weight: 0], [host: "dca_db3", port: 28015, weight: 0] [host: "dcb_db1", port: 28015, weight: 1], [host: "dcb_db2", port: 28015, weight: 2], [host: "dcb_db3", port: 28015, weight: 3] ]

"This custom pooling is an optimization. Perhaps there's a hybrid solution that can be found." -- Darn those gremlins :)

hamiltop commented 9 years ago

The only two extension points I'd like are:

  1. Oversubscribing checkouts and connections so we can take advantage of pipelining.
  2. Custom pooling strategies (so table affinity can be done).

It may be worth seeing if an existing solution can be adapted. It may also be useful to dive down the rabbit hole and see how riak does client side routing.

hamiltop commented 9 years ago

https://github.com/inaka/worker_pool/compare/master...marianoguerra:issue_16_custom_strategies seems interesting. I'd still rather use Poolboy, just because of the experience and expertise with it within the Elixir ecosystem.

Number 1 from my previous comment could probably be accomplished via some hack layer on top of a custom strategy, something like "when creating a worker, wrap an existing connection if X and Y, else start a new connection".

ericmj commented 9 years ago

Not using pipelining isn't the end of the world, and this is how it is done in other languages.

You can still pipeline without using pools, either by making the pipelining explicit or by having a Task-like API where you can wait for multiple pipelined queries to complete.
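For instance, something like this would pipeline several queries over a single connection (a sketch only; conn is assumed to be an already-started Exrethinkdb.Connection, and the table names are made up):

# Fire several queries concurrently on one connection and await them all;
# the connection process multiplexes the requests on the wire.
queries = [Exrethinkdb.Query.table("people"), Exrethinkdb.Query.table("posts")]

results =
  queries
  |> Enum.map(fn q -> Task.async(fn -> Exrethinkdb.Connection.run(q, conn) end) end)
  |> Enum.map(&Task.await/1)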

hamiltop commented 9 years ago

Tasks are something I'd like to add anyway.

I don't think I'll be intellectually satisfied without oversubscription, but we can get to practical and useful without it.

As far as table affinity, anything wrong with N pools, one per server? Then we can bolt a layer on top pointing the client to the correct pool for a given query (with fallback to any pool if needed).

hamiltop commented 9 years ago

Re: Tasks

https://github.com/hamiltop/exrethinkdb/issues/33

fishcakez commented 9 years ago

To check my understanding, the init function of a GenServer is run as part of GenServer.start_link and therefore it causes the supervisor to block. Is it a common pattern to send oneself a :connect message to be processed immediately after init? How is this otherwise handled?

Exactly how you describe, casting a reconnect message to self in init/1 is very common.
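A minimal sketch of that pattern (module and field names are illustrative, not the actual Exrethinkdb.Connection implementation):

defmodule ConnectionSketch do
  use GenServer

  # Return from init/1 immediately so the supervisor is not blocked, then do
  # the actual TCP connect (and handshake) when the :connect message arrives.
  def init(opts) do
    send(self(), :connect)
    {:ok, %{opts: opts, socket: nil}}
  end

  def handle_info(:connect, state) do
    {:ok, socket} =
      :gen_tcp.connect(String.to_charlist(state.opts[:host]), state.opts[:port], [:binary, active: false])

    {:noreply, %{state | socket: socket}}
  end
end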

How is the max_restarts issue (where one buggy child kills all the other healthy children as it brings down the supervisor) solved elsewhere? By not using supervision?

The fault tolerance in OTP comes about by isolating errors at suitable levels. A supervision tree does this in layers, errors slowly bubble up the supervision tree as processes keep crashing. Hopefully at some point as close to the bottom of the tree as possible the state is fixed and normal service resumes.

Not being able to connect to the database (network partition) is a realistic error and it is reasonable to handle it. Ideally this error is contained so that connections to other databases are not affected. If the other connections are shut down because the max restart limit is reached, the fault has spread further than necessary.

A circuit breaker or fuse type system (such as https://github.com/jlouis/fuse) can be used to raise an alarm (http://www.erlang.org/doc/man/alarm_handler.html) and throttle retries, possibly backing off between attempts (such as https://github.com/ferd/backoff).

Would a simple solution be for an individual connection to keep track of active clients and respond :too_busy?

Yes.

I'm committed to making Connection compatible with existing connection pooling solutions like poolboy.

You could try a slightly different take on the usual semantics if max_overflow is set to 0. The poolboy transaction need only be used to set up the asynchronous request (or task, as in #33). It would not have to remain checked out for the lifetime of the request (i.e. check in the connection and then await the result). I am not sure what performance would be like, but it might be interesting to try as it would not require changing your Connection process.

hamiltop commented 9 years ago

The original goal of connection pooling pre 1.0.0 was that Exrethinkdb.Connection should work without any modifications with a connection pooling system. I think this discussion has been very useful. I think with a few modifications to Exrethinkdb.Connection, getting poolboy to work is just a matter of following some guidelines around configuration.

Modes of use

  1. Traditional Pool. Check out a connection and check it back in when you are done. For changefeeds and other queries that are tied to specific connections, you need to keep the connection checked out until you close the feed. Otherwise, that connection might be terminated or restarted. You can, however, use that checked out connection in as many processes as you like. This will be especially useful for tasks.
  2. Static Pool. max_overflow is set to 0. When you check out a connection you can check it back in immediately. After checking it in, you can still make requests on that connection. Changefeeds and other queries just work; no need to worry about hogging the connections in the pool.

I think the original goal of solving identified problems and then providing an example of how to use it with poolboy is still the correct approach. The other issue discussed here is table affinity and I'm not going to include smart routing in 1.0.0 for now.

Identified problems

  1. Blocking in init. New issue opened at https://github.com/hamiltop/exrethinkdb/issues/34
  2. Connection failure. New issue opened at https://github.com/hamiltop/exrethinkdb/issues/35
  3. Throttling/Back Pressure/Load Shedding. New issue opened at https://github.com/hamiltop/exrethinkdb/issues/36

Any discussions on those topics should be moved to the issues opened. Any new issues can and should be discussed in this thread.

hamiltop commented 8 years ago

I'm going to reduce this problem down to a fairly simple one. Table affinity is not necessary with a rethinkdb-proxy. Throttling/Back Pressure/Load Shedding is still a problem, as is handling timeouts properly. It should work with poolboy out of the box, but that has yet to be tested.

hamiltop commented 8 years ago

RethinkDB.Connection works fine with poolboy right now:

pool_options = [name: {:local, :hello_pool}, worker_module: RethinkDB.Connection, size: 10, max_overflow: 0]
:poolboy.start_link(pool_options, [])
a = :poolboy.checkout(:hello_pool)
RethinkDB.Query.table_list |> RethinkDB.Connection.run(a)

Some advisable defaults:

size == max_overflow EDIT: @fishcakez points out that we want max_overflow = 0. He's right. Cursors, changefeeds, and some other queries are tied to a connection process. If that connection is terminated, the query will fail. By setting max_overflow to zero, you can be a little loose about mutual exclusion and check a connection back in even though you are still using it: Poolboy won't terminate the connection, and the driver itself won't care, since it properly multiplexes requests.
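Concretely, the static-pool pattern looks roughly like this (building on the example above; max_overflow: 0 is what makes the early checkin safe):

# Check out a connection, check it straight back in, and keep using it.
# With max_overflow: 0, poolboy will not terminate the worker, so the pid stays valid.
conn = :poolboy.checkout(:hello_pool)
:ok = :poolboy.checkin(:hello_pool, conn)
RethinkDB.Query.table_list |> RethinkDB.Connection.run(conn)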

Changefeeds

Since changefeeds are persistent, mixing them into the pool makes for some difficult balancing. It might be worthwhile to create a separate pool for changefeeds in your application.

I may create a RethinkDB.Pool library with these defaults, but I don't want to make the driver dependent on poolboy.

hamiltop commented 8 years ago

Removing 1.0.0 milestone. The goal was to have a connection that worked easily with poolboy, which is what we have.

hamiltop commented 8 years ago

Recently discovered that RethinkDB has core <-> connection affinity. So while the Elixir side scales well with multiple clients, a single connection is going to perform slightly worse on the server side. Connection pooling should be a priority.

darkredz commented 8 years ago

Agreed that connection pooling should be a priority; it's kinda frustrating without this feature.

hamiltop commented 8 years ago

How is it frustrating? Can you provide some examples?

hamiltop commented 8 years ago

I updated the README to show how to use poolboy out of the box.

almightycouch commented 8 years ago

I have been working on an adapter for Ecto for a while (:rethinkdb_ecto). Currently it supports most of the Ecto DSL (including basic support for joins), storage, and migrations.

As Ecto 2.0 requires adapters to support DBConnection, I rewrote RethinkDB.Connection to implement the DBConnection behavior. It is almost ready now (SSL support missing, need to rewrite some tests).

Basically, DBConnection uses Connection as the default pool implementation. It also has built-in support for :poolboy and :sbroker, which means that moving from Connection to DBConnection should not break things.

Right now, the only thing you have to do to get started is to replace

worker(RethinkDB.Connection, [[name: :foo]])

with

supervisor(RethinkDB.Connection, [[name: :foo]])

To use a different pool implementation (:poolboy for example), we could implement something like this:

defmodule Repo do
  use RethinkDB.Connection, pool: DBConnection.Poolboy
end

If you are interested in using DBConnection, I will get my fork ready (fix a few things) and create a pull request.

Note that I also implemented the new V1_0 handshake (RethinkDB >= 2.3.0) with support for user authentication (SCRAM). I skipped both V0_3 and V0_4, which may be required to connect to older versions of RethinkDB.

hamiltop commented 8 years ago

Awesome! I appreciate the help.

My concerns are:

  1. No required dependency on Poolboy or sbroker. We should make those optional (shouldn't be hard).
  2. Should work with 2.0.0+
  3. Persistent checkouts from the pool and oversubscription are a priority. One changefeed per connection is not going to scale well; we need a mechanism to oversubscribe. Poolboy doesn't really handle that, but sbroker might. A possible convention might be to create two pools, one for changefeeds and one for normal queries.

Otherwise, please submit the PR! Love the help.

fishcakez commented 8 years ago

DBConnection does provide pooling (and poolboy and sbroker are optional dependencies), and the behaviour with pooling should be the same as without pooling; it is just a case of adding the pool: module option to choose the pool. This means a connection is always limited to one client process.

Ecto 2.0 does not require all adapters to use DBConnection, only the SQL adapters (as DBConnection is required for the (concurrent) sandbox).

I have no idea about changefeeds but I assume they are approximately pubsub. This won't play well with DBConnection unfortunately because the socket is passed to the client process on each call. Postgrex uses a different process to support postgres notifications.

I am not sure what the best strategy for pooling changefeeds is. The simplest might be to use poolboy and only carry out the subscription to a changefeed inside the poolboy transaction (checkout/checkin), then remain subscribed to the changefeed after checking the process back in to poolboy. It would likely be a good idea to use a :fifo queue, otherwise one connection may end up with all the subscriptions. This would allow both pooling and not pooling changefeeds. I am not sure if this will be good enough, but hopefully it is. Using sbroker, by contrast, would mean always pooling.
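A rough sketch of that idea (assuming poolboy's :fifo strategy option, and assuming that subscribing to a changefeed is just another query run on the checked-out connection; RethinkDB.Query.changes/1 is an assumption here, not confirmed by this thread):

pool_options = [name: {:local, :feed_pool}, worker_module: RethinkDB.Connection,
                size: 4, max_overflow: 0, strategy: :fifo]
:poolboy.start_link(pool_options, [])

# Subscribe inside the transaction, then keep consuming the feed after the
# worker has been checked back in (the feed stays tied to that connection pid).
feed = :poolboy.transaction(:feed_pool, fn conn ->
  RethinkDB.Query.table("people") |> RethinkDB.Query.changes() |> RethinkDB.Connection.run(conn)
end)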

hamiltop commented 8 years ago

Yeah, I noticed that DBConnection passing the socket to the client process means we can't multiplex queries.

In an ideal world I think a tunable "oversubscription" setting would be what we want. Then maintain two pools, one for changefeeds and one for normal queries. Even in the query case, oversubscription on the connection is efficient.

fishcakez commented 8 years ago

Shackle is a pooling library that supports multiplexing: https://github.com/lpgauth/shackle. It works quite differently to DBConnection but provides similar abstractions.

almightycouch commented 8 years ago

Is your fork ready for a discussion of merging into the mainline?

Hi @hamiltop, the fork is ready to be merged without conflicts.

All the tests are passing successfully, with the exception of 5 tests in test/connection_test.exs which must be rewritten to work with the DBConnection behaviour. Some new tests should also be written to cover the connection pools.

There are a few implementation details that should be discussed though:

hamiltop commented 8 years ago

Opened #106 to discuss