brianc / node-postgres

PostgreSQL client for node.js.
https://node-postgres.com
MIT License

The connection pool sucks #227

Closed · brianc closed this 11 years ago

brianc commented 11 years ago

Hello -

I think the client object interface is pretty good. What is not a good interface is the current incarnation of the pool, namely pg.connect. Here are some reasons why:

First of all, a quick aside: the pooling was added to the module before there was a lot of wisdom around "tiny modules that just do one thing." Sooo really it would be better to have the pool in a completely separate module. I would probably like to start developing the pool off in its own module and maybe eventually rip the built-in stuff out... but I don't like breaking APIs. Anyways....

I think a good API makes the right thing easy and the wrong thing hard. Currently the pool is just about opposite of that.

Here are some examples of the pool sucking:

dead clients

Because the pool waits for a client's query queue to drain, and automatically returns the client once that queue drains, a client that never receives a query is never returned. Eventually every client in your pool is checked out and idle, and your app hangs. Here is an example of this total bullshit.

app.get('/', function(req, res, next) {
  pg.connect(app.constring, function(err, client) {
    if(!req.session) {
      return next();
      //way to go partner. you just leaked a connection
    }
    client.query('SELECT * FROM customers WHERE id = $1', [req.session.id], function(err, result) {
      var html = app.render('home', {user: JSON.stringify(result.rows[0])});
      res.send(html);
    })
  });
});

transactions

On the flip side of dead connections, the greedy 'return on first drain event' release mechanics push the client back into the pool of available clients far too early.

pg.connect(/*constring*/, function(err, con) {
  con.query('begin', function(err) { //start our transaction
    con.query('update bla', function(err) {
      setTimeout(function() {
        //way to go partner, you returned this connection to the pool with an open transaction
        //better hope no one else wants to run a query for 2 seconds or some weird stuff is gonna happen
        con.query('commit');
      }, 2000);
    });
  });
});

That right there is an example of the 'auto release' being both too greedy and not greedy enough.

I am going to fix this

I'd like to open up discussion around a different API for the pool. After 24/48 hours I'll get started on a new implementation. So here's what I'm thinking... this would be backwards compatible by adding a 3rd parameter to the pg.connect callback: a function you call when you want to release the client back to the pool. The release callback itself takes a single error argument in the style of node. If the argument is non-null, the client will be closed and removed from the pool instead of being returned to it. This is how it would look:

pg.connect(/*constring*/, function(err, client, done) {
   //hmmm....turns out I actually don't need this client
  done();
});
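To make the proposed done(err) semantics concrete, here is a minimal runnable sketch of how such a pool could behave internally. This is a hypothetical illustration, not node-postgres code: makePool and its fields are invented names, and a real pool would also handle replacement clients for waiters after a drop.

```javascript
// Minimal sketch (hypothetical) of the proposed done(err) semantics:
// done() returns the client to the pool; done(err) drops it instead.
function makePool(createClient, size) {
  var idle = [];     // clients checked in and ready for reuse
  var total = 0;     // clients currently in existence
  var waiting = [];  // callbacks queued because the pool is exhausted

  function handOut(client, cb) {
    cb(null, client, function done(err) {
      if (err) {
        total--; // broken client: drop it rather than reuse it
        // (a fuller pool would create a replacement for any waiters here)
      } else if (waiting.length) {
        handOut(client, waiting.shift()); // hand it straight to a waiter
      } else {
        idle.push(client);
      }
    });
  }

  return {
    connect: function (cb) {
      if (idle.length) return handOut(idle.pop(), cb);
      if (total < size) {
        total++;
        return handOut(createClient(), cb);
      }
      waiting.push(cb); // everything is checked out; wait for a release
    }
  };
}
```

With a pool of size 1, a clean done() means the next connect reuses the same client, while done(err) forces a fresh client to be created.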

or what about doing a transaction? Currently you have to do some nasty stuff with pauseDrain() and resumeDrain(), which are themselves somewhat kludgey hacks I put on because I was mentally exhausted and planning a home remodel + my wedding. Well, I'm back now, refreshed, and want to make things better and all of our lives easier...

pg.connect(/*constring*/, function(err, client, done) {
  client.query('begin', function(err) {
    if(err) return done(err);
    client.query('update something', function(err) {
      client.query('commit', function() { done() });
    })
  });
});

Also, the 1st parameter (connection info) is optional. I think node-postgres should use the same environment variables as the psql command to get default connection info, and then supplement with pg.defaults and then override with custom stuff passed in. So if you use defaults or the environment variables, you can do this:

pg.connect(function(err, client, done) {
  done();
});
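The layering described above (environment variables as the base, pg.defaults over that, explicit config on top) can be sketched like this. resolveConfig is a hypothetical helper, invented here just to illustrate the precedence; the PGHOST/PGPORT/PGUSER/PGDATABASE names are the real psql environment variables.

```javascript
// Hypothetical sketch of the precedence described above:
// environment (lowest) < defaults < explicit config (highest).
function resolveConfig(env, defaults, explicit) {
  var config = {
    host: env.PGHOST || 'localhost',
    port: env.PGPORT ? parseInt(env.PGPORT, 10) : 5432,
    user: env.PGUSER,
    database: env.PGDATABASE
  };
  [defaults, explicit].forEach(function (layer) {
    Object.keys(layer || {}).forEach(function (key) {
      if (layer[key] !== undefined) config[key] = layer[key];
    });
  });
  return config;
}
```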

The pool needs a cleaner way to shut itself down too without being destructive. I'm not sure how that should work currently.
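One possible (purely hypothetical) shape for such a non-destructive shutdown: stop handing out clients, end the idle ones immediately, and finish once every checked-out client has been returned. The pool fields used here (idle, checkedOut, draining) are illustrative names, not real pg API.

```javascript
// Hypothetical sketch of a graceful pool shutdown, assuming a pool
// object that tracks its idle clients and a checked-out count.
function drainPool(pool, cb) {
  pool.draining = true; // connect() would refuse new checkouts in this state
  pool.idle.forEach(function (client) { client.end(); });
  pool.idle = [];
  (function check() {
    if (pool.checkedOut === 0) return cb(); // nothing left in flight
    setTimeout(check, 100); // poll until busy clients come back
  })();
}
```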

feedback

I need your feedback on this. Let's hear it!

freewil commented 11 years ago

pg.connect(/*constring*/, client, done) {
  //hmmm....turns out I actually don't need this client
  done();
});

Typos on the pg.connect() syntax?

pg.connect(/*constring*/, function(err, client, done) {
   //hmmm....turns out I actually don't need this client
  done();
});

Instead of adding done to the pg.connect() callback, what do you think about just adding a client.done() method? This would be similar to the client.end() you would use if creating the connection manually, but for pooling only.

pg.connect(/*constring*/, function(err, client) {
   //hmmm....turns out I actually don't need this client
  client.done();
});
brianc commented 11 years ago

ha! that's a pretty ridiculous typo. My bad. Fixed 'em up.

I thought about adding a client.done() method, but it seems kinda dirty to hang it on the client object... much like pauseDrain() and resumeDrain() are kinda dirty in that they're only really useful in a pooling situation. I know 80% of the time it makes sense to pool the client, but... I dunno... it just feels weird to me. And then do you make the .done() method a no-op on non-pooled clients, or just not include it at all? Then you have a different interface on the object depending on whether it's pooled. Another option could be:

pg.acquire(/*constring*/, function(err, client) {
  client.query('bla bla', function(err, res) {
    pg.release(client);
  });
});
freewil commented 11 years ago

And then do you make the .done() method a no-op on non-pooled clients or just not include it at all?

Should probably throw an informative error such as cannot call done() on non-pooled client, use end() to close a non-pooled connection
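A tiny sketch of that guard. Client, _pooled, and _returnToPool are hypothetical names used only to show the shape of the check:

```javascript
// Hypothetical: a done() method that errors loudly on non-pooled clients.
function Client(pooled) {
  this._pooled = pooled;
  this._returned = false;
}
Client.prototype._returnToPool = function () { this._returned = true; };
Client.prototype.done = function () {
  if (!this._pooled) {
    throw new Error('cannot call done() on non-pooled client, ' +
                    'use end() to close a non-pooled connection');
  }
  this._returnToPool();
};
```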

Another option could be:

pg.acquire(/*constring*/, function(err, client) {
  client.query('bla bla', function(err, res) {
    pg.release(client);
  });
});

I really like the acquire() and release() methods, since it makes it much clearer to a newbie that they are using a different API (pooled connections) rather than a non-pooled one.

grncdr commented 11 years ago

Full disclosure: I am the author of a library (any-db) that implements driver-agnostic client/connection objects, pooling and transactions.

I think the client object interface is pretty good. What is not a good interface is the current incarnation of the pool.

Bluntly, connection pooling should be removed from pg entirely. Moving what is essentially an application-level concern out of the driver implementations enables third party libraries to address it, possibly in a generic way or as part of an ORM etc. Understandably, not everybody will want the overhead of an adapter layer like any-db, but even moving the current pool into a separate pg-client-pool library makes sense from a design & maintenance standpoint IMO.

If nothing else, I hope the APIs of any-db can serve as another reference point for this discussion, so I've explained below how any-db deals with the problems you outlined.

Connections vs. Pools

Both of them have a query method that acts exactly like the existing postgres query method. The difference is that calling query on a pool returns the connection to the pool immediately after the query ends. They also both have begin methods that return a new transaction object.

Transactions

I've reified transactions as an object that acts like a connection with additional commit and rollback methods. You are guaranteed that every call to query on a transaction object happens in a single transaction.

Examples

dead clients

When you query against a ConnectionPool your code never sees a client directly:

var pool = pg.createPool('...')
pool.query('SELECT ...', function (err, res) {
  // no client means no dead client.
})

transactions

When you hide the connection inside a JS object that represents a transaction, you don't have to worry about anything happening to your connection that you don't explicitly request of that object:

var pool = pg.createPool('...')
function doStuffInTransaction (done) {
  pool.begin(function (err, tx) {
    if (err) return done(err)
    tx.query('...')
    tx.query('...')
    tx.commit(done)
  })
}
brianc commented 11 years ago

@grncdr I recently saw any-db. It looks really good! I think for a slightly higher level API it makes total sense, and I definitely plan on pointing anyone in that direction in the future if they are looking for a pooling lib w/ some nice helpers.

If you need contributions on anything in particular I'd love to help you out.

I also completely agree with you that the pooling should be removed from node-postgres completely. It definitely belongs in its own module. The yucky part is going to be deprecating pg.connect and friends.

I also am a fan of creating a pool explicitly rather than having the pooling library create one magically for you.

I think there might be room for a very basic pool without the query and begin methods... though I do like those. I just think there are legitimate cases where you'd rather do stuff with your client directly than just run queries. Much of this stuff would probably be postgres specific, e.g. listen/notify and piping raw binary. The dependency graph I'm envisioning is something like this:

node-postgres <- pg-pool <- any-db. Then I'll deprecate the following:

  1. pg.connect
  2. pg.on('error')
  3. pg.end()
  4. client#pauseDrain()
  5. client#resumeDrain()

Deprecating methods sucks. It's annoying to everyone using the features, even if the features are cumbersome. Not looking forward to that. :frowning:

@freewil I kinda like the acquire and release explicitness as well. The only concern I have is that it's not a drop-in, backwards compatible replacement for what currently exists. I'm gonna spike the pool library tomorrow and we can keep discussing APIs for the pooling over there.

grncdr commented 11 years ago

The connection pool in any-db also has acquire and release methods that behave exactly as you'd expect; it just adds some helpers for the two most common cases (isolated queries and transactions).

As far as any-db & pg goes, the pg driver is actually the closest to the API of any-db of any of the drivers. I've opened a pull request (#228) for the one change in behaviour I would need to not have to wrap it.

brianc commented 11 years ago

Dang, if any-db already has all that, do you think it might be more helpful to just contribute to any-db versus starting another stand-alone pool specific to node-postgres? I'm going to merge your pull req after work today and dig deeper into the any-db code.

grncdr commented 11 years ago

It makes sense to me, and I'd love it if there were more contributors to any-db. My one caveat is that any-db is very immature compared to the pg driver. It works as far as it's been tested, and I use it in a production application (with pg), but I'd definitely want more eyes on the code (and more tests) before it's promoted as "the way" to do connection pooling with postgres.

That being said, all of the above applies to creating a new connection pool from scratch, and any-db is already pretty far along.

FAQinghere commented 11 years ago

I took a look at any-db.

First, the idea of a generic interface that proxies queries to the underlying adapter is fantastic, and I agree that it should provide transactions and pooling.

I would, however, refactor out the pooling. Perhaps along the lines of:

https://github.com/coopernurse/node-pool via: http://blog.argteam.com/coding/node-js-postgres-pooling-revisited-with-transactions/

A bigger issue, however, is that the interface should accept any correctly formatted query and proxy it through to the underlying adapter. The generic interface should not massage the query itself (though of course it can wrap it, etc.). Generating correct SQL queries for the desired syntax/adapter should be the responsibility of the application (perhaps with the help of a sql-builder that is already set up to generate 'correct' queries).

The seriousness of this is immediately seen when using any-db with pg: we would not be able to use the object query syntax, and would thus lose the valuable functionality that named statements give us. Likewise, there is no reason to massage parameter placeholders at the interface layer, as that will already have been handled by a proper sql-builder or the application.
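The pass-through behaviour being argued for can be sketched in a few lines: the generic interface forwards whatever the application sent, untouched, so driver-specific features like pg's named-statement query objects survive. proxyQuery and the fake adapter below are illustrative only.

```javascript
// Sketch: a generic layer that forwards queries without rewriting them.
function proxyQuery(adapter, query, params, cb) {
  // no placeholder rewriting, no flattening of query objects:
  // the underlying adapter sees exactly what the caller passed
  return adapter.query(query, params, cb);
}
```

Under this rule, a pg-style named statement object ({name: ..., text: ...}) arrives at the driver as the very same object the application constructed.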

The goal should be:

AnyCorrectlyFormattedQueryForTheDesiredAdapter -- a generic interface (any-db?) -- TheDesiredAdapter

What do you guys think?

By the way, the API for any-db looks perfect to me. I'd keep that exact API: conn.query(correctlyFormattedQuery..., callback)

FAQinghere commented 11 years ago

@grncdr

Great job stripping any-db back down to its core functionality (a common db query interface, pooling, and transactions). I have already started using any-db to proxy db queries between an ORM and pg.

btakita commented 11 years ago

Hello, I'm having connection pool problems. I stripped out my libraries from the stack trace.

Trace: error: sorry, too many clients already
  at adjustCallback (/Users/btakita/workspace/api-account/node_modules/pg/node_modules/generic-pool/lib/generic-pool.js:187:7)
  at exports.Pool.me.acquire (/Users/btakita/workspace/api-account/node_modules/pg/node_modules/generic-pool/lib/generic-pool.js:229:13)
  at PG.connect.pools.(anonymous function).genericPool.Pool.create.connectError (/Users/btakita/workspace/api-account/node_modules/pg/lib/index.js:59:9)
  at g (events.js:193:14)
  at EventEmitter.emit (events.js:93:17)
  at p.connect (/Users/btakita/workspace/api-account/node_modules/pg/lib/client.js:121:14)

I'm confused over the status of the connection pooling idioms and what the recommended approach is. I'd like to fix this issue in a way that follows the recommended approach.

grncdr commented 11 years ago

@btakita it looks like you're hitting the connection limit on the Postgres server and it really doesn't have anything to do with the connection pooling itself. Do you know what your max_connections setting for postgres is and how many clients you already have connected?

To clarify, this issue is about the usage of the connection pool in node-postgres being somewhat awkward for users. It works fine, but has some odd edge-case issues.

btakita commented 11 years ago

My max_connections is set to 20. The script that I'm running creates many clients.

This issue seems to occur right after the queueing of connections stops and connections start becoming free. Sorry, I added some of my own logging.

INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11073 available=0
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11074 available=0
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11075 available=0
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249917
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11075 available=1
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - reusing obj
Trace: error: sorry, too many clients already
  at Messages.extend.loadPresenterLiveCache.ctx.feedItemsQuery.on.waitForAllAdds (/Users/btakita/workspace/api-account/collections/feed_items.js:146:23)
  at Message.extend.author._.extend.message (/Users/btakita/workspace/api-account/models/feed_item.js:26:9)
  at Model.extend.findById (/Users/btakita/workspace/api-account/models/user.js:292:9)
  at exports.query (/Users/btakita/workspace/api-account/node_modules/pg-simple/index.js:15:7)
  at adjustCallback (/Users/btakita/workspace/api-account/node_modules/pg/node_modules/generic-pool/lib/generic-pool.js:187:7)
  at exports.Pool.me.acquire (/Users/btakita/workspace/api-account/node_modules/pg/node_modules/generic-pool/lib/generic-pool.js:229:13)
  at PG.connect.pools.(anonymous function).genericPool.Pool.create.connectError (/Users/btakita/workspace/api-account/node_modules/pg/lib/index.js:59:9)
  at g (events.js:193:14)
  at EventEmitter.emit (events.js:93:17)
  at p.connect (/Users/btakita/workspace/api-account/node_modules/pg/lib/client.js:121:14)
Trace: error: sorry, too many clients already
  at Messages.extend.loadPresenterLiveCache.ctx.feedItemsQuery.on.waitForAllAdds (/Users/btakita/workspace/api-account/collections/feed_items.js:146:23)
  at Message.extend.author._.extend.message (/Users/btakita/workspace/api-account/models/feed_item.js:26:9)
  at Model.extend.findById (/Users/btakita/workspace/api-account/models/persona.js:41:9)
  at exports.query (/Users/btakita/workspace/api-account/node_modules/pg-simple/index.js:15:7)
  at adjustCallback (/Users/btakita/workspace/api-account/node_modules/pg/node_modules/generic-pool/lib/generic-pool.js:187:7)
  at exports.Pool.me.acquire (/Users/btakita/workspace/api-account/node_modules/pg/node_modules/generic-pool/lib/generic-pool.js:229:13)
  at PG.connect.pools.(anonymous function).genericPool.Pool.create.connectError (/Users/btakita/workspace/api-account/node_modules/pg/lib/index.js:59:9)
  at g (events.js:193:14)
  at EventEmitter.emit (events.js:93:17)
  at p.connect (/Users/btakita/workspace/api-account/node_modules/pg/lib/client.js:121:14)
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249954
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11072 available=1
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - reusing obj
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249973
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11054 available=1
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - reusing obj
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249974
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11053 available=1
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - reusing obj
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249975
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11052 available=1
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - reusing obj
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249976
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11051 available=1
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - reusing obj
VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - timeout: 1358186249978
INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11050 available=1
btakita commented 11 years ago

AFAICT, it seems like the connections stop being created at 20...

  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=1 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=1
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=1 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=2
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=2 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=3
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=3 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=4
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=4 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=5
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=5 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=6
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=6 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=7
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=7 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=8
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=8 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=9
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=9 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=10
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=10 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=11
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=11 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=12
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=12 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=13
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=13 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=14
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=14 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=15
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=15 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=16
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=16 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=17
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=17 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=18
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=18 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=19
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=19 available=0
  VERBOSE pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() - creating obj - count=20
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=20 available=0
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=21 available=0
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=22 available=0
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=23 available=0
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=24 available=0
  INFO pool postgres://postgres:@localhost:5432/sumuru_dev - dispense() clients=25 available=0
grncdr commented 11 years ago

Could you include the code that is creating connections / making queries? This still looks like the server running out of available connections. Also, have you set pg.defaults.poolSize to something other than the default value (10)?

btakita commented 11 years ago

Yes, I set the poolSize to 20. It seems like it stops creating connection objects at 20 as well.

var self = this, ctx = {};

function main() {
  pg.connect(config.database_url, function(err, pgClient) {
    if (err) {
      options.error(self, err);
    } else {
      ctx.pgClient = pgClient;
      ctx.author = model.get('author');
      // ...
    }
  });
}

main();
btakita commented 11 years ago

I can debug this. I just don't know if I'm using the correct API. This discussion makes it sound like pg.connect is deprecated and I should be using any-db.

However, I noticed that node-postgres uses generic-pool. I'm confused over the state of this project, specifically over connection pooling.

grncdr commented 11 years ago

I can't speak for @brianc, but I'd say give any-db a try, the API is fairly stable at this point and it should be totally usable.

That being said, the connection pooling in pg is not deprecated yet, and is known to work well for a lot of people, so it may make sense to stick with that depending on your project.

btakita commented 11 years ago

I have live requests and have seen issues in the wild related to this.

It appears that any-db has its own postgres adapter. Is that true? Will I have to manually release connections that I obtain from the pool?

grncdr commented 11 years ago

any-db uses this postgres driver under the hood, check out the documentation for ConnectionPool.query, it should work for what you're doing.

btakita commented 11 years ago

It seems like 20 is too high for the connection pool. The errors go away when I set the size to 15 or less.

brianc commented 11 years ago

Is it possible you have something else open to your database? Any way you could track how many connections are open to it from your node process? I don't think the pool will take out more than pg.defaults.poolSize connections.

also, yes @grncdr any-db is the way to go

btakita commented 11 years ago

Yeah, my max_connections was set to 20. I guess that's the default?

mcavage commented 11 years ago

Hi Brian,

I'm way late to this thread, but FWIW, I (Joyent) wrote my own pool on top of node-postgres, as I found a lot of issues with it (the ones you mentioned, plus managing database failovers).

We run an H/A postgres topology managed by ZooKeeper, and apps have to automatically follow the topology when it changes - the built-in pooling was basically untenable for that (in particular, if postgres crashes you end up getting "black holed" on connections, so there's health checking, connect timeouts, query timeouts, etc., etc. - basically a mess of dealing with all the edge cases).

I don't want to be presumptuous, but if it's useful I think we'd be happy to just give you that code - it's ~500 LoC, on top of https://github.com/mcavage/node-pooling (which exists primarily b/c node-pool doesn't trap errors/timeouts from objects). I don't know if you'd want it as is, but if it's useful to look at, let me know.
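The query-timeout piece mcavage describes can be sketched like this (this is not his actual code, just an illustration of the pattern): fail the callback if the query doesn't answer in time, and ignore a late reply from a client that already timed out so the caller can treat it as suspect.

```javascript
// Hypothetical sketch of a query timeout wrapper around a client.
function queryWithTimeout(client, text, ms, cb) {
  var timedOut = false;
  var timer = setTimeout(function () {
    timedOut = true; // caller should health-check or discard this client
    cb(new Error('query timed out after ' + ms + 'ms'));
  }, ms);
  client.query(text, function (err, res) {
    if (timedOut) return; // the timeout already fired; drop the late reply
    clearTimeout(timer);
    cb(err, res);
  });
}
```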

m

brianc commented 11 years ago

@mcavage I absolutely would love to see that code. Maybe we could spin it out into a much better pooling library specific for postgres. :clap: :dancer:

lalitkapoor commented 11 years ago

I'd be interested in seeing this happen as well.

Cheers, L

Lalit Kapoor goodybag.com (http://www.goodybag.com)

On Wednesday, January 16, 2013 at 3:42 PM, Brian C wrote:

@mcavage (https://github.com/mcavage) I absolutely would love to see that code. Maybe we could spin it out into a much better pooling library specific for postgres.

— Reply to this email directly or view it on GitHub (https://github.com/brianc/node-postgres/issues/227#issuecomment-12341811).

mcavage commented 11 years ago

Okie dokie - here it is: https://gist.github.com/62c58046255f7ac181a6 - take a look at your leisure and we can figure out how to move this forward (I'm fine with changing anything there). There's no license in there, but let's just call it MIT :)

nicholasf commented 11 years ago

Yes, we're trying to solve this problem too.

Vanuan commented 11 years ago

So, how do I fix the error: sorry, too many clients already error? Do I just add client.end() when the connection is not needed? Or should I create a long-lived connection per user?

grncdr commented 11 years ago

@vanuan use any-db, and ConnectionPool.query. Like so:

var pool = require('any-db').createPool('postgres://user:pass@localhost/databasename', {min: 2, max: 10})

// make a query
pool.query("SELECT * FROM users WHERE id = $1", [userId], function (err, res) {
  // do something with results.
})

This will never use more than 10 simultaneous connections, and you don't have to worry about freeing them.

This is the correct approach 99% of the time. The only exception is if you need to perform multiple queries in a single transaction, in which case you need to hang on to a single connection for the duration of the transaction; that is what ConnectionPool.begin and Transaction objects are for.

pool.begin(function (err, tx) {
  tx.query(...)
  tx.query(...)
  tx.commit(function (err) {
    // if err == null then the transaction committed successfully
  })
})

There are more detailed API descriptions for all of the any-db interfaces (they're the same as what pg exposes) at the above links. If you run into stuff you don't understand, open issues requesting clarification on any-db.

TeeTeeHaa commented 11 years ago

After a colleague and I both independently stumbled over the "dead clients" bug of the client pool, I would really appreciate a reference to this issue in the official documentation. It could save other programmers some time...

Of course I'd appreciate a bug free implementation of the client pool even more.

booo commented 11 years ago

Feel free to add documentation at the appropriate places...

TeeTeeHaa commented 11 years ago

Did so, hopefully at the right place and with a neutral wording.

brianc commented 11 years ago

Thanks @TeeTeeHaa. I didn't get a chance to work on this over the weekend, but this is the next thing I'm going to try & fix.

DavidIAm commented 11 years ago

I have additionally modified the example to show code illustrating that you do need to take extra action if you don't actually run a query on a connection.

brianc commented 11 years ago

Closed with pull request #274.
