brianc / node-pg-pool

A connection pool for node-postgres
MIT License

min connections support #135

Closed — djcuvcuv closed this issue 4 years ago

djcuvcuv commented 4 years ago

Thanks for your help and attention on this. I've noticed that there doesn't appear to be a way to set a min number of pool connections; only a max. In my case, when my app receives a spike in incoming requests after being idle for a while (as is very common), app performance takes a significant hit while it spins up pool connections.

Is there no way to set a min boundary so that some number of pool connections are always kept open? The only workaround I can think of is to keep connections from ever being terminated by setting the idle timeout (`idleTimeoutMillis`) to some very large value (or to 0, which disables it). But exploiting the idle timeout would not be an ideal solution in any case.

Thanks again! -Chris
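For reference, the idle-timeout workaround described above would look roughly like this as a pg `Pool` config sketch (`idleTimeoutMillis` is pg's documented option; setting it to 0 disables auto-disconnection of idle clients, and the `max` value here is an arbitrary example):

```javascript
// Sketch of the idle-timeout workaround: never reap idle clients,
// so the pool does not have to warm up from zero after an idle period.
const poolConfig = {
  max: 10,              // example upper bound on pool size
  idleTimeoutMillis: 0, // 0 disables idle-client disconnection in pg
};

// Hedged usage (requires the `pg` package):
// const { Pool } = require('pg');
// const pool = new Pool(poolConfig);
```

The trade-off, as noted above, is that every connection ever opened (up to `max`) stays open forever, rather than the pool maintaining a true minimum.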

gajus commented 4 years ago

For a bit of context, there used to be a min setting: https://github.com/brianc/node-pg-pool/issues/104

gajus commented 4 years ago

You could do something like this:

// serializeError comes from the `serialize-error` npm package.
const { serializeError } = require('serialize-error');

// Open a connection and immediately release it back to the pool,
// leaving it idle and ready for the next request.
const createIdleConnection = (log, pool) => {
  pool
    .connect()
    .then((connection) => {
      return connection.release();
    })
    .catch((error) => {
      log.error({
        error: serializeError(error),
      }, 'connection could not be added to the pool');
    });
};

const managePoolSize = (log, pool, minimumPoolSize: number) => {
  let connectionCount = 0;

  pool.on('connect', () => {
    connectionCount++;
  });

  pool.on('remove', () => {
    connectionCount--;

    // Replenish the pool whenever a connection is removed.
    // eslint-disable-next-line no-use-before-define
    provision();
  });

  // Open as many idle connections as needed to reach the minimum.
  const provision = () => {
    let missingConnectionCount = Math.max(0, minimumPoolSize - connectionCount);

    while (missingConnectionCount-- > 0) {
      createIdleConnection(log, pool);
    }
  };

  provision();

  // Re-check periodically; unref() keeps the interval from
  // holding the Node process open on its own.
  const timeoutId = setInterval(provision, 250);

  // $FlowFixMe
  timeoutId.unref();
};

gajus commented 4 years ago

For what it is worth, this is how I ended up implementing this logic.

https://github.com/gajus/slonik/commit/151bc84c5931023f77663b25baf22fc9a187a44e

gajus commented 4 years ago

That implementation was flawed and was later removed. In the end, I found little value in proactively provisioning connections.

djcuvcuv commented 4 years ago

@gajus Thanks a lot for your help here. I think that logic could work well in my app. However, the good news is that the traffic patterns in my live production app are such that, by simply increasing the idle timeout to something on the order of 15-20s (roughly 2x the default), the pool seems to always have a few connections available for new requests. Given that, together with your explicit solutions above, I think this issue can be closed.

Thanks again!