redis / go-redis

Redis Go client
https://redis.uptrace.dev
BSD 2-Clause "Simplified" License

Proposal: Optionally allow for pubsub connections to be tracked in pool #2867

Open vtermanis opened 6 months ago

vtermanis commented 6 months ago

(Let me know if improvement/feature suggestions are better placed in Discussions - I wasn't sure. Note: there is also a proposed implementation linked within.)

Discussed version: v9.4.0

Dear go-redis team,

We're using your client library and Redis for fan-in (RPUSH + BLPOP) and fan-out (PUBLISH + PSUBSCRIBE) operations. The listening ends of these operations can be long-running and the total number of fan-in & fan-out listeners can vary.
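
For context, here is a minimal sketch of those two patterns with go-redis v9; the address, key and pattern names are illustrative, not from our actual setup:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		PoolSize: 100, // caps pooled (non-pubsub) connections
	})

	// Fan-in: producers RPUSH onto a list, a long-running listener BLPOPs.
	go func() {
		for {
			// timeout 0 blocks until an element arrives
			vals, err := rdb.BLPop(ctx, 0, "jobs").Result()
			if err != nil {
				return
			}
			fmt.Println("job:", vals[1]) // vals[0] is the key name
		}
	}()

	// Fan-out: publishers PUBLISH, a long-running listener PSUBSCRIBEs.
	// This opens a dedicated connection outside the pool's accounting.
	pubsub := rdb.PSubscribe(ctx, "events.*")
	defer pubsub.Close()
	for msg := range pubsub.Channel() {
		fmt.Println("event:", msg.Channel, msg.Payload)
	}
}
```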

For the short/non-blocking operations and BLPOP the pool is ideal: we set PoolSize to ensure that we limit the total number of connections (so that, across multiple services, we stay strictly below Redis' own maxclients limit). With PSUBSCRIBE, however, ensuring a ceiling on the total number of Redis connections is harder:

  1. We can set MaxActiveConns in addition to PoolSize, but its effect appears to fall short:

    • Whilst there are fewer than MaxActiveConns non-pubsub connections, one can create an unbounded number of pubsub connections (via newConn())
    • Once MaxActiveConns has been reached, one can no longer create any new connections at all
  2. We can use our own semaphore to restrict the total number of pubsub connections (a level above go-redis; see the sketch after this list), but this doesn't work well, for the following reasons:

    • The pool keeps idle connections for re-use (MaxIdleConns). Even if this is configured to be a subset of PoolSize, the total number of available connections still cannot be shared between pubsub and non-pubsub use.
    • One can instead set MaxIdleConns to zero, but that of course incurs a significant performance hit, since the pool is then only used as a semaphore and each new request requires a new connection to be created.
  3. The Dialer (and other hooks) do not provide enough context to allow for external pooling (especially since the internal logic around whether to keep a connection alive or not is not visible there)
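
As referenced in point 2, here is a minimal sketch of the semaphore workaround; the pubsubLimiter type and its limit are our own invention layered above go-redis, not part of its API:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// pubsubLimiter caps concurrent pubsub connections with a semaphore,
// because PoolSize does not account for them.
type pubsubLimiter struct{ sem chan struct{} }

func newPubsubLimiter(max int) *pubsubLimiter {
	return &pubsubLimiter{sem: make(chan struct{}, max)}
}

// PSubscribe blocks until a slot is free, then opens the subscription.
// The caller must invoke release after closing the returned PubSub.
func (l *pubsubLimiter) PSubscribe(ctx context.Context, rdb *redis.Client, patterns ...string) (*redis.PubSub, func(), error) {
	select {
	case l.sem <- struct{}{}:
	case <-ctx.Done():
		return nil, nil, ctx.Err()
	}
	return rdb.PSubscribe(ctx, patterns...), func() { <-l.sem }, nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	lim := newPubsubLimiter(10) // at most 10 pubsub connections

	ps, release, err := lim.PSubscribe(ctx, rdb, "events.*")
	if err != nil {
		panic(err)
	}
	defer release()
	defer ps.Close()
	for msg := range ps.Channel() {
		fmt.Println(msg.Channel, msg.Payload)
	}
}
```

Even with this in place, the semaphore budget and PoolSize remain two independent limits that cannot be shared, which is exactly the shortcoming described above.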


I appreciate that the reason pubsub does not use connections from the pool is that those connections have to be closed after use. I believe it would nevertheless be really useful to be able to share a single connection budget between pubsub and non-pubsub operations.

Would you be willing to consider an optional (off by default) approach which shares the pool as described above? In essence this could mean, e.g. (a rough sketch follows the list):

  1. Expose a new option (e.g. PubsubFromPool) which is off by default. When set, the following points apply:
  2. Connections for pubsub are retrieved via pool.Get() (as with other operations)
  3. Finished pubsub connections are released via pool.Remove() (i.e. not returned)
  4. Does not apply to the cluster client type (should it?)
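
To make the shape of this concrete, here is a rough sketch of where the option would branch. The types below are minimal stand-ins we invented to mirror the internal pool API (pool.Get()/pool.Remove()) mentioned above; they are not actual go-redis code:

```go
package sketch

import "context"

// Minimal stand-ins mirroring go-redis' internal pool API.
type Conn struct{}

type Pooler interface {
	Get(ctx context.Context) (*Conn, error)
	Remove(ctx context.Context, cn *Conn, reason error)
}

type clientSketch struct {
	opt      struct{ PubsubFromPool bool }             // the proposed option
	connPool Pooler                                    // shared connection pool
	dial     func(ctx context.Context) (*Conn, error)  // stands in for newConn()
}

// newPubSubConn shows where the proposed option would branch.
func (c *clientSketch) newPubSubConn(ctx context.Context) (*Conn, error) {
	if !c.opt.PubsubFromPool {
		// Current behaviour: a dedicated connection the pool does not count.
		return c.dial(ctx)
	}
	// Proposed behaviour (point 2): draw from the shared pool, so each
	// pubsub connection consumes one PoolSize slot.
	return c.connPool.Get(ctx)
}

// releasePubSubConn implements point 3: pubsub connections cannot be re-used
// for regular commands, so they are removed (freeing the slot) rather than
// returned to the pool.
func (c *clientSketch) releasePubSubConn(ctx context.Context, cn *Conn) {
	c.connPool.Remove(ctx, cn, nil)
}
```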

The trade-offs with this option enabled (up to the user to weigh) would be that pubsub and non-pubsub operations compete for the same PoolSize budget, and that finished pubsub connections are discarded rather than re-used.

If you were interested, we'd be willing to contribute such an option (with the addition of some tests, of course).