mediocregopher / radix.v2

Redis client for Go
http://godoc.org/github.com/mediocregopher/radix.v2
MIT License

MaxConns for Pool #4

Closed: elithrar closed this issue 8 years ago

elithrar commented 9 years ago

Pool currently provides a size parameter that sets an upper limit on the number of idle connections, but it doesn't appear to set an upper bound on the number of connections that might be dialled under pool.Get.

Redis will bounce connections above its connection limit, although from what I can discern p.df(p.Network, p.Addr) won't time out (by default) and will therefore block until it connects.

It might be worthwhile to instead block locally until the number of open connections drops below the maximum, rather than attempting to open new ones over the limit.
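For context, here's roughly the pattern I'm worried about (a minimal sketch, assuming the pool.New / Get / Put signatures from the godoc; the address, pool size, and goroutine count are just placeholders):

```go
package main

import (
	"log"
	"sync"

	"github.com/mediocregopher/radix.v2/pool"
)

func main() {
	// Pool of 10 idle connections.
	p, err := pool.New("tcp", "localhost:6379", 10)
	if err != nil {
		log.Fatal(err)
	}

	var wg sync.WaitGroup
	// 50 concurrent Gets: anything beyond the 10 idle connections appears to
	// dial a brand-new connection rather than wait for one to be returned.
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			conn, err := p.Get()
			if err != nil {
				log.Println("get:", err) // hosted providers may reject these extra dials
				return
			}
			defer p.Put(conn)
			if err := conn.Cmd("PING").Err; err != nil {
				log.Println("ping:", err)
			}
		}()
	}
	wg.Wait()
}
```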

mediocregopher commented 9 years ago

I'm sorry it's taken so long to respond; for some reason GitHub decided I shouldn't receive notifications for issues on the repo >_<

As to your issue: to my knowledge there's no way to block on a channel based on its current size. I'm not even certain the length of a channel could be checked in a thread-safe way without use of a lock or another channel.

One possible alternative would be for Get to not open new connections at all, but instead to check the channel and loop with a small sleep if the channel is empty. The problem is that this doesn't handle the case of connections in other routines being closed for one reason or another and never being returned to the pool. If Get were to have this behavior there would need to be some mechanism by which dead connections are replaced, something which happens implicitly in the current setup. I don't believe Put should do the replacing, since we don't want it to block on making a new connection; it would have to be a new goroutine spawned by Put. Now we're in the territory of a package spawning goroutines the user didn't necessarily expect, with all the tradeoffs that go with that.
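To make the tradeoff concrete, here's very roughly what that loop-and-sleep Get would look like (purely illustrative; blockingPool, idle, and getBlocking are stand-in names, not the real Pool internals):

```go
package sketch

import (
	"time"

	"github.com/mediocregopher/radix.v2/redis"
)

// blockingPool is a stand-in for the real Pool, just to show the shape of the
// loop-and-sleep Get being discussed.
type blockingPool struct {
	idle chan *redis.Client // channel of idle connections
}

// getBlocking never dials; it polls the idle channel until a connection shows up.
func (p *blockingPool) getBlocking() *redis.Client {
	for {
		select {
		case conn := <-p.idle:
			return conn
		default:
			// Nothing idle right now: sleep briefly and try again. If a
			// connection died in another goroutine and was never returned,
			// this loop spins forever unless something else replaces it.
			time.Sleep(10 * time.Millisecond)
		}
	}
}
```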

I'm not necessarily opposed to the idea, but I don't believe it will be a straightforward fix by any means. Could you perhaps expand on why this behavior is valuable to you? I personally have never wanted it, nor has anyone I've worked with or talked to, so I'm curious where you're coming from.

elithrar commented 9 years ago

Yeah, I definitely understand the limitations of channels (as they stand) for this.

The major use-case here is when using hosted Redis instances (Heroku, Compose, Redis Labs) that set a limit on the number of concurrent connections you can make.

The current behavior (trying to create new connections above the bound and then failing) isn't terrible, but I'm curious to see if there's appetite for an alternative.


mediocregopher commented 8 years ago

I think for that use-case even having a limit on the pool size is really just moving the goalposts. Either way there is some kind of limit on the number of connections to Redis that you're hitting, and it's interrupting normal program flow one way or another. For this case I think it would be better to investigate why you're hitting these limits at all and try to prevent it (either on the application side, or with Redis Cluster).

Sorry, that's kind of a glib and unhelpful answer; just giving my perspective. As a stop-gap solution it would be pretty straightforward to write a wrapper around pool.Get which detects the error returned when Heroku hits the limit and simply sleeps and tries again later. You would have the blocking behavior you want, and probably more control over it than you would get if it were baked in.
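Something along these lines, for example (a sketch; getWithWait is just an illustrative name, and the exact error text your provider returns is an assumption, so match on whatever you actually see):

```go
package redisutil

import (
	"strings"
	"time"

	"github.com/mediocregopher/radix.v2/pool"
	"github.com/mediocregopher/radix.v2/redis"
)

// getWithWait wraps pool.Get: if the error looks like the provider's
// connection-limit rejection, sleep and retry instead of failing immediately.
func getWithWait(p *pool.Pool, wait time.Duration) (*redis.Client, error) {
	for {
		conn, err := p.Get()
		if err == nil {
			return conn, nil
		}
		if !strings.Contains(err.Error(), "max number of clients reached") {
			return nil, err // some other failure; give up
		}
		time.Sleep(wait)
	}
}
```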

elithrar commented 8 years ago

Thanks. Note that I'm not hitting the limit, and obviously the response would be to upgrade the plan if this were a regular occurrence, but I'm also thinking defensively here.

The best approach looks like catching the error and retrying 1-2 times before bailing out entirely.
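i.e. something like this on my side (a sketch along the same lines as above; the attempt count, error matching, and getWithRetry name are placeholders):

```go
package redisutil

import (
	"strings"
	"time"

	"github.com/mediocregopher/radix.v2/pool"
	"github.com/mediocregopher/radix.v2/redis"
)

// getWithRetry tries pool.Get a bounded number of times on the
// connection-limit error, then gives up and surfaces the error.
func getWithRetry(p *pool.Pool, attempts int, wait time.Duration) (*redis.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var conn *redis.Client
		if conn, err = p.Get(); err == nil {
			return conn, nil
		}
		if !strings.Contains(err.Error(), "max number of clients reached") {
			break // a different error; don't bother retrying
		}
		time.Sleep(wait)
	}
	return nil, err
}
```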


mediocregopher commented 8 years ago

I'll think about whether there might be a cleaner way to do this, but for now I think the wrapper is the way to go. I'm gonna go ahead and close this; feel free to reopen if you have any other ideas or run into further problems :)