GoogleCodeExporter opened this issue 9 years ago
I think this would be a very useful feature. I'd be willing to work on it, if
it's considered an acceptable addition.
Original comment by dave.pet...@gmail.com
on 8 Sep 2011 at 7:09
I'd definitely appreciate it!
Original comment by jwillp@gmail.com
on 9 Sep 2011 at 12:07
I've added a new feature that lets you specify a limit on the number of pending
objects in the client's "reply" linked list (pending outbound messages). It
adds a new server-level config parameter, "maxclientqueue", with a default of 0
(disabled).
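For illustration, the proposed setting might look like this in redis.conf; the parameter name comes from the comment above, while the example value of 10000 is arbitrary and not taken from the patch:

```
# Maximum number of pending reply objects queued per client.
# 0 (the default) disables the limit.
maxclientqueue 10000
```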
When this setting is enabled and a client reads replies too slowly to keep up
with the server, the server drops the client's connection once the maximum
number of messages has accumulated on the server side. It's not perfect:
ideally you would limit the client based on the number of bytes queued up. But
this setting is intended as a safety measure (like a shear pin), and actually
counting and maintaining the total of enqueued bytes seems like overkill when
we already have an object count in the linked-list structure.
When a client exceeds this limit, the server also logs a warning message
indicating that it has dropped a client due to this overflow. That provides a
positive signal if this setting is in use but set too low, or if clients are
frequently too slow to keep up. (There is also a low-frequency cleanup pass,
roughly every 30 seconds, that catches clients which have overflowed but
haven't yet been dropped by another write.)
My github branch for this patch is here:
https://github.com/willp/redis/tree/willp_issue525_memory
And the diff is attached as a patch as well.
Original comment by jwillp@gmail.com
on 21 Dec 2011 at 4:16
Thanks for doing this!
I think your solution is very reasonable, and probably the simplest
out of the alternatives.
Original comment by dave.pet...@gmail.com
on 21 Dec 2011 at 4:23
Original issue reported on code.google.com by jwillp@gmail.com on 17 Apr 2011 at 6:12