We shouldn't be sending really large mGets to the memcached servers: each one consumes a lot of memory in the read/response buffers, and that memory stays allocated for the life of the connection. This really adds up as each server thread allocates for the large payload.
So instead of wasting memory on the memcached servers that would be better spent on item storage, we split the mGets into more reasonable batches of 1000 keys each.
Even 1000 is pretty low in the grand scheme; the problem really shows itself at something like 50k or 200k key lookups, and that many keys in a single lookup is almost always a sign of buggy calling code anyway.
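A minimal sketch of the batching, assuming a client that exposes a pymemcache-style `get_many(keys)` method (the client class and method name are assumptions, not the actual client used here); the 1000-key batch size matches the limit above:

```python
def chunked(keys, size=1000):
    """Yield successive slices of `keys`, each at most `size` long."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def batched_multi_get(client, keys, batch_size=1000):
    """Fetch all keys, but issue one mGet per batch instead of one
    giant mGet, keeping the server-side response buffers small."""
    results = {}
    for batch in chunked(keys, batch_size):
        # get_many returns a dict of the keys that were found
        results.update(client.get_many(batch))
    return results
```

The caller sees the same merged result dict as before; only the wire traffic changes, from one huge request to several bounded ones.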