card323 / redis

Automatically exported from code.google.com/p/redis

Support for multiple POP/PUSH in lists #623

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Hi,

for a prototype I am preparing with Redis I have found that it would be useful to add support for "multi pop" and "multi push" style commands.

Obviously it is possible to implement this using WATCH / MULTI / EXEC. The problem arises when contention is high on the same key (which can happen).

Redis ends up processing lots of commands that will never be successfully executed, and performance degrades quite a lot.

Thanks,

Original issue reported on code.google.com by naranj...@gmail.com on 4 Aug 2011 at 8:14

GoogleCodeExporter commented 8 years ago
Multi-push and multi-pop don't need to use WATCH. Pushing multiple elements onto a list can be done with a simple MULTI/EXEC block containing a number of LPUSH/RPUSH commands, or, with Redis 2.4, with a single LPUSH call taking a variable number of arguments equal to the elements you want to push. Popping multiple elements can be done in the same way; there is no need to use WATCH. Another way to pop a number of elements is by using LRANGE and LTRIM. For instance: MULTI; LRANGE list 0 9; LTRIM list 10 -1; EXEC; pops the first 10 elements off a list.
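
A minimal sketch of that LRANGE + LTRIM approach, written here with the redis-py client (the client library, key name, and function name are illustrative assumptions, not part of this issue):

```python
import redis

r = redis.Redis()

def pop_first_n(key, n):
    """Pop up to n elements from the head of a list."""
    # MULTI/EXEC: both commands run as one transaction, so no other
    # client can modify the list between the LRANGE and the LTRIM.
    pipe = r.pipeline(transaction=True)
    pipe.lrange(key, 0, n - 1)   # read the first n elements
    pipe.ltrim(key, n, -1)       # remove them from the list
    elements, _ = pipe.execute()
    return elements              # may hold fewer than n if the list was short
```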

Original comment by pcnoordh...@gmail.com on 4 Aug 2011 at 8:23

GoogleCodeExporter commented 8 years ago
Hi,

Yes, but what happens if you want it to fail if there aren't enough values?

Let's imagine a situation where you have a pool of objects and your clients want to retrieve a variable number of them, and you want the operation to fail if there aren't enough.

The only way I can see to implement this right now is using WATCH (if there is another way, let me know). I already tried both alternatives (LRANGE + LTRIM and LPOP), but I cannot figure out how to make them fail if for some reason someone modifies the data (which, as I said, would affect the number of initial users).
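
For reference, a hedged sketch of this WATCH-based version, again with redis-py (names are illustrative): it fails when the list holds fewer than n elements, and retries if another client modifies the key between the check and the EXEC.

```python
import redis

r = redis.Redis()

def pop_exactly_n(key, n):
    """Pop exactly n elements or raise; retry on concurrent modification."""
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)              # EXEC aborts if the key changes
                if pipe.llen(key) < n:       # immediate-mode read after WATCH
                    pipe.unwatch()
                    raise ValueError("not enough elements in %r" % key)
                pipe.multi()                 # start queuing the transaction
                pipe.lrange(key, 0, n - 1)
                pipe.ltrim(key, n, -1)
                elements, _ = pipe.execute()
                return elements
            except redis.WatchError:
                continue                     # key changed under us; try again
```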

Thanks,

Original comment by naranj...@gmail.com on 4 Aug 2011 at 9:59

GoogleCodeExporter commented 8 years ago
For the LPUSH you're right, I was just looking at an older version.

For the LPOP, obviously there are always other approaches that can be used, for example skipping the WATCH, checking the number of returned values, and if it is lower than the number we want, putting the values back again. Anyway, that's just a workaround and implies certain risks.

With that approach we would not penalize the latency of requests that are not likely to fail (only the last ones should be at risk of failing).
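
A sketch of that workaround, under the same assumptions as the earlier snippets (redis-py, illustrative names): pop optimistically without WATCH, and if too few elements came back, push them back. As noted, other clients may briefly observe the shortened list, which is where the risk lies.

```python
def try_pop_n(r, key, n):
    """Optimistic pop without WATCH: returns n elements or None."""
    pipe = r.pipeline(transaction=True)
    pipe.lrange(key, 0, n - 1)
    pipe.ltrim(key, n, -1)
    elements, _ = pipe.execute()
    if len(elements) < n:
        if elements:
            # "Manual DISCARD": push back what we trimmed, restoring the
            # original order at the head of the list.
            r.lpush(key, *reversed(elements))
        return None
    return elements
```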

Thanks a lot,

Original comment by naranj...@gmail.com on 4 Aug 2011 at 10:11

GoogleCodeExporter commented 8 years ago
In that case the problem is very different. This is indeed only possible with WATCH, and it will not be added as a native command because it is too specific. You can imagine how convoluted the command space would become if conditional operations like this were added. This is exactly the reason for adding the Lua scripting feature in versions post-2.4. To solve this problem you can either use WATCH with versions up to and including 2.4, use Lua scripting on the unstable branch, or (the easiest of all) not care if you receive fewer elements than you requested.
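
For the scripting route, a hedged sketch of what this could look like on a scripting-enabled build (the script and wrapper names are assumptions): the whole check-and-pop runs atomically inside the server, so no WATCH is needed.

```python
import redis

r = redis.Redis()

# Lua runs atomically on the server: check the length, then pop, or fail.
POP_N_OR_FAIL = """
local n = tonumber(ARGV[1])
if redis.call('LLEN', KEYS[1]) < n then
  return redis.error_reply('not enough elements')
end
local elements = redis.call('LRANGE', KEYS[1], 0, n - 1)
redis.call('LTRIM', KEYS[1], n, -1)
return elements
"""

def pop_n_or_fail(key, n):
    # EVAL: 1 key (the list), 1 argument (how many elements to pop).
    return r.eval(POP_N_OR_FAIL, 1, key, n)
```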

Can you tell a little bit more about your use case and why you want the operation to fail when there are fewer elements than you want? If you want to pop a fixed number of elements and the list can contain fewer, you probably push them individually. To solve this at insert time you can do something along the lines of: use two lists, insert individual members onto the first, check the length, and execute a fixed number of RPOPLPUSH operations if the length exceeds the number of elements you want to pop at a time. This moves the WATCH-contention problem to insert time, but makes sure the process that pops either pops the fixed number of elements or nothing at all.
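
A rough sketch of that insert-time batching idea, with the same caveats as before (redis-py, illustrative key names "staging" and "ready", batch size chosen arbitrarily): producers contend on WATCH while inserting, and full batches are moved to a second list that consumers can pop without any length check.

```python
import redis

r = redis.Redis()
BATCH = 10  # fixed number of elements popped per consumer call (assumption)

def insert(item):
    """Push one item; move a full batch to the 'ready' list when available."""
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("staging")
                length = pipe.llen("staging")    # immediate-mode read
                pipe.multi()
                pipe.lpush("staging", item)
                if length + 1 >= BATCH:
                    for _ in range(BATCH):
                        pipe.rpoplpush("staging", "ready")
                pipe.execute()
                return
            except redis.WatchError:
                continue                         # another producer won; retry
```

Consumers would then pop BATCH elements from "ready" with the plain MULTI; LRANGE; LTRIM; EXEC pattern shown earlier, getting either a full batch or nothing.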

Cheers,
Pieter

Original comment by pcnoordh...@gmail.com on 4 Aug 2011 at 10:20

GoogleCodeExporter commented 8 years ago
Hi Pieter,

wow, first of all thanks for the quick reply! Sorry for the poor explanation at the beginning, which created all that confusion. I suggested that command because I thought it could be a common case (maybe it could be, if we omit the failure when the number of elements is not enough).

The algorithm I am implementing is something like:
1. Create all the objects of the pool.
2. Periodically, and at random times, recover a random number of them. The requested number must always be fulfilled.

Elements are only created at "initialization time".

I have created a quick test omitting the WATCH, but I guess I did something wrong because I am getting a better time, just not impressively better (it might be related to the fact that I am sending lots of concurrent operations, 50 clients without a wait, and since Redis uses an event loop those will queue up until each one is done).

I am not completely sure about going for Lua, given that it is not yet released, but I will take a look at it, because doing all that work in one instruction would be just what I am looking for (first, it would be atomic; it would consume much less bandwidth; lock times between nodes would be smaller; and so on).

Another alternative I thought of (but not yet tested) was creating a lock and 
locking before recovering elements from the pool.
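
For completeness, that lock idea could look roughly like the sketch below, using a simple SETNX-based lock with redis-py (illustrative names; real lock handling, such as safe release and renewal, needs more care than shown here):

```python
import time
import uuid

import redis

r = redis.Redis()

def with_pool_lock(fn, timeout=5):
    """Run fn() while holding a simple lock on the pool (sketch only)."""
    token = str(uuid.uuid4())
    while not r.setnx("pool:lock", token):   # try to grab the lock
        time.sleep(0.01)                      # crude wait; not tuned
    r.expire("pool:lock", timeout)            # guard against a crashed holder
    try:
        return fn()
    finally:
        # Naive release: deleting unconditionally can drop someone else's
        # lock if ours already expired; a real implementation checks the token.
        r.delete("pool:lock")
```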

Thanks,

Original comment by naranj...@gmail.com on 4 Aug 2011 at 10:44

GoogleCodeExporter commented 8 years ago
Forget my comment about not getting much better results... I made some foolish 
mistake in something I changed.

Original comment by naranj...@gmail.com on 4 Aug 2011 at 11:09

GoogleCodeExporter commented 8 years ago
Yes, obviously it was my fault... the times without the WATCH instruction and performing a manual DISCARD (meaning a PUSH of what I TRIMmed) are 400 times better... and I assume they would be even better going for Lua (less bandwidth, for example).

Original comment by naranj...@gmail.com on 4 Aug 2011 at 11:23