djveremix / redis

Automatically exported from code.google.com/p/redis

sinterstore with expire drops keys #253

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
I ran the following commands (via redis-cli on an up-to-date copy 
of master, but also get similar results on try.redis-db.com), with 
the following effects:

> sadd foo 1
(integer) 1
> expire foo 10000
(integer) 1
> sinterstore out foo
(integer) 0
> smembers foo
(empty list or set)
> smembers out
(empty list or set)

(with no significant delay between the commands).

I would have expected the sinterstore command to return "(integer) 1", and the two smembers commands to return "1". Without the expire, this is what happens.

I could understand the sinterstore resulting in "out" being either empty, or having a TTL set to the same as "foo" (and, in general, to the earliest TTL of the sets used in making the intersection), but I can't see how "foo" becoming unset as a result of the sinterstore command makes sense, so I'm guessing it's a bug.
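For what it's worth, the observed behaviour can be reproduced with a tiny in-memory model (an illustrative Python sketch of my own, not Redis code; the `MiniRedis` class and its rule that volatile source keys are dropped before a composite write runs are assumptions inferred from the transcript above):

```python
# Illustrative model (not Redis itself) of the old semantics where any
# volatile key touched by a write command -- even as a read-only source --
# is deleted before the command runs.
class MiniRedis:
    def __init__(self):
        self.data = {}         # key -> set of members
        self.volatile = set()  # keys that have an EXPIRE set

    def sadd(self, key, member):
        self.data.setdefault(key, set()).add(member)
        return 1

    def expire(self, key, seconds):
        if key in self.data:
            self.volatile.add(key)
            return 1
        return 0

    def sinterstore(self, dest, *sources):
        # Assumed old rule: volatile source keys are cleared up front.
        for k in sources:
            if k in self.volatile:
                self.data.pop(k, None)
                self.volatile.discard(k)
        result = None
        for k in sources:
            s = self.data.get(k, set())
            result = s if result is None else result & s
        if result:
            self.data[dest] = set(result)
        else:
            self.data.pop(dest, None)  # empty result removes the destination
        return len(result or ())

r = MiniRedis()
r.sadd("foo", "1")
r.expire("foo", 10000)
print(r.sinterstore("out", "foo"))  # 0, matching the transcript
print(r.smembers("foo") if hasattr(r, "smembers") else r.data.get("foo"))
```

Without the `expire` call, the same model returns 1 and leaves both sets populated, matching the non-expiring case described above.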

Original issue reported on code.google.com by boulton.rj@gmail.com on 1 Jun 2010 at 11:36

GoogleCodeExporter commented 9 years ago
This is in the docs.  See the FAQ on 
http://code.google.com/p/redis/wiki/ExpireCommand

Original comment by josiah.c...@gmail.com on 4 Jun 2010 at 3:31

GoogleCodeExporter commented 9 years ago
I'd read the docs, but I don't think they're clear on this. Two things:

- "sinterstore out foo" shouldn't be a write operation on foo (it should only need to read it), but causes it to be cleared. Unless I'm missing something, the docs only say "basically a volatile key is destroyed when it is target of a write operation".

- The docs don't cover what happens when a key with an expire set is used as source data for an operation which writes a new key. It looks like all source keys (with expire set) involved in the operation get cleared, and then the operation is performed, but this isn't specified (and was quite surprising to me).

Original comment by boulton.rj@gmail.com on 4 Jun 2010 at 6:40

GoogleCodeExporter commented 9 years ago
Your observation is right, in that volatile keys being used as a source for composite operations will be cleared on read. This is done because of concurrency issues in replication. Imagine a key that is about to expire while the composite operation is run against it. On a slave node, this key might already be expired, which leaves you with a desync in your dataset. I'll check the documentation on EXPIRE and see if there is anything that needs to be added. Because this behavior is expected, I'm closing the issue.
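The race described above can be sketched concretely (a hypothetical Python model with logical timestamps, not Redis code; `sinterstore_naive` is my name for what would happen if the volatile source key were read rather than cleared first):

```python
# Hypothetical model of the replication race: a SINTERSTORE-like operation
# reads a volatile source key that is lazily expired against each node's
# own local clock.
def sinterstore_naive(db, now, dest, src, expires_at):
    # Read the source set only if it has not yet expired on this node.
    src_val = db.get(src, set()) if now < expires_at else set()
    if src_val:
        db[dest] = set(src_val)
    else:
        db.pop(dest, None)  # empty intersection removes the destination
    return len(src_val)

expires_at = 100                  # logical expiry time of "foo"
master = {"foo": {"1"}}
replica = {"foo": {"1"}}

# The master executes just before expiry; the replica replays the same
# command just after it. The two datasets now disagree about "out".
sinterstore_naive(master, 99, "out", "foo", expires_at)
sinterstore_naive(replica, 101, "out", "foo", expires_at)
print(master.get("out"), replica.get("out"))
```

Clearing the volatile source key up front on the master (and replicating that deletion) forces both nodes to compute the same, empty result regardless of timing, which is the trade-off this issue describes.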

Original comment by pcnoordh...@gmail.com on 4 Jun 2010 at 7:34

GoogleCodeExporter commented 9 years ago
Thanks for the explanation - that makes sense to me.

I'd suggest modifying the ExpireCommand wiki page slightly to indicate this. Perhaps change the first paragraph of the "Restrictions with write operations against volatile keys" section from saying "basically a volatile key is destroyed when it is target of a write operation." to "basically a volatile key is destroyed when it is either the target or source of a write operation.", and add a section saying:

"Even when the volatile key is not modified as part of a write operation, if it is read in a composite write operation (such as SINTERSTORE) it will be cleared at the start of the operation. This is done to avoid concurrency issues in replication. Imagine a key that is about to expire and the composite operation is run against it. On a slave node, this key might already be expired, which leaves you with a desync in your dataset."

Original comment by boulton.rj@gmail.com on 4 Jun 2010 at 2:44

GoogleCodeExporter commented 9 years ago
Just updated the documentation for EXPIRE with the above note.

Original comment by sove...@gmail.com on 4 Jun 2010 at 3:18