Hello, my first guess is that the hash key had an EXPIRE set. This is normal behavior, as write operations against expiring keys will first delete the old key. Am I right? :)
Cheers,
Salvatore
Original comment by anti...@gmail.com
on 29 Apr 2010 at 7:07
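For what it's worth, whether a key is volatile can be checked with the TTL command (the key name here is illustrative); a reply of -1 means no expire is set:

redis> ttl foo
(integer) 86394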
The expire had been set at 24 hours from creation time, and the hash had just been created. I had preset it because that is as long as I wanted the key to remain.
It seems counterintuitive. I would expect that setting the expire wouldn't change how other commands behave on those objects, and that once the time is up the key would simply disappear.
Original comment by pete%rum...@gtempaccount.com
on 29 Apr 2010 at 11:48
I second the bug report.
redis> incr key
(integer) 7
redis> expire key 10000
(integer) 1
redis> get key
"7"
redis> incr key
(integer) 1
This is very counterintuitive, if not simply incorrect. The fact that a key has an expire set shouldn't change the behavior of other commands.
Original comment by dialt...@gmail.com
on 30 Apr 2010 at 12:49
Hey guys, read the EXPIRE man page for a very good reason why this is the way it is ;)
It is a fundamental property needed to ensure that the AOF and replication can be mixed with expires; otherwise you start getting inconsistencies everywhere.
Original comment by anti...@gmail.com
on 30 Apr 2010 at 7:02
I don't see how this can be related... An INCR is equivalent to a GET and a SET of the value plus one, except atomic. How is the value associated with the key related in any way to replication or the expire? The manpage states:
"When the key is set to a new value using the SET command, the INCR command or
any other command that
modify the value stored at key the timeout is removed from the key and the key
becomes non volatile."
Which is perfectly fine, but it doesn't say that the value needs to be reset, and I don't see why it should be. If a SET is allowed to set a new value on a key with an expire, then an INCR should likewise be allowed to increment a value with an expire.
Original comment by dialt...@gmail.com
on 30 Apr 2010 at 7:52
Hey Dialtone, here's a practical example:
HSET foo field 100
EXPIRE foo 1
... then the client waits 10 seconds ...
HINCRBY foo field 1
On the live server the key expires during the pause, so "field" ends up with value 1.
Now let's run the same sequence from the AOF file, where there are no pauses:
HSET foo field 100
EXPIRE foo 1
HINCRBY foo field 1
Without the delete-on-write rule for keys with an expire set, the final result would be "field" set to 101: inconsistent with the live run.
The same is true the other way around: the client may send its commands without delays, so the field will be 101, but the replication link may be slow for a few seconds due to network problems, so on the slave the key will expire and the field will end up with a value of 1.
Time-dependent behavior is bad...
Cheers,
Salvatore
Original comment by anti...@gmail.com
on 30 Apr 2010 at 8:40
I agree that consistency is for the best. I think this made a lot of sense with the smaller data types: most of the time when you change them you're replacing them, so it is not a big deal. With hashes, though, people will treat them more like objects and want to keep their contents while changing parts.
In my case I know that I only want the object to live 24 hours, so the simplest approach for me is to set the expire right after I create the hash. But I also need to update the hash to keep some current counts. Otherwise I need a background job to go around and clear things out, or to split the changing fields out of the hash so that they don't mess it up.
I guess I can maintain a zset with my own expire dates and just have a job that deletes them out.
Thoughts?
-pete
Original comment by pete%rum...@gtempaccount.com
on 1 May 2010 at 12:56
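A minimal sketch of the zset-based pattern pete describes, assuming a sorted set named expirations scored by Unix expiry timestamps (the key names and timestamps are illustrative); a periodic job runs the last three commands, using the current time as the score bound:

redis> zadd expirations 1272844800 myhash
(integer) 1
redis> zrangebyscore expirations -inf 1272844800
1) "myhash"
redis> del myhash
(integer) 1
redis> zremrangebyscore expirations -inf 1272844800
(integer) 1

The trade-off is that the key stays readable between cleanup runs, so expiry is only as precise as the job's schedule.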
This was definitely fixed in Redis master (the first stable release to have the fix will be 2.2). In Redis master, writes against expiring keys are supported without issues and with perfect consistency between master and slaves.
Original comment by anti...@gmail.com
on 30 Aug 2010 at 11:01
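For comparison, running dialtone's sequence against Redis 2.2 or later gives the expected result, with the TTL preserved across the write (the values shown are illustrative):

redis> incr key
(integer) 7
redis> expire key 10000
(integer) 1
redis> incr key
(integer) 8
redis> ttl key
(integer) 9997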
Original issue reported on code.google.com by
pete%rum...@gtempaccount.com
on 29 Apr 2010 at 3:33