djveremix / redis

Automatically exported from code.google.com/p/redis

hincrby is clearing out other hash elements #232

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Did an HSET to create a hash through redis-rb
2. Did an HINCRBY and it returned success
3. Did an HGETALL and the hash contained only the incremented field

What is the expected output? What do you see instead?
redis> hgetall campaign:15
1. "units"
2. "1"
3. "cost"
4. "3"
5. "end_date"
6. "1272510000"
7. "start_date"
8. "1272503640"
9. "show_id"
10. "1"
11. "created_at"
12. "1272511602"
redis> hincrby campaign:15 units 1
(integer) 1
redis> hgetall campaign:15
1. "units"
2. "1"

Should see:

1. "units"
2. "2"
3. "cost"
4. "3"
5. "end_date"
6. "1272510000"
7. "start_date"
8. "1272503640"
9. "show_id"
10. "1"
11. "created_at"
12. "1272511602"

What version of the product are you using? On what operating system?
commit 8ff6a48b99dd5e706f542be848a62beaf995229b
Author: antirez <antirez@metal.(none)>
Date:   Tue Apr 27 18:06:52 2010 +0200

Mac OS X Snow Leopard

Please provide any additional information below.

I have tried to reproduce by creating the hash in redis-cli and haven't been able to.

I did create the hash in redis-rb using redis.multi

I haven't had a chance to try the MULTI through redis-cli yet.

Also, HDEL seems to delete the whole hash.

HREM is in the documentation but doesn't exist.

Original issue reported on code.google.com by pete%rum...@gtempaccount.com on 29 Apr 2010 at 3:33

GoogleCodeExporter commented 9 years ago
Hello, my first guess is that the hash key had an EXPIRE set. This is normal behavior, as write operations against expiring keys start by deleting the old key. Am I right? :)

Cheers,
Salvatore

Original comment by anti...@gmail.com on 29 Apr 2010 at 7:07

GoogleCodeExporter commented 9 years ago
The expire had been set at 24 hours from creation time, and the hash had just been created.

I had preset it because that is as long as I wanted the key to remain.

It seems counterintuitive. I would expect that setting the expire wouldn't change how other commands affect those objects, and then once the time is up the key would disappear.

Original comment by pete%rum...@gtempaccount.com on 29 Apr 2010 at 11:48

GoogleCodeExporter commented 9 years ago
I second the bug report.

redis> incr key
(integer) 7
redis> expire key 10000
(integer) 1
redis> get key
"7"
redis> incr key
(integer) 1

This is very counterintuitive, if not simply incorrect. The fact that a key has an expire set shouldn't change the behavior of other commands.

Original comment by dialt...@gmail.com on 30 Apr 2010 at 12:49
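The surprising transcript above can be reproduced with a toy model. This is a hedged sketch, not Redis source: a minimal in-memory store implementing the pre-2.2 rule that any write against a key with a TTL deletes the key first, which is exactly why the second INCR restarts from 1.

```python
import time

# Toy key-value store modeling the pre-2.2 Redis rule:
# a write against a volatile (TTL-bearing) key deletes the key first.
class ToyRedis:
    def __init__(self):
        self.data = {}
        self.ttl = {}  # key -> absolute expiry timestamp

    def incr(self, key):
        # Pre-2.2 rule: the write removes the old volatile key entirely.
        if key in self.ttl:
            self.data.pop(key, None)
            self.ttl.pop(key, None)
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

    def expire(self, key, seconds):
        if key not in self.data:
            return 0
        self.ttl[key] = time.time() + seconds
        return 1

    def get(self, key):
        v = self.data.get(key)
        return None if v is None else str(v)

r = ToyRedis()
for _ in range(7):
    r.incr("key")        # value is 7 after the loop
r.expire("key", 10000)   # key is now volatile
print(r.get("key"))      # "7" -- reads are unaffected
print(r.incr("key"))     # 1  -- the write deleted the old key first
```

The same sequence run without the EXPIRE yields 8, which is what both commenters expected.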

GoogleCodeExporter commented 9 years ago
Hey guys, read the EXPIRE man page for a very good reason why this is the way it is ;)
It is a fundamental property needed to ensure that the AOF and replication can be mixed with expires; otherwise you start getting inconsistencies everywhere.

Original comment by anti...@gmail.com on 30 Apr 2010 at 7:02

GoogleCodeExporter commented 9 years ago
I don't see how this can be related... An INCR is equivalent to a GET and SET +1, except atomic. How is the value associated with the key related in any way to replication or the expire?

The man page states:

"When the key is set to a new value using the SET command, the INCR command or any other command that modify the value stored at key the timeout is removed from the key and the key becomes non volatile."

Which is perfectly fine, but it doesn't say that the value needs to be reset, and I don't see why it should. If a SET is allowed to set a new value on a field with an expire, then an INCR should also be allowed to increment a value with an expire.

Original comment by dialt...@gmail.com on 30 Apr 2010 at 7:52

GoogleCodeExporter commented 9 years ago
Hey Dialtone, here's a practical example:

HSET foo field 100
EXPIRE foo 1
... then the client waits 10 seconds ...
HINCRBY foo field 1

Without the delete-on-write rule for expiring keys, we would end up with a field whose value is 1, as the old value expired during the pause.

Now let's run the same sequence from the AOF file, where there are no pauses:

HSET foo field 100
EXPIRE foo 1
HINCRBY foo field 1

Final result: "field" is set to 101.

The same is true the other way around. The client may chat without delays, so the field will be 101, but the replication link may stall for a few seconds due to network problems, so the slave will end up with a field whose value is 1.

Time-dependent behavior is bad...

Cheers,
Salvatore

Original comment by anti...@gmail.com on 30 Apr 2010 at 8:40
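Salvatore's consistency argument can be made concrete with a toy replay. This is a hedged sketch, not Redis code: a hypothetical store *without* the delete-on-write rule, using lazy expiry and an injectable fake clock. The same command log is replayed twice, once with the 10-second client pause and once back-to-back as an AOF replay would run it, and the final values diverge.

```python
# Hypothetical store WITHOUT the delete-on-write rule: keys expire
# lazily on access, using a fake clock so pauses can be simulated.
class LazyExpireStore:
    def __init__(self):
        self.h = {}          # key -> {field: int}
        self.deadline = {}   # key -> expiry time on the fake clock
        self.now = 0

    def _reap(self, key):
        # Lazy expiry: drop the key if its deadline has passed.
        if key in self.deadline and self.now >= self.deadline[key]:
            self.h.pop(key, None)
            self.deadline.pop(key, None)

    def hset(self, key, field, value):
        self._reap(key)
        self.h.setdefault(key, {})[field] = value

    def expire(self, key, seconds):
        self._reap(key)
        if key in self.h:
            self.deadline[key] = self.now + seconds

    def hincrby(self, key, field, delta):
        self._reap(key)
        d = self.h.setdefault(key, {})
        d[field] = d.get(field, 0) + delta
        return d[field]

def run(pause):
    """Replay antirez's command sequence with an optional pause."""
    s = LazyExpireStore()
    s.hset("foo", "field", 100)
    s.expire("foo", 1)
    s.now += pause               # client thinks / network stalls
    return s.hincrby("foo", "field", 1)

print(run(pause=10))  # live client: key expired, field becomes 1
print(run(pause=0))   # AOF replay, no pauses: field becomes 101
```

Two replays of the same command log end with different data, which is the master/slave inconsistency the delete-on-write rule avoided. (Later Redis versions solved this differently, propagating expirations to slaves and the AOF as explicit deletes.)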

GoogleCodeExporter commented 9 years ago
I agree that consistency is for the best. I think this made a lot of sense with the smaller data types: most of the time when you change them you're replacing them, so it is not a big deal. Later, with hashes, people will treat them more like objects and want to keep their contents while changing parts.

In my case I know that I only want the object to live 24 hours, so the simplest approach for me is to set that after I create the hash. But I need to change it as well, to keep some current counts. Otherwise I need a background job to go around and clear things out, or I have to split the changing fields out of the hash so that they don't mess it up.

I guess I can maintain a zset with my own expire dates and just have a job that deletes them.

thoughts?

-pete

Original comment by pete%rum...@gtempaccount.com on 1 May 2010 at 12:56
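Pete's workaround, a ZSET of expiry timestamps plus a cleanup job, was a common pattern at the time. Below is a hedged sketch of the idea; the dicts stand in for real Redis structures so the snippet runs standalone, and all names (`create_campaign`, `reap`, etc.) are hypothetical. With a real client the same logic maps to HSET, ZADD, ZRANGEBYSCORE, DEL, and ZREM.

```python
import time

hashes = {}    # stands in for the application's hash keys
expiries = {}  # stands in for a ZSET: member -> score (expiry timestamp)

def create_campaign(key, fields, ttl_seconds, now=None):
    now = time.time() if now is None else now
    hashes[key] = dict(fields)         # HSET each field
    expiries[key] = now + ttl_seconds  # ZADD expiries <deadline> <key>

def incr_units(key, delta=1):
    # Safe to increment: the TTL lives in the ZSET, not on the hash,
    # so no write ever touches a volatile key.
    hashes[key]["units"] = hashes[key].get("units", 0) + delta
    return hashes[key]["units"]

def reap(now=None):
    # Background job: ZRANGEBYSCORE expiries -inf <now>, then DEL + ZREM.
    now = time.time() if now is None else now
    expired = [k for k, deadline in expiries.items() if deadline <= now]
    for k in expired:
        hashes.pop(k, None)
        expiries.pop(k, None)
    return expired

create_campaign("campaign:15", {"units": 1, "cost": 3},
                ttl_seconds=86400, now=0)
incr_units("campaign:15")  # units -> 2, other fields untouched
reap(now=100)              # nothing expires yet
reap(now=86401)            # campaign:15 removed by the job
```

The trade-off is that expiry is only as timely as the job's schedule, which is usually acceptable for a 24-hour lifetime.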

GoogleCodeExporter commented 9 years ago
This was definitely fixed in Redis master (the first stable release to have this fix will be 2.2). In Redis master, writes against expiring keys are supported without issues and with perfect consistency between master and slaves.

Original comment by anti...@gmail.com on 30 Aug 2010 at 11:01