Lachim / redis

Automatically exported from code.google.com/p/redis

segmentation fault - 2.2.4 - MacOS #533

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
What version of Redis are you using, and on what kind of operating system?

Redis 2.2.4, built via Homebrew
MacOS 10.6.7

What is the problem you are experiencing?

Segmentation fault - probably in BLPOP

What steps will reproduce the problem?

I have a Python script that triggers the issue, though it doesn't happen consistently. The basic pattern is:

1. Start 4 threads that are doing BLPOP calls to a single key
2. Start 4 threads that are doing RPUSH calls to the same key
3. Have the RPUSH threads push a total of 100000 items onto that key list
4. Wait in the main thread until the consumer threads successfully BLPOP all 100000 values

The segfault appeared to occur in step 3: the producer threads were actively writing messages while the consumers were reading; the producers had not finished writing yet.
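For reference, a minimal sketch of this pattern using the redis-py client ("pip install redis"). The key name, per-thread message split, and payload format are illustrative assumptions; this is not the reporter's actual performance_test.py.

import threading
import redis  # redis-py client

KEY = "bench:queue"   # hypothetical key name, not from the original script
PRODUCERS = 4
CONSUMERS = 4
TOTAL = 100000

def producer(count):
    r = redis.Redis()                   # one connection per thread
    for i in range(count):
        r.rpush(KEY, "msg-%d" % i)      # steps 2-3: RPUSH onto the shared list

def consumer(count):
    r = redis.Redis()
    for _ in range(count):
        r.blpop(KEY)                    # step 1: BLPOP blocks until an item arrives

threads = [threading.Thread(target=producer, args=(TOTAL // PRODUCERS,))
           for _ in range(PRODUCERS)]
threads += [threading.Thread(target=consumer, args=(TOTAL // CONSUMERS,))
            for _ in range(CONSUMERS)]
for t in threads:
    t.start()
for t in threads:   # step 4: main thread waits until all 100000 values are pushed and popped
    t.join()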

If it is a crash, can you please paste the stack trace that you can find in the log file or on standard output? This is really useful for us!

[77173] 22 Apr 10:03:11 - 8 clients connected (0 slaves), 1004080 bytes in use
[77173] 22 Apr 10:03:16 - DB 0: 8 keys (0 volatile) in 16 slots HT.
[77173] 22 Apr 10:03:16 - 8 clients connected (0 slaves), 1004160 bytes in use
[77173] 22 Apr 10:03:18 # === ASSERTION FAILED ===
[77173] 22 Apr 10:03:18 # ==> t_list.c:820 'ln != listFirst(clients)' is not true
[77173] 22 Apr 10:03:18 # (forcing SIGSEGV in order to print the stack trace)
[77173] 22 Apr 10:03:18 # ======= Ooops! Redis 2.2.4 got signal: -11- =======
[77173] 22 Apr 10:03:18 # redis_version:2.2.4
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:kqueue
process_id:77173
uptime_in_seconds:11154
uptime_in_days:0
lru_clock:325755
used_cpu_sys:33.59
used_cpu_user:55.48
used_cpu_sys_childrens:0.69
used_cpu_user_childrens:0.41
connected_clients:8
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:1005552
used_memory_human:981.98K
used_memory_rss:33865728
mem_fragmentation_ratio:33.68
use_tcmalloc:0
loading:0
aof_enabled:1
changes_since_last_save:28973
bgsave_in_progress:0
last_save_time:1303491747
bgrewriteaof_in_progress:0
total_connections_received:474
total_commands_processed:1629166
expired_keys:0
evicted_keys:0
keyspace_hits:1542384
keyspace_misses:450008
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
allocation_stats:6=1,7=1,8=345218,9=1901649,10=577477,11=7651,12=108690,13=6731,14=1266522,15=593,16=9
[77173] 22 Apr 10:03:18 # 1   redis-server   0x000000010002b3df 0x0 + 4295144415
[77173] 22 Apr 10:03:18 # 2   ???            0x00007fff00000000 0x0 + 140733193388032
[77173] 22 Apr 10:03:18 # 3   redis-server   0x000000010001b822 0x0 + 4295079970
[77173] 22 Apr 10:03:18 # 4   redis-server   0x000000010001bfb4 0x0 + 4295081908
[77173] 22 Apr 10:03:18 # 5   redis-server   0x0000000100007983 0x0 + 4294998403
[77173] 22 Apr 10:03:18 # 6   redis-server   0x00000001000112b8 0x0 + 4295037624
[77173] 22 Apr 10:03:18 # 7   redis-server   0x0000000100011390 0x0 + 4295037840
[77173] 22 Apr 10:03:18 # 8   redis-server   0x0000000100001ab1 0x0 + 4294974129
[77173] 22 Apr 10:03:18 # 9   redis-server   0x0000000100001dbe 0x0 + 4294974910
[77173] 22 Apr 10:03:18 # 10  redis-server   0x0000000100007598 0x0 + 4294997400
[77173] 22 Apr 10:03:18 # 11  redis-server   0x0000000100000b04 0x0 + 4294970116
[77173] 22 Apr 10:03:18 # 12  ???            0x0000000000000002 0x0 + 2
Segmentation fault

Please provide any additional information below.

I was using the Python redis client on the same MacOS machine.

Original issue reported on code.google.com by jamespco...@gmail.com on 22 Apr 2011 at 5:24

GoogleCodeExporter commented 8 years ago
Thanks for the report, investigating.

Original comment by pcnoordh...@gmail.com on 22 Apr 2011 at 6:52

GoogleCodeExporter commented 8 years ago
Can you provide the script you're using to trigger the segfault?

Original comment by pcnoordh...@gmail.com on 22 Apr 2011 at 7:17

GoogleCodeExporter commented 8 years ago
Sure thing. Apologies that it's not a simpler script. I'm doing an evaluation of Redis as a potential queueing solution, so I wrote some scripts that I'm using to test RabbitMQ, ActiveMQ, HornetQ, and Redis.

BTW, Redis is the front-runner. If we can get this issue sorted out, then we're moving forward with it. Thanks for looking into this issue so quickly!

To run:

pip install redis
pip install stomp.py   (just so imports don't blow up)
export BROKER_TYPE=redis
performance_test.py 4 4 100000

performance_test.py accepts
 - # of producers
 - # of consumers
 - # of messages to queue

4 4 100000 were the inputs I used when I got the segfault.

Original comment by jamespco...@gmail.com on 22 Apr 2011 at 8:01

GoogleCodeExporter commented 8 years ago
Thanks, I was able to reproduce the fault within 5 runs. Now on to solving it ;-).

Original comment by pcnoordh...@gmail.com on 22 Apr 2011 at 8:12

GoogleCodeExporter commented 8 years ago
Fantastic.  I figure with something like this, getting a consistent repro is 
one of the harder parts.  Thanks again for your help.

Original comment by jamespco...@gmail.com on 22 Apr 2011 at 8:14

GoogleCodeExporter commented 8 years ago
Have you had any luck tracking the cause of this one down?  Thanks very much.

Original comment by jamespco...@gmail.com on 13 May 2011 at 8:56

GoogleCodeExporter commented 8 years ago
Hello James, this should be OK now in the 2.2 branch on GitHub. Can you please verify? Thank you.

Original comment by anti...@gmail.com on 13 May 2011 at 9:05

GoogleCodeExporter commented 8 years ago
P.S. This is fixed in 2.2.7 as well.

Original comment by anti...@gmail.com on 13 May 2011 at 9:08

GoogleCodeExporter commented 8 years ago
Fantastic! Thank you very much!

Original comment by jamespco...@gmail.com on 13 May 2011 at 9:56