Lachim / redis

Automatically exported from code.google.com/p/redis

Redis Slave leaking memory #568

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What version of Redis are you using, and on what kind of operating system?
redis 2.2.2, Redhat 4.1.2, Linux

What is the problem you are experiencing?
The Redis slave is leaking memory over time: it is using 15G of resident memory for a 5G DB.

What steps will reproduce the problem?
Unsure.

Do you have an INFO output? Please paste it here.
...
redis_version:2.2.2
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:2733
uptime_in_seconds:5964890
uptime_in_days:69
lru_clock:634964
used_cpu_sys:72340.41
used_cpu_user:15656.33
used_cpu_sys_childrens:0.00
used_cpu_user_childrens:0.00
connected_clients:302
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:5322895096
used_memory_human:4.96G
used_memory_rss:16309473280
mem_fragmentation_ratio:3.06
use_tcmalloc:0
loading:0
aof_enabled:0
changes_since_last_save:96362382
bgsave_in_progress:0
last_save_time:1300618996
bgrewriteaof_in_progress:0
total_connections_received:84827
total_commands_processed:571870493
expired_keys:0
evicted_keys:0
keyspace_hits:450245934
keyspace_misses:43104366
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:1
role:slave
...

If it is a crash, can you please paste the stack trace that you can find in
the log file or on standard output? This is really useful for us!

Please provide any additional information below.
ps auxh | grep redis shows the following:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      2733  1.4 44.4 16099036 15927312 ?   Ss   Mar20 1466:36 
/usr/local/bin/redis-server /deploy/redis_conf/redis5.conf
root      3369  1.9 38.1 13812116 13668436 ?   Ss   Mar20 1930:06 
/usr/local/bin/redis-server /deploy/redis_conf/redis4.conf

The second instance, running with redis4.conf, is using 13GB of resident memory for 
a 5GB database.
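
For reference, ps reports RSS in kilobytes, so the figures above work out to roughly 
(a quick sanity check, matching the 15G/13G numbers quoted earlier):

    15927312 KB / 1024 / 1024 ≈ 15.2 GB  (redis5.conf instance)
    13668436 KB / 1024 / 1024 ≈ 13.0 GB  (redis4.conf instance)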

These slaves have been up for a long time, as can be seen from the uptime in the 
INFO output above.

Please help.

Thanks.

Original issue reported on code.google.com by gauravk...@gmail.com on 28 May 2011 at 12:03

GoogleCodeExporter commented 8 years ago
I read through a few discussions and issues and thought more information was needed 
for this issue.

Some more information:

On the master, for the same data, used_memory_rss is less than 7G:

used_memory_rss:70872650752
mem_fragmentation_ratio:1.16

On the slave: 
used_memory:5322895096
used_memory_human:4.96G
used_memory_rss:16309473280
mem_fragmentation_ratio:3.06
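
For reference, the reported fragmentation ratio is simply resident memory divided by 
the memory Redis itself has allocated, which works out to the 3.06 shown above:

    mem_fragmentation_ratio = used_memory_rss / used_memory
                            = 16309473280 / 5322895096
                            ≈ 3.06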

We had to run SLAVEOF NO ONE followed by SLAVEOF <HOST> <PORT> about 3 times due to 
replication delays on a flaky network; a sketch of that sequence is below.
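
A minimal sketch of that sequence via redis-cli (the <SLAVE_HOST> and <SLAVE_PORT> 
placeholders are illustrative, not from the report):

    redis-cli -h <SLAVE_HOST> -p <SLAVE_PORT> SLAVEOF NO ONE
    redis-cli -h <SLAVE_HOST> -p <SLAVE_PORT> SLAVEOF <HOST> <PORT>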

It would be good to look into this problem, as I think it is genuinely a memory 
leak rather than "it's just how it works".

Thanks,
Gaurav.

Original comment by gauravk...@gmail.com on 3 Jun 2011 at 9:37

GoogleCodeExporter commented 8 years ago
It is not a memory leak, since Redis tracks all allocations, and its view of the 
world is that only 4.96G is allocated. Rather, at some point in time it needed far 
more memory, which caused the allocator to grow the heap and make a couple of 
allocations there, which in turn caused the heap to remain fixed at that high 
limit. Instead of using the default allocator, libc malloc, you could try running 
Redis configured to use jemalloc to see if this alleviates the problem. You can 
find this branch here: 
https://github.com/antirez/redis/commits/2.2-jemalloc-static . You can compile it 
using "USE_JEMALLOC=yes make". Make sure jemalloc is used by checking the 
"mem_allocator" field in the INFO output. There have been reports of jemalloc 
greatly reducing memory fragmentation, and it could be a solution for the problem 
you're seeing here.

Original comment by pcnoordh...@gmail.com on 3 Jun 2011 at 9:46