dsphper / redis

Automatically exported from code.google.com/p/redis
BSD 3-Clause "New" or "Revised" License

Redis greatly exceeds vm-max-memory, even taking keys into account #394

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Use Redis with vm-enabled and a medium-sized DB that doesn't fit entirely
into memory
2. Use it normally under a heavy read-only (no writes) load
3. Watch as resident memory climbs well over vm-max-memory

What is the expected output? What do you see instead?

I would expect the process's memory footprint, once the DB has been loaded from
disk into memory and the swap file, to eventually stop growing, but it continues to grow.

For example, my 425 MB mixed-key .rdb file, when loaded with the following config
(vm-max-memory of only 384 MB), stops at 1.3 GB RSS.  At this point one would
expect we've hit the 384 MB limit on values, so the rest of the memory is
bookkeeping and keys.  Key memory usage shouldn't grow after this point because
I don't do any writes at all.

Now, if I use a client to iterate over every single key, just calling TYPE on
each one (to exercise the VM, paging values in and out), redis-server is at
3.2 GB when it finishes.  Remember:  I've set vm-max-memory to 384 MB.
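
Roughly, the test looked like this (a minimal sketch, assuming a redis-py-style client; only the host and port come from the config below, the rest is illustrative):

# Minimal sketch of the "TYPE on every key" test described above.
# Assumes redis-py; host/port are taken from the config below.
import redis

r = redis.Redis(host='10.8.50.186', port=60000)
for key in r.keys('*'):   # fetch all key names (key names always stay in RAM)
    r.type(key)           # TYPE forces the value to be paged in by the VM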

The same happens if we increase vm-max-memory; I just wanted to use a small
value to illustrate the point more easily.  We've had redis-server in production
with vm-max-memory set to 1.5 GB use over 7 GB of RSS.

What version of the product are you using? On what operating system?

2.0.3 64-bit on Ubuntu 10.04 64-bit, 12 GB RAM.

Please provide any additional information below.

Configuration:

activerehashing yes
appendfsync everysec
appendonly no
bind 10.8.50.186
daemonize no
databases 16
dbfilename dump.rdb
dir /home/brett/
glueoutputbuf yes
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
logfile redis.log
loglevel debug
pidfile redis.pid
port 60000
rdbcompression yes
save 300 10
save 60 10000
save 900 1
timeout 300
vm-enabled yes
vm-max-memory 402653184 # 384MB
vm-max-threads 4
vm-pages 134217728
vm-page-size 32
vm-swap-file /home/brett/redis.swap
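
For reference, the swap geometry these VM settings imply works out as follows (simple arithmetic over the values above, not part of the original report):

# Arithmetic implied by the VM settings above (not from the original report).
vm_page_size  = 32           # bytes per swap page
vm_pages      = 134217728    # number of pages in the swap file
vm_max_memory = 402653184    # in-memory budget for values, in bytes

print(vm_pages * vm_page_size / 2**30)   # 4.0   -> 4 GiB swap file on disk
print(vm_max_memory / 2**20)             # 384.0 -> the 384 MB limit set above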

INFO output after calling TYPE on each key serially.

redis_version:2.0.3
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:1389
uptime_in_seconds:434
uptime_in_days:0
connected_clients:2
connected_slaves:0
blocked_clients:0
used_memory:162978888
used_memory_human:155.43M
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1291259156
bgrewriteaof_in_progress:0
total_connections_received:2
total_commands_processed:134201
expired_keys:0
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:1
role:master
vm_conf_max_memory:402653184
vm_conf_page_size:32
vm_conf_pages:134217728
vm_stats_used_pages:13442775
vm_stats_swapped_objects:122962
vm_stats_swappin_count:134114
vm_stats_swappout_count:257076
vm_stats_io_newjobs_len:0
vm_stats_io_processing_len:0
vm_stats_io_processed_len:0
vm_stats_io_active_threads:0
vm_stats_blocked_clients:0
db0:keys=134200,expires=0

Original issue reported on code.google.com by bretthoe...@gmail.com on 2 Dec 2010 at 3:17

GoogleCodeExporter commented 8 years ago
I am experiencing the same issues with VM. Here are my observations:

1. The Redis memory consumption reported by htop keeps growing steadily. In the 
last 24 hours it grew by almost 3 GB on the master and about 500 MB on the slave.
2. Right now, the only way to reduce Redis memory consumption again is to shut 
Redis down and start it up again.
3. htop reports about 30% more memory than set with vm_conf_max_memory. I have tried 
several different values for vm_conf_max_memory; it is consistently 30% more.
4. Tested with versions 2.0.1 and 2.0.4. Both behave the same.

system: ubuntu lucid 10.04, 64bit, 2.6.32-25-server
redis info:

redis_version:2.0.1
redis_git_sha1:068eb3bf
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:20071
uptime_in_seconds:82787
uptime_in_days:0
connected_clients:97
connected_slaves:1
blocked_clients:0
used_memory:6442272320
used_memory_human:6.00G
changes_since_last_save:70815504
bgsave_in_progress:0
last_save_time:1291203749
bgrewriteaof_in_progress:0
total_connections_received:65150
total_commands_processed:274569814
expired_keys:0
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:1
role:master
vm_conf_max_memory:6442450944
vm_conf_page_size:8192
vm_conf_pages:50304000
vm_stats_used_pages:636877
vm_stats_swapped_objects:514780
vm_stats_swappin_count:352597
vm_stats_swappout_count:867520
vm_stats_io_newjobs_len:0
vm_stats_io_processing_len:0
vm_stats_io_processed_len:0
vm_stats_io_active_threads:0
vm_stats_blocked_clients:0
db2:keys=604877,expires=0
db3:keys=248,expires=0

Original comment by patrick....@gmail.com on 2 Dec 2010 at 9:47

GoogleCodeExporter commented 8 years ago
Hi, Issue 393 may be of interest to you.

Original comment by t.br...@gmail.com on 2 Dec 2010 at 2:24

GoogleCodeExporter commented 8 years ago
Hello, there is probably a mix of issues here.

The first is that Redis is not able to really know the exact amount of memory used.
RSS is not a good measure, as most of the time it reflects the *peak* 
memory used by the process recently, so we can't use it for 
maxmemory or vm-max-memory: freeing a few objects will not have any effect 
on the reported RSS.

So to estimate the memory used, Redis sums all the allocated 
pieces of memory from the point of view of malloc().
On most systems, due to overheads and fragmentation, this results, more or 
less, in 30%-50% more memory actually being used. You have to do some math and 
reduce the vm-max-memory parameter accordingly.
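
As a rough illustration of that adjustment (the target figure here is hypothetical; the overhead factor is just the 30%-50% estimate above):

# Rough derating sketch for vm-max-memory (illustrative numbers only).
target_rss = 2 * 2**30          # hypothetical: keep the process near 2 GiB
overhead   = 1.4                # assume ~40% malloc/fragmentation overhead
vm_max_memory = int(target_rss / overhead)
print(vm_max_memory)            # value to put in the config, in bytes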

As for the issue of vm-max-memory being set to 384 MB while the actual RSS is 
3 GB, this is likely due to the fact that the *peak* memory usage was 3 GB.

The RSS will not go down even after a FLUSHALL command, even though most of 
those pages will actually be swapped out and become reusable by other processes. Here 
the general rule is to set vm-max-memory with the peak memory usage in mind.

So the question is: what operation had the effect of using more memory? 
Calling TYPE against all the keys in a loop probably has the effect of 
loading almost everything into memory, as the server will not swap things out 
at the same speed they are paged in. I'm not sure about this, but you can check 
in a very simple way.

Download redis-tools from github.com/antirez/redis-tools and run ./redis-stat 
and ./redis-stat vmstat in different terminals to see what peak of used memory 
you reach while running the script.

Cheers,
Salvatore

Original comment by anti...@gmail.com on 2 Dec 2010 at 2:35

GoogleCodeExporter commented 8 years ago
> as the server will not swap out things with the same speed

I think this was the part I didn't expect and don't understand.  Why wouldn't 
Redis be able to tell that the value it needs to load won't fit under the 
current memory limit, and swap out other objects to make room for it *before* 
loading it?  Without doing that, it seems like the vm-max-memory 
parameter doesn't actually do much (as seen by Redis thinking it's using 
155.43 MB while holding 3.2 GB worth from the OS's point of view).
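
In other words, the behaviour I'd expect is roughly the following (a hypothetical sketch of the admission policy, not Redis's actual code):

# Hypothetical sketch of the expected behaviour, not Redis's implementation:
# evict before paging in, so resident values never exceed the budget.
from collections import OrderedDict

VM_MAX_MEMORY = 384 * 2**20            # bytes, as in the config above

ram  = OrderedDict()                   # key -> value size, oldest first (toy LRU)
swap = {"some_key": 1024}              # key -> value size, currently on disk

def page_in(key):
    size = swap[key]
    while ram and sum(ram.values()) + size > VM_MAX_MEMORY:
        old_key, old_size = ram.popitem(last=False)   # swap out LRU value first
        swap[old_key] = old_size                      # make room *before* loading
    ram[key] = swap.pop(key)                          # only now bring it into RAM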

My test using TYPE was just a simple way to exercise the VM; the same problem 
occurs for us under normal load (doing various operations, mostly on hashes, 
sets, and zsets).

When you say that the unused pages should page out, you're talking about OS 
swapping and not Redis-VM, right?  So the recommended way to run multiple 
redis-server instances on one high-memory machine is to just let the OS swap 
handle everything?  At that point, should I just disable Redis-VM altogether?

Original comment by bretthoe...@gmail.com on 2 Dec 2010 at 2:56

GoogleCodeExporter commented 8 years ago
bretthoerne, I'm not sure this is actually what is happening. If you already 
have an easy way to replicate the "TYPE" test, you can run it under Redis load 
so we can investigate further. I need to check the implementation more closely, 
but I did not expect Redis to use that much peak memory under normal 
conditions *unless* it was saving. When Redis is saving it can't swap things 
out to the swap file, but in your test there was apparently no saving.

So there is definitely more to investigate.

About pages swapped out by the kernel: this is a different matter that has 
nothing to do with Redis's own use of the disk.
What I meant there is that when the RSS goes up because of peak memory usage, 
those pages will later be flushed to disk by the kernel, as they will likely no 
longer be used by the process.

From this point of view, the right thing to do is to avoid VM settings that 
will result in peaks far above the limit. This can happen, for instance, when 
saving the dataset takes a long time and there is very high GET traffic 
distributed evenly across the dataset.

Original comment by anti...@gmail.com on 2 Dec 2010 at 3:09

GoogleCodeExporter commented 8 years ago
> From this point of view, the right thing to do is to avoid VM settings that 
> will result in peaks far above the limit. This can happen, for instance, when 
> saving the dataset takes a long time and there is very high GET traffic 
> distributed evenly across the dataset.

But how do I set VM limits that won't result in such peaks?  Redis is 
already not respecting the limits I'm setting on the VM.  If I have to increase 
vm-max-memory so much that the data fits into memory, then what problem have I 
solved by using the VM?  Note that the server running without VM sits at 3.8 GB 
and doesn't explode in size under load like it does with VM on.

Original comment by bretthoe...@gmail.com on 2 Dec 2010 at 5:36

GoogleCodeExporter commented 8 years ago
I'm still messing with settings and redis-stat, but in case it helps anyone, 
here is the redis-stat vmstat output for my TYPE-on-every-key test: 
http://pastebin.com/QKt6JHLX

I did see this once:

[9359] 02 Dec 17:48:41 # WARNING: vm-max-memory limit exceeded by more than 10% 
but unable to swap more objects out!

This is odd to me; the swap file seems to be more than enough for this data 
set, and I'm not sure under what circumstances this could happen.

Original comment by bretthoe...@gmail.com on 2 Dec 2010 at 6:05