I am experiencing the same issues with VM. Here are my observations:
1. The Redis memory consumption reported by htop keeps growing steadily. In the last 24 hours it grew by almost 3GB on the master and about 500MB on the slave.
2. Right now, the only way to reduce Redis memory consumption is to shut the server down and start it up again.
3. htop reports 30% more memory than set with vm_conf_max_memory. I have tried several different values for vm_conf_max_memory; it is consistently 30% more.
4. Tested with versions 2.0.1 and 2.0.4; both behave the same.
System: Ubuntu Lucid 10.04, 64-bit, kernel 2.6.32-25-server
redis info:
redis_version:2.0.1
redis_git_sha1:068eb3bf
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:20071
uptime_in_seconds:82787
uptime_in_days:0
connected_clients:97
connected_slaves:1
blocked_clients:0
used_memory:6442272320
used_memory_human:6.00G
changes_since_last_save:70815504
bgsave_in_progress:0
last_save_time:1291203749
bgrewriteaof_in_progress:0
total_connections_received:65150
total_commands_processed:274569814
expired_keys:0
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:1
role:master
vm_conf_max_memory:6442450944
vm_conf_page_size:8192
vm_conf_pages:50304000
vm_stats_used_pages:636877
vm_stats_swapped_objects:514780
vm_stats_swappin_count:352597
vm_stats_swappout_count:867520
vm_stats_io_newjobs_len:0
vm_stats_io_processing_len:0
vm_stats_io_processed_len:0
vm_stats_io_active_threads:0
vm_stats_blocked_clients:0
db2:keys=604877,expires=0
db3:keys=248,expires=0
Original comment by patrick....@gmail.com
on 2 Dec 2010 at 9:47
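As an aside, the VM counters in the INFO output above can be decoded with a little arithmetic (a small sketch; the numbers are taken directly from the report):

```python
# Decode the VM counters from the INFO output above:
# swap file usage is simply pages * page size.
page_size = 8192          # vm_conf_page_size
total_pages = 50304000    # vm_conf_pages
used_pages = 636877       # vm_stats_used_pages

swap_used_bytes = used_pages * page_size
swap_total_bytes = total_pages * page_size

print(swap_used_bytes / 1024**3)   # about 4.86 GB of swap file in use
print(swap_total_bytes / 1024**3)  # about 384 GB of swap file capacity
```

So the swap file itself is far from full; the pressure described in this thread is on resident memory, not swap capacity.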
Hi, Issue 393 may be interesting for you
Original comment by t.br...@gmail.com
on 2 Dec 2010 at 2:24
Hello, there is probably a mix of issues here.
The first is that Redis is not able to really know the exact amount of memory used. The RSS is not a good measure, as most of the time it reflects the *max* memory used by the process in recent times, so we can't use it for maxmemory nor vm-max-memory: freeing a few objects will not have any effect on the reported RSS.
So what Redis does to estimate the memory used is to sum all the allocated pieces of memory from the point of view of malloc(). On most systems, due to overhead and fragmentation, the actual memory used will be, more or less, 30%-50% higher than this estimate. You need to do the math and reduce the vm-max-memory parameter accordingly.
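A quick sketch of this sizing math (the helper name is mine, and the 30-50% overhead is the rule of thumb stated above, not an exact constant):

```python
# Rough sizing helper for vm-max-memory: malloc-level accounting
# undershoots the real RSS by roughly 30-50% due to allocator
# overhead and fragmentation, so leave headroom for it.

def suggested_vm_max_memory(rss_budget_bytes, overhead=0.30):
    """Given the RSS you can afford, return a vm-max-memory value
    that leaves room for the assumed allocator overhead."""
    return int(rss_budget_bytes / (1 + overhead))

budget = 6 * 1024**3  # a 6 GB RSS budget, like the report above
print(suggested_vm_max_memory(budget))  # roughly 4.6 GB
```

With a 30% assumed overhead, a 6GB RSS budget corresponds to a vm-max-memory of about 4.6GB, which matches the "30% more than configured" observation in the original report.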
As for the issue of vm-max-memory set to 350MB with the actual RSS being 3GB: this is likely due to the fact that the *peak* memory usage was 3GB.
The RSS will not go down even after a FLUSHALL command, even though most of these pages will actually be swapped out and reusable by other processes. Here the general rule is to set vm-max-memory with the peak memory in mind.
So the question is: what operation had the effect of using more memory? Calling TYPE against all the keys in a loop will probably have the effect of loading almost everything into memory, as the server will not swap things out at the same speed. I'm not sure about this, but you can check it in a very simple way.
Download redis-tools from github.com/antirez/redis-tools and run ./redis-stat and ./redis-stat vmstat in different terminals to see what peak of used memory you reach while running the script.
Cheers,
Salvatore
Original comment by anti...@gmail.com
on 2 Dec 2010 at 2:35
> as the server will not swap things out at the same speed
I think this was the part I didn't expect and don't understand. Why wouldn't Redis be able to tell that the value it needs to load won't fit under the current memory limit, and swap out objects to make room for it *before* loading the object? It seems like without doing that, the vm-max-memory parameter doesn't actually do much (as seen by Redis thinking it's using 155.43M when it's holding 3.2GB worth from the OS).
My test using TYPE was just a simple way to exercise the VM; the same problem occurs for us under normal load (doing various operations on hashes, sets, and zsets mostly).
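For reference, the TYPE-over-every-key exercise can be sketched as follows (a hypothetical helper assuming a redis-py-style client object; the client itself is not created here, and in practice SCAN-style iteration would be preferable to KEYS on a large dataset):

```python
# Sketch of the "TYPE every key" VM stress test described above.
# The client is assumed to expose redis-py-style keys() and type()
# methods; touching every key forces swapped-out values back through
# the server faster than it swaps them out again.

def type_all_keys(client, pattern="*"):
    """Call TYPE on every key matching pattern.
    Returns the number of keys touched."""
    count = 0
    for key in client.keys(pattern):
        client.type(key)
        count += 1
    return count
```

Running this against a server with VM enabled while watching ./redis-stat vmstat should show the swap-in rate outpacing swap-outs as the RSS climbs.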
When you say that the unused pages should page out, you're talking about OS swapping and not Redis-VM, right? So the recommended way to run multiple redis-server instances on one high-memory machine is to just let the OS swap handle everything? At that point, should I just disable Redis-VM altogether?
Original comment by bretthoe...@gmail.com
on 2 Dec 2010 at 2:56
bretthoerne, I'm not sure this is actually what is happening. If you already have an easy way to replicate the "TYPE" test, you can run it under Redis load so we can investigate further. I need to check the implementation more carefully, but I did not expect Redis to use that much peak memory under normal conditions *unless* it was saving. While Redis is saving it can't swap things out to the swap file, but in your test there was apparently no saving going on.
So there is certainly more to investigate.
About pages swapped out by the kernel: this is a different matter that has nothing to do with Redis using the disk. What I meant there is that when the RSS goes up because of peak memory usage, these pages will later be flushed to disk by the kernel, as they'll likely no longer be used by the process.
From this point of view, the right thing to do is to avoid VM setups that result in very different peaks; for instance, when saving the dataset takes a lot of time while there is very high traffic of GETs distributed evenly across the dataset.
Original comment by anti...@gmail.com
on 2 Dec 2010 at 3:09
> From this point of view, the right thing to do is to avoid VM setups that result in very different peaks; for instance, when saving the dataset takes a lot of time while there is very high traffic of GETs distributed evenly across the dataset.
But how do I set VM limits that wouldn't result in different peaks? It's already not respecting the limits I'm setting on VM. If I have to increase vm-max-memory so much that the data fits into memory, then what problem have I solved by using VM? Note that the server running without VM sits at 3.8g and doesn't explode in size under load like it does with VM on.
Original comment by bretthoe...@gmail.com
on 2 Dec 2010 at 5:36
I'm still messing with settings and redis-stat, but in case it helps anyone, here is the redis-stat vmstat output for my TYPE-on-every-key run: http://pastebin.com/QKt6JHLX
I did see this once:
[9359] 02 Dec 17:48:41 # WARNING: vm-max-memory limit exceeded by more than 10% but unable to swap more objects out!
This is odd to me; the swap file seems to be more than enough for this data set, and I'm not sure under what circumstances it could happen.
Original comment by bretthoe...@gmail.com
on 2 Dec 2010 at 6:05
Original issue reported on code.google.com by
bretthoe...@gmail.com
on 2 Dec 2010 at 3:17