
[FEATURE REQUEST] A command to implement releasing of allocated memory pages #383

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
Some of us are using Redis as a FIFO queue as described here: 
http://www.rediscookbook.org/implement_a_fifo_queue.html
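(The pattern from that page is essentially LPUSH on the producer side and RPOP/BRPOP on the consumer side. A minimal sketch using the hiredis client library, which is my own choice here and not part of the cookbook recipe:)

```c
/* Minimal FIFO sketch with hiredis: producers LPUSH onto the head of a
 * list, consumers BRPOP from the tail. Queue name and payload are made up. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connection error\n");
        return 1;
    }

    /* Producer side: enqueue one element. */
    redisReply *r = redisCommand(c, "LPUSH myqueue %s", "job-payload");
    freeReplyObject(r);

    /* Consumer side: block for up to 5 seconds waiting for an element. */
    r = redisCommand(c, "BRPOP myqueue 5");
    if (r && r->type == REDIS_REPLY_ARRAY && r->elements == 2)
        printf("dequeued: %s\n", r->element[1]->str);
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```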

It's operating almost perfectly, but there is one minor issue. By design, Redis never releases allocated memory back to the system; it reuses the same memory pages again and again. Consider the following situation.

We have several services using a FIFO queue backed by Redis. Some of the services write to the queue, others drain it. Normally the queue holds only 3-4 elements and requires a very small amount of memory. Then suddenly one of the consuming services fails. The queue grows to, say, 70000 elements, and Redis allocates extra memory to store it. Some time later the failed service comes back up and drains the queue, which shrinks back to 3-4 elements. But Redis never releases the memory it reserved for the now-removed items, even though it no longer needs that much RAM.

This would not be a serious problem, except that some of us run Redis inside an OpenVZ container. When the container hits an OOM (out-of-memory) condition, the kernel kills precisely the Redis process, even though it is not actually using all of the memory it has requested. That is not a situation we want to run into.

So here is a possible solution: implement a command such as "MEMDEFRAG" (or similar) that forces Redis to release allocated but unused memory. This would let us control memory consumption directly from our applications, according to their own logic.
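Just to illustrate the kind of operation such a command could perform internally (this is only my sketch, not anything taken from the Redis source): with glibc's allocator, a process can hand free heap pages back to the kernel via malloc_trim().

```c
/* Illustration only: what a hypothetical MEMDEFRAG-style command might do
 * when the server is linked against glibc's malloc. */
#include <stdlib.h>
#include <malloc.h>   /* glibc-specific: malloc_trim() */

#define N 100000

int main(void) {
    static char *chunks[N];

    /* Simulate the queue growing: many small allocations land on the
     * regular heap, so freeing them later does not shrink the process RSS. */
    for (int i = 0; i < N; i++)
        chunks[i] = malloc(512);

    /* Simulate the queue being drained. */
    for (int i = 0; i < N; i++)
        free(chunks[i]);

    /* Ask glibc to return free pages at the top of the heap to the kernel;
     * returns 1 if any memory was released, 0 otherwise. */
    return malloc_trim(0) ? 0 : 1;
}
```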

Up to now we have worked around the problem with crude scheduled restarts of the Redis process (e.g. via cron), but that is hardly a desirable method.
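(For the record, the workaround is just a crontab entry along these lines; the path and schedule below are placeholders, not our actual setup.)

```
# Placeholder crontab entry: restart Redis nightly so memory goes back to the OS
0 4 * * * /etc/init.d/redis-server restart
```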

Sorry for my English, it's not my native language.

Original issue reported on code.google.com by stanisla...@gmail.com on 19 Nov 2010 at 6:19

GoogleCodeExporter commented 8 years ago
People always ask for this and I don't understand it. If memory is constrained, 
set maxmemory to whatever you can spare and you're done. In addition, you could 
enable VM. 
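For example (directive names taken from the redis.conf of that era; the values below are just placeholders):

```
# redis.conf: cap the dataset size (placeholder value, in bytes = 100 MB)
maxmemory 104857600

# Redis virtual memory (a 2.0/2.2-era feature; placeholder values)
vm-enabled yes
vm-swap-file /tmp/redis.swap
vm-max-memory 67108864
```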

Original comment by macten...@gmail.com on 20 Nov 2010 at 10:07

GoogleCodeExporter commented 8 years ago
Setting maxmemory is not exactly what we need. "maxmemory" means that data will be lost on overflow, and we don't want to lose anything at all. We just need an assurance that the Redis process will never be terminated by the savage OOM killer.

Original comment by stanisla...@gmail.com on 20 Nov 2010 at 6:55

GoogleCodeExporter commented 8 years ago
I think this request makes sense.
There are several (real & not malicious) scenarios I can imagine where 'not releasing memory' will result in a lot of pain for users.

The described situation is one of them.

Enabling VM (or swapping, or being OOM-killed) just because we don't want to free() already UNUSED memory doesn't seem like a good trade-off.

Does this happen with tcmalloc & the internal allocator?
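(If anyone wants to test this: as far as I know Redis can be built against tcmalloc with a make flag along these lines, though the exact flag name may differ between versions.)

```
# Build Redis against tcmalloc instead of the libc allocator
# (flag name from memory; may vary between Redis versions):
make USE_TCMALLOC=yes
```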

Original comment by miguel.filipe on 20 Nov 2010 at 9:07