timloo / memcached

Automatically exported from code.google.com/p/memcached

PLEASE do not remove cachedump, better dumping feature needed #256

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Hi Guys.

Sorry for the long post; I wrote it to describe a serious problem with
memcached (and a solution).

My company implemented an ad server that handles tens of millions of
impressions daily by using memcached extensively. We use memcached both to
cache data and to stage SQL writes. To my knowledge it is (as of today) the
only available tool that can scale writes to SQL (Redis is totally unusable
because of its reclaim policy, and other K-V storage tools are out of the
equation because they write data to disk).

So we run tens of thousands of writes per minute through memcached, then we
analyze the data every minute and write/update 100-200 SQL rows with
aggregated data. We scaled the server from about 40-50 requests per second to
more than 800, so it works great.
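
To illustrate the staging pattern, here is a rough sketch (not our actual
code; the key scheme, TTLs, and flush step are made-up examples, and error
handling is omitted):

```python
import socket

sock = socket.create_connection(("127.0.0.1", 11211))

def mc(line):
    """Send one memcached text-protocol line and return the raw reply.
    One recv is good enough for these tiny replies in a sketch."""
    sock.sendall(line + b"\r\n")
    return sock.recv(4096)

def record_impression(ad_id):
    """Stage a write: bump a counter in memcached instead of hitting SQL."""
    key = b"imp:%d" % ad_id
    if mc(b"incr %s 1" % key).startswith(b"NOT_FOUND"):
        # Counter not seeded yet; "add" is atomic, retry incr if we lose.
        if mc(b"add %s 0 300 1\r\n1" % key).startswith(b"NOT_STORED"):
            mc(b"incr %s 1" % key)

def flush_minute(ad_ids):
    """Once a minute: read counters and write a few aggregated SQL rows."""
    for ad_id in ad_ids:
        reply = mc(b"get imp:%d" % ad_id)
        if reply.startswith(b"VALUE"):
            count = int(reply.split(b"\r\n")[1])
            mc(b"set imp:%d 0 300 1\r\n0" % ad_id)  # reset (races ignored)
            # ... UPDATE stats SET impressions = impressions + count ...
```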

But we hit a problem related to the LRU/"lazy" reclaim. The cache fills all
available memory and then we get evictions, because the keys have very
different expiration times (some just 5 seconds, others 24 hours).

As a workaround we use cachedump to get a list of keys, then issue a GET for
each one so that expired keys are immediately reclaimed. It works; the only
problem is that we can't dump, say, all 10 million keys, because the dump is
limited.
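
For reference, the workaround looks roughly like this (slab class 5 and the
100-item limit are arbitrary examples, and the reply parsing is simplified):

```python
import socket

sock = socket.create_connection(("127.0.0.1", 11211))

def command(cmd):
    """Send a command and read lines until the END terminator."""
    sock.sendall(cmd + b"\r\n")
    buf = b""
    while not buf.endswith(b"END\r\n"):
        buf += sock.recv(4096)
    return buf.split(b"\r\n")

# Dump up to 100 keys from slab class 5 (both numbers are examples).
# Each reply line looks like: ITEM <key> [<size> b; <expire-time> s]
for line in command(b"stats cachedump 5 100"):
    if line.startswith(b"ITEM "):
        key = line.split()[1]
        # GETting an expired key makes memcached reclaim it immediately.
        command(b"get " + key)
```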

To see how bad it is without this kind of "fast" reclaim: after 20-30 hours
we have about 2 GB of outdated keys occupying a single slab class, so we
can't adapt to different traffic patterns because all slabs are taken.
Meanwhile the non-expired set is about 30 MB, so 1970 MB out of 2000 MB is
wasted. In other words, even with RAM 66 times bigger (2000 MB / 30 MB) than
actually "needed", we would still get evictions without cachedump.

So can you please treat an "improved dump" as a much-needed feature request?
I have seen posts from many other people asking about this. Maybe include a
command-line option to turn it ON, if you're concerned about security?

If this is not the appropriate place to make feature requests, can you
please direct me to the right one?

Maybe it would be possible to have a separate low-priority thread that scans
the key list and issues a GET from time to time. I'm a C++ coder; how hard
would that be to implement? Would it require a partial or full lock of some
important shared resource (like the whole item list), which would make it
problematic? Maybe the process could fork (copy-on-write) so the child has
access to the whole list and just issues GETs to the parent over the text
protocol?
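
In the meantime, we could approximate this from the client side. A sketch of
the idea, assuming the application can afford to remember every key it
writes (which costs extra memory, so it is only a stopgap):

```python
import heapq, socket, threading, time

sock = socket.create_connection(("127.0.0.1", 11211))
expiry_heap = []          # (expires_at, key), filled in by the writers
heap_lock = threading.Lock()

def remember(key, ttl):
    """Call next to every set: note when the key will expire."""
    with heap_lock:
        heapq.heappush(expiry_heap, (time.time() + ttl, key))

def reclaimer():
    """Low-priority loop: GET keys just after expiry to force reclaim."""
    while True:
        with heap_lock:
            due = expiry_heap and expiry_heap[0][0] <= time.time()
            item = heapq.heappop(expiry_heap) if due else None
        if item is None:
            time.sleep(1.0)
            continue
        sock.sendall(b"get %s\r\n" % item[1])
        buf = b""
        while not buf.endswith(b"END\r\n"):  # expired keys reply just END
            buf += sock.recv(4096)

threading.Thread(target=reclaimer, daemon=True).start()
```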

Thanks,
Slawomir.

Original issue reported on code.google.com by psla...@wp.pl on 26 Feb 2012 at 8:37

GoogleCodeExporter commented 9 years ago
Why not just run two memcached instances? One for short expiration times, one 
for longer.
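
E.g. (ports, sizes, and the TTL cutoff below are arbitrary; pick whatever
matches your traffic):

```python
# Start two instances, for example:
#   memcached -d -p 11211 -m 256    (short-TTL items)
#   memcached -d -p 11212 -m 1792   (long-TTL items)
import socket

short_conn = socket.create_connection(("127.0.0.1", 11211))
long_conn = socket.create_connection(("127.0.0.1", 11212))

def set_item(key, value, ttl):
    """Route by TTL so short-lived keys never crowd the long-lived LRU."""
    conn = short_conn if ttl <= 60 else long_conn  # 60s cutoff is arbitrary
    conn.sendall(b"set %s 0 %d %d\r\n%s\r\n" % (key, ttl, len(value), value))
    assert conn.recv(4096).startswith(b"STORED")
```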

Or, as a grosser hack, pad items with longer expiration times so they occupy
a different slab class. Since each slab class has its own LRU, you'd trade a
few bytes of overhead (it's not much at the smaller sizes) for "fast"
reclaim.
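
Roughly like this; the exact padding needed depends on your growth factor
(-f) and item overhead, so check "stats slabs" for the real class boundaries
rather than trusting the numbers here:

```python
import socket

sock = socket.create_connection(("127.0.0.1", 11211))

# With the default growth factor (-f 1.25) the small chunk classes are only
# a few dozen bytes apart, so ~96 bytes of padding is enough to hop into
# the next class, which then has its own LRU. Verify with "stats slabs".
PAD = b"\x00" * 96

def set_padded(key, value, ttl):
    """Store a long-TTL item padded into a different slab class."""
    data = value + PAD
    sock.sendall(b"set %s 0 %d %d\r\n%s\r\n" % (key, ttl, len(data), data))
    assert sock.recv(4096).startswith(b"STORED")

# Readers strip the padding again: value = stored[:-len(PAD)]
```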

But pulling the full list of keys back so you can find and fast-reclaim some
of them is definitely the wrong way to do it. I tried to start a thread on
the mailing list a while ago for ideas on stats cachedump, but nobody really
responded. In most cases what people want out of it isn't the best use of
resources, and yours seems to be along those lines as well.

*but*, having said that, I have recently been thinking about improving
efficiency for exactly your kind of traffic pattern. Give it a few months and
memcached will probably just handle your situation better out of the box, so
you can use one of the above workarounds for now without having to change
core code and rely on your own fork in production.

For other people reading this issue: this is the sort of thing for a thread
on the mailing list, not a bug report. If it were more focused on "I have
this traffic pattern and it's not very efficient", it would be more on
target.

If you wish to discuss this further please hit up the mailing list or open a 
bug report on the real issue.

thanks!

Original comment by dorma...@rydia.net on 26 Feb 2012 at 9:35