You should be able to do this in a storage engine, and we might offer it as a startup option at some point, but as a core feature this won't be happening anytime soon.
- The existing behavior is currently a largely desirable feature, especially with "0 expire" items. Most users will end up setting items that are never fetched again, and those will end up at the end of the LRU.
- For most users, enabling this would mean they fill up the cache with 0-expire items and nothing else will work anymore. I know our sites would end up completely broken by enabling this.
- With enough free cache memory, this isn't often an issue anyway.
- Implementing it isn't simply a matter of "don't evict it if it's going to be expired". We would need to track multiple item lists, and items would need to know which lists they are part of so they can be tracked.
If you need this today, I would recommend running a side pool of memcached instances with the -M (don't evict) option enabled. You can use a client wrapper to decide that passing an expiration time of 0 means to use this other cluster of memcached instances. You can be up and running in a few minutes if this is your desired behavior.
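The client-wrapper approach described above can be sketched in a few lines. This is an illustrative sketch only: plain dicts stand in for real client connections to the two pools (one started normally, one started with -M), and the routing rule, "expiration 0 means use the no-evict pool", is the convention suggested in this comment, not anything built into memcached.

```python
class RoutingClient:
    """Route sets by expiration: 0 goes to the -M (no-evict) side pool,
    anything else goes to the normal LRU pool.

    `lru_pool` and `sticky_pool` are stand-ins for real memcached client
    connections; dicts are used here so the sketch is self-contained.
    """

    def __init__(self, lru_pool, sticky_pool):
        self.lru_pool = lru_pool
        self.sticky_pool = sticky_pool

    def set(self, key, value, exptime):
        # Convention from the comment above: an expiration time of 0
        # means "never expire", so send it to the pool running with -M.
        pool = self.sticky_pool if exptime == 0 else self.lru_pool
        pool[key] = value

    def get(self, key):
        # Check the sticky pool first, then fall back to the LRU pool.
        return self.sticky_pool.get(key, self.lru_pool.get(key))


lru, sticky = {}, {}
client = RoutingClient(lru, sticky)
client.set("session:42", "data", exptime=60)   # goes to the LRU pool
client.set("config:site", "data", exptime=0)   # goes to the -M pool
```

In a real deployment the two pools would be separate memcached clusters, and the wrapper would sit in front of whatever client library the application already uses.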
Original comment by dorma...@rydia.net on 22 Apr 2010 at 8:32
Dormando, thank you for your response.
I think you have misunderstood my intention.
I was suggesting to use something like "-1" for sticky items.
Since we can have 0, seconds, and unix time, we can reserve something other than these to indicate sticky ones.
Thank you.
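The suggestion can be sketched as a classifier over the expiration field. The 0/relative/absolute distinction below (with a 30-day cutoff, above which a value is treated as a unix timestamp) matches memcached's documented expiration semantics; the `-1` "sticky" case is the hypothetical sentinel proposed here, not an existing feature.

```python
THIRTY_DAYS = 60 * 60 * 24 * 30  # memcached's cutoff: larger values are unix timestamps

def classify_exptime(exptime):
    """Classify an expiration value under the proposed scheme.

    Returns one of:
      "sticky"        -- hypothetical -1 sentinel: never expires, never evicted
      "never-expires" -- 0: no expiry, but still evictable by the LRU
      "relative"      -- seconds from now (up to 30 days)
      "absolute"      -- a unix timestamp
    """
    if exptime == -1:            # proposed sentinel -- not in memcached today
        return "sticky"
    if exptime == 0:
        return "never-expires"
    if exptime <= THIRTY_DAYS:
        return "relative"
    return "absolute"

print(classify_exptime(0))       # never-expires
print(classify_exptime(3600))    # relative
```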
Original comment by tru64ufs@gmail.com on 23 Apr 2010 at 1:49
Same answer in either case; it's complicated, and mixing LRU and non-LRU in the same memory pool only works well if you're really sure of what you're doing. It's best to use completely separate pools, since you're bound to fill up memory with unexpirable items unless you're very careful.
Original comment by dorma...@rydia.net on 23 Apr 2010 at 2:07
The challenge here is that inserting special values also implies that the app knows what to do with them. I tend to agree with dormando that it could totally make sense to do this with an engine, even the "default" engine, to be able to ascribe new meanings to the expiration field as you describe.
This is probably the safest thing to do, since you'd want to know the app interprets things correctly and be confident the app/memcached+engine combination won't cause the "unexpirable" situation dormando describes above.
Original comment by ingen...@gmail.com on 23 Apr 2010 at 6:31
Hello, I'm Joon.
If one memcached pool is shared by several applications, a mistake by one application can cause the pool to be filled with sticky items. In that case, that application adversely affects the other applications. This case should be avoided, I think.
But if one memcached pool is used by only one application, a pool filled with sticky items is entirely up to that application. In this case, the application bears the responsibility for the result, and this situation might be detected and corrected in the application's testing phase, before it is deployed.
If filling up all memory with sticky items is a problem, the memory can be divided into two areas: one for sticky items and the other for non-sticky (expirable and evictable) items. The size of the sticky area could then be adjusted with a memcached startup option or other configuration methods. (In this case, the implementation will be more complex.)
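The two-area idea can be sketched as simple accounting in front of the store: sticky items draw from a capped fraction of total memory and are refused (with an ENOSPC-style error, rather than evicting anything) once that area is full. This is a toy illustration, not any memcached implementation; the class name, the 10% default, and the use of a MemoryError are all assumptions made for the sketch.

```python
class QuotaCache:
    """Toy cache with a separate memory quota for sticky items.

    Sticky sets that would exceed the quota fail instead of evicting;
    normal items use the remaining memory (LRU/eviction logic is
    omitted for brevity). All sizes are in bytes.
    """

    def __init__(self, total_bytes, sticky_fraction=0.10):
        self.sticky_limit = int(total_bytes * sticky_fraction)
        self.sticky_used = 0
        self.sticky_items = {}
        self.normal_items = {}

    def set(self, key, value, sticky=False):
        size = len(value)
        if sticky:
            if self.sticky_used + size > self.sticky_limit:
                # Refuse rather than evict: the sticky area is full.
                raise MemoryError("ENOSPC: sticky quota exhausted")
            self.sticky_items[key] = value
            self.sticky_used += size
        else:
            self.normal_items[key] = value

    def stats(self):
        # Report how many sticky items exist and how much memory
        # they occupy, so operators can see what's going on.
        return {"sticky_items": len(self.sticky_items),
                "sticky_bytes": self.sticky_used}


cache = QuotaCache(total_bytes=1000)          # sticky quota = 100 bytes
cache.set("pinned", b"x" * 80, sticky=True)   # fits within the quota
```

A real implementation would hook this accounting into slab allocation rather than counting value bytes, but the failure mode (refuse the sticky set, never evict to make room) is the point of the sketch.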
Original comment by jhpark...@gmail.com on 23 Apr 2010 at 9:54
The easiest way to do this is with a meta engine that routes your request to a differently configured inferior engine. Bucket engine will do this out of the box. You can mix LRU and contained non-LRU without them competing for memory at all.
Original comment by dsalli...@gmail.com on 23 Apr 2010 at 3:44
Dustin is right, but I don't think it supports Joon's or tru64ufs's needs.
As I understand it, they are looking for a single namespace where an expiration of 0 means LRU and a MAX_INT (or whatever) means don't LRU. In this case, one engine would have slightly different behavior for these two expiration values.
To dormando's point, there could be memory issues if these are mixed and the app does not behave correctly, so be careful if using an engine with this behavior.
Original comment by ingen...@gmail.com on 26 Apr 2010 at 7:49
Matt, Dustin, and Dormando,
I was just suggesting one way of implementing the feature without changing too much of memcached's internals. Maybe we can use a certain prefix for this; it doesn't matter.
I think you folks are worrying about the user-error case here. We need to provide an easy way for certain items to be sticky, and if we give users the right errors and statistics (the number of sticky items and the memory they occupy), I don't think it is too much trouble.
Memcached could provide a default maximum sticky-item memory percentage (a quota), say 10%, with an option to adjust it. Or we can set aside one slab class for that particular purpose with a quota. And then we can set aside a prefix for that purpose, just like the moxi front-cache.
An ENOSPC error can happen even now (even with LRU) in certain situations, and it can happen for sticky buckets as well. It's up to the user to decide what to do; memcached just has to provide ways to tell users what's going on.
I think it is the application's job to provide a flexible scheme, as long as it does not violate the design philosophy or deviate too much from the current protocol.
BTW, your contributions to memcached are just fabulous, indeed. We worship you guys every day (we rely on memcached's stability 24x7).
Thank you.
Original comment by tru64ufs@gmail.com on 26 Apr 2010 at 9:09
Yes, we are worrying about user error :) Software should do what people expect; changing default behavior is something we take seriously. Even a minor tweak like the one in 1.4.3 broke a lot of clients.
So to be clear, we're not saying that it's impossible or will never happen, but we are saying that it's not going into the default engine.
Right now the default engine lets you turn expirations on or off wholesale. Anything in between is going to have differing user needs, and should be explicit so people know what to expect. That's been our experience as the best approach with the software.
As I first described, you can very easily get this now by wrapping your client and shunting special expiration keys to another cluster with the LRU disabled. In the future it should be possible by using the bucket engine (after 1.6.0 is out). Then we can route between two engines based on "something", which could be the expiration timeout.
That allows someone to either easily modify the engine so it uses one big pool, or run two engines with different slices of memory in the same instance and route magically. In both cases the behavior should be available through an out-of-the-box instance of memcached post-1.6.0, but a user must explicitly enable it and can tune it as their app needs.
So, again, this is more of a "no, we don't want to confuse people who don't care, and we communicate the project as being a lossy cache", along with a "but nothing's stopping you from doing that in these three different ways, though two require waiting for 1.6".
Original comment by dorma...@rydia.net on 27 Apr 2010 at 12:25
Original issue reported on code.google.com by tru64ufs@gmail.com on 22 Apr 2010 at 3:17