Didn't mean for this to go in as "defect"; I didn't see another option, though.
Original comment by jhorman on 6 Aug 2009 at 10:31
Patch file if you are interested (better version, though I don't know hg well)
Original comment by jhorman on 6 Aug 2009 at 11:03
Now that I am reading more about netty, it seems like I don't have to do my own threading. If the NioServerSocketChannelFactory that gets constructed had a configurable number of workers, it would solve my problem, I think.
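A minimal sketch of that configuration, assuming a Netty 3.x-style API where NioServerSocketChannelFactory's third constructor argument caps the number of NIO worker threads:

import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class DaemonBootstrap {
    public static void main(String[] args) {
        // Boss threads accept connections; worker threads service channel i/o.
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),  // boss executor (accept)
                        Executors.newCachedThreadPool(),  // worker executor (i/o)
                        16));                             // workerCount: bounds the worker pool
    }
}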
Original comment by jhorman on 7 Aug 2009 at 3:45
Some good suggestions here; I'll try to integrate your changes when I get a moment to sit down to it.
Original comment by ryan.daum on 12 Aug 2009 at 9:09
Patch accepted and pushed.
If you have particular backends you're attaching the protocol handler to, it would be interesting to submit them. I've thought, for example, a starling/kestrel-type facade in front of e.g. ActiveMQ might be handy.
Original comment by ryan.daum on 13 Aug 2009 at 12:23
The main thing we are trying to do is provide a single interface to the data store and the cache. On a cache miss we look up the value in MySQL via a Hibernate query. We are also looking to cache via ehcache so we have a lot of control over expirations, disk access, etc.
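A minimal read-through sketch of that flow, assuming Ehcache's Cache/Element API and a Hibernate session per lookup; the User entity and the ReadThroughDao name are hypothetical:

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Read-through lookup: try ehcache first, fall back to a Hibernate
// primary-key fetch against MySQL on a miss, then populate the cache.
public class ReadThroughDao {
    private final Cache cache;
    private final SessionFactory sessionFactory;

    public ReadThroughDao(Cache cache, SessionFactory sessionFactory) {
        this.cache = cache;
        this.sessionFactory = sessionFactory;
    }

    public User findUser(Long id) {
        Element hit = cache.get(id);
        if (hit != null) {
            return (User) hit.getObjectValue();  // cache hit
        }
        Session session = sessionFactory.openSession();
        try {
            User user = (User) session.get(User.class, id);  // cache miss
            if (user != null) {
                cache.put(new Element(id, user));
            }
            return user;
        } finally {
            session.close();
        }
    }
}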
To do this, though, I believe we need to have a multithreaded server. I know netty supports threaded workers, but it seems like just enabling that is probably the wrong approach. A much better approach would be to only launch a worker thread if we have to go to the database. I am a little unclear on how the memcached protocol and clients work in that regard. Can I just override some of the methods from the patch, look in the cache, then fork a thread to look in the db and write to the channel later?
We have also been experimenting with fronting a SOLR query index where the memcached key is the query. The only problem with that is that the cache keys are currently ASCII and we want to support UTF-8. There is also a query size limit in spymemcached that seemed a little low.
Original comment by jhorman on 21 Aug 2009 at 1:10
I too ran into the issue (at my last job) with spymemcached's key size limit. For that reason alone, I stayed with the other memcached client library for Java, which had its own set of issues. With decent key sizes, you could always encode/decode your keys to ASCII.
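A minimal sketch of that encode/decode step using only the JDK; percent-encoding maps arbitrary UTF-8 keys onto the ASCII subset the memcached text protocol expects, and it also removes the spaces the protocol forbids in keys. The KeyCodec name is hypothetical:

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

public final class KeyCodec {
    // Percent-encode a UTF-8 key into a pure-ASCII memcached key.
    public static String encode(String utf8Key) throws UnsupportedEncodingException {
        return URLEncoder.encode(utf8Key, "UTF-8");
    }

    // Recover the original UTF-8 key.
    public static String decode(String asciiKey) throws UnsupportedEncodingException {
        return URLDecoder.decode(asciiKey, "UTF-8");
    }
}

The trade-off is size: each multi-byte character becomes several ASCII characters, which eats further into any client-side key size limit.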
I still don't fully understand your use case with the asynchronous workers, so I can't really comment. The Cache stuff in the daemon is thread safe, so you ought to be able to do what you want, but I'm not entirely happy with the way the locking happens right now (it locks the whole cache, rather than particular entries), and it's subject to change ... soon.
Also, re: caching with ehcache, again what you have to be careful of is locking; make sure that operations like INCR and DECR are atomic operations...
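A minimal sketch of one way to get that atomicity on top of Ehcache's Cache/Element API: serialize the whole read-modify-write per key. The interned-string lock here is a toy shortcut; a lock-striping map would be more robust. The AtomicCounters name is hypothetical:

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class AtomicCounters {
    private final Cache cache;

    public AtomicCounters(Cache cache) {
        this.cache = cache;
    }

    // INCR: atomic with respect to other callers of this method,
    // because the read-modify-write runs under a per-key lock.
    public long incr(String key, long delta) {
        synchronized (("incr-lock:" + key).intern()) {
            Element e = cache.get(key);
            long current = (e == null) ? 0L : (Long) e.getObjectValue();
            long next = current + delta;
            cache.put(new Element(key, next));
            return next;
        }
    }

    // DECR is just a negative delta (real memcached clamps at zero;
    // that detail is omitted here).
    public long decr(String key, long delta) {
        return incr(key, -delta);
    }
}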
Original comment by ryan.daum on 21 Aug 2009 at 2:24
My concern (sorry to be spamming this issue) is that a db fetch could take 5 seconds under load, and that is a blocking fetch using the mysql connector. That will block the single-threaded netty. If you exposed workerCount on MemCacheDaemon:75 I could use netty workers. I think that would be a valid change, to expose that.
That said, it seems like using workers sort of defeats the purpose of nio, unless I am confused about how the netty workers work. It seems more likely that I would want to launch my own thread in MemcachedCommandHandler:handleGets. That way I benefit from the async io around the proto handling and only need a thread on a cache miss.
Something like this (pseudocode):
protected void handleGets(final ChannelHandlerContext channelHandlerContext,
                          final CommandMessage command, final Channel channel) {
    MCElement[] results = get(command.keys.toArray(new String[command.keys.size()]));
    if (results != null) {
        // Cache hit: reply immediately on the i/o thread.
        ResponseMessage resp = new ResponseMessage(command).withElements(results);
        Channels.fireMessageReceived(channelHandlerContext, resp, channel.getRemoteAddress());
    } else {
        // Cache miss: hand the blocking db fetch off to a worker thread
        // and fire the response whenever it completes.
        worker.work(new Runnable() {
            public void run() {
                MCElement[] dbResults = fetchInDb();
                ResponseMessage resp = new ResponseMessage(command).withElements(dbResults);
                Channels.fireMessageReceived(channelHandlerContext, resp, channel.getRemoteAddress());
            }
        });
    }
}
And so I am wondering if that is safe, both thread safe and memcache protocol safe. Not trying to get you to program for me, just trying to understand how to apply this in "slow io" situations.
Original comment by jhorman on 21 Aug 2009 at 7:17
Original issue reported on code.google.com by jhorman on 6 Aug 2009 at 10:30