KevinQi1981 / jmemcache-daemon

Automatically exported from code.google.com/p/jmemcache-daemon
Apache License 2.0

Extensibility refactoring #4

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
Great work on the daemon. I am actually using it to implement generic servers over the memcached protocol, but not as a cache server, sort of like Twitter's kestrel. There are a few things that would really help in making the platform easier to extend.

1. In MemCacheDaemon.java, create the pipeline factories in a protected method that can be overridden. That way we can override certain things in the pipeline construction.

2. Similarly, in the ChannelPipelineFactory classes, the construction of the different parts of the pipeline could be protected method calls: createCommandDecoder, createCommandHandler, createResponseEncoder. That way the command handler can easily be overridden.

3. In MemcachedCommandHandler, break messageReceived into parts: a protected method for each of the commands. In my case, I have a long-running (>1 second) process I want to run in a thread for the get, and then put the message on the async channel. A rough sketch of the extension points I mean follows below.
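To make 1 and 2 concrete, here is roughly the shape I have in mind, written against the Netty 3 API; the class name and the abstract methods are just illustrative, not what is in the code today:

import org.jboss.netty.channel.ChannelHandler;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;

// Hypothetical factory: each pipeline stage comes from a protected
// method, so a subclass can replace just the piece it cares about.
public abstract class OverridablePipelineFactory implements ChannelPipelineFactory {
  public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline pipeline = Channels.pipeline();
    pipeline.addLast("decoder", createCommandDecoder());
    pipeline.addLast("handler", createCommandHandler());
    pipeline.addLast("encoder", createResponseEncoder());
    return pipeline;
  }

  protected abstract ChannelHandler createCommandDecoder();
  protected abstract ChannelHandler createCommandHandler();
  protected abstract ChannelHandler createResponseEncoder();
}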

My main goal is to be able to process get requests in a thread and put responses on the async channel, because some of the operations we need to execute can be long-running. I guess another extension point would be to add this capability to the framework itself: an option for a "threaded" mode in which operations are executed as Executor jobs and the results are put back on the channel.
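For what it's worth, Netty 3 seems to ship a building block for this kind of threaded mode already: an ExecutionHandler placed in the pipeline (typically between the decoder and the command handler) hands events off to a thread pool. A rough sketch, with the pool sizing numbers purely illustrative:

import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class ThreadedModeSketch {
  // Handlers added to the pipeline after this one run on the pool instead
  // of the Netty I/O thread; the "Ordered" executor still preserves
  // per-channel event order, which the memcached protocol needs.
  public static ExecutionHandler createExecutionHandler() {
    // 16 pool threads, 1 MB per-channel and 16 MB total queued-event caps.
    return new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 16777216));
  }
}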

I don't mind making some changes and submitting them as diffs, if you are open to that?

Original issue reported on code.google.com by jhorman on 6 Aug 2009 at 10:30

GoogleCodeExporter commented 9 years ago
Didn't mean for this to go in as a "defect"; I didn't see another option, though.

Original comment by jhorman on 6 Aug 2009 at 10:31

GoogleCodeExporter commented 9 years ago
[deleted comment]
GoogleCodeExporter commented 9 years ago
Patch file attached, if you are interested (a better version, though I don't know hg well).

Original comment by jhorman on 6 Aug 2009 at 11:03

Attachments:

GoogleCodeExporter commented 9 years ago
Now that I am reading more about Netty, it seems like I don't have to do my own threading. If the NioServerSocketChannelFactory that gets constructed had a configurable number of workers, it would solve my problem, I think.
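Something like this, assuming Netty 3's three-argument NioServerSocketChannelFactory constructor (the bootstrap wiring is just a sketch, not the daemon's actual startup code):

import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class BootstrapSketch {
  public static ServerBootstrap createBootstrap(int workerCount) {
    // workerCount caps Netty's NIO worker threads; with several workers,
    // one slow handler only ties up a single I/O thread.
    return new ServerBootstrap(
        new NioServerSocketChannelFactory(
            Executors.newCachedThreadPool(),  // boss (accept) threads
            Executors.newCachedThreadPool(),  // worker (I/O) threads
            workerCount));
  }
}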

Original comment by jhorman on 7 Aug 2009 at 3:45

GoogleCodeExporter commented 9 years ago
Some good suggestions here; I'll try to integrate your changes when I get a moment to sit down with it.

Original comment by ryan.daum on 12 Aug 2009 at 9:09

GoogleCodeExporter commented 9 years ago
Patch accepted and pushed. 

If you have particular backends you're attaching the protocol handler to, it would be interesting to see them submitted. I've thought, for example, that a starling/kestrel-type facade in front of e.g. ActiveMQ might be handy.

Original comment by ryan.daum on 13 Aug 2009 at 12:23

GoogleCodeExporter commented 9 years ago
The main thing we are trying to do is provide a single interface to the data store and cache: on a cache miss, look up the value in MySQL via a Hibernate query. We are also looking at caching via Ehcache so we have a lot of control over the expirations, disk access, etc.

To do this, though, I believe we need a multithreaded server. I know Netty supports threaded workers, but just enabling that seems like the wrong approach. A much better approach would be to launch a worker thread only when we have to go to the database. I am a little unclear on how the memcached protocol and clients work in that regard. Can I now just override some of the methods from the patch, look in the cache, then fork a thread to look in the db and write to the channel later?

We have also been experimenting with front-ending a SOLR query index where the memcached key is the query. The only problem with that is that the cache keys are currently ASCII and we want to support UTF-8. There is also a key size limit in spymemcached that seemed a little low.

Original comment by jhorman on 21 Aug 2009 at 1:10

GoogleCodeExporter commented 9 years ago
I too ran into the issue (at my last job) with spymemcached's key size limit. For that reason alone, I stayed with the other memcached client library for Java, which had its own set of issues.

With decent key sizes, you could always encode/decode your keys to ASCII.
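For example, a minimal sketch of that workaround using URL-encoding, which maps any UTF-8 string onto the plain ASCII, whitespace-free keys memcached expects (the class here is just illustrative):

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

public class AsciiKeys {
  // Memcached keys must be ASCII with no spaces or control characters;
  // URL-encoding a UTF-8 string satisfies both constraints.
  public static String encode(String utf8Key) throws UnsupportedEncodingException {
    return URLEncoder.encode(utf8Key, "UTF-8");
  }

  public static String decode(String asciiKey) throws UnsupportedEncodingException {
    return URLDecoder.decode(asciiKey, "UTF-8");
  }
}

The catch is that the encoding inflates the key, so it eats further into whatever size limit the client enforces.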

I still don't fully understand your use case with the asynchronous workers, so I can't really comment. The Cache stuff in the daemon is thread safe, so you ought to be able to do what you want, but I'm not entirely happy with the way the locking happens right now (it locks the whole cache rather than particular entries) and it's subject to change ... soon.

Also, re: caching with Ehcache, again what you have to be careful of is locking; make sure that operations like INCR and DECR are atomic operations...
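To illustrate the property I mean, here is a sketch of INCR done as a compare-and-swap loop over a ConcurrentMap; this is not the daemon's code, just the shape of the requirement:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AtomicCounterSketch {
  private final ConcurrentMap<String, Long> counters =
      new ConcurrentHashMap<String, Long>();

  // A plain get-then-put would let two concurrent INCRs read the same
  // old value and silently lose one update; the CAS loop prevents that.
  public long incr(String key, long delta) {
    while (true) {
      Long current = counters.get(key);
      if (current == null) {
        if (counters.putIfAbsent(key, delta) == null) {
          return delta;
        }
      } else if (counters.replace(key, current, current + delta)) {
        return current + delta;
      }
      // Another thread raced us; retry.
    }
  }
}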

Original comment by ryan.daum on 21 Aug 2009 at 2:24

GoogleCodeExporter commented 9 years ago
My concern (sorry to be spamming this issue) is that a db fetch could take 5 seconds under load, and that is a blocking fetch using the MySQL connector. That will block the single-threaded Netty. If you exposed workerCount on MemCacheDaemon:75, I could use Netty workers. I think that would be a valid change, to expose that.

That said, it seems like using workers sort of defeats the purpose of NIO, unless I am confused about how the Netty workers work. It seems more likely that I would want to launch my own thread in MemcachedCommandHandler.handleGets. That way I benefit from the async IO around the protocol handling and only need a thread on a cache miss.

Something like this (pseudocode):

protected void handleGets(final ChannelHandlerContext channelHandlerContext,
                          final CommandMessage command, final Channel channel) {
  MCElement[] results = get(command.keys.toArray(new String[command.keys.size()]));
  if (results != null) {
    // Cache hit: reply right away on the Netty thread.
    ResponseMessage resp = new ResponseMessage(command).withElements(results);
    Channels.fireMessageReceived(channelHandlerContext, resp, channel.getRemoteAddress());
  } else {
    // Cache miss: run the slow db fetch off the Netty thread and reply when
    // it finishes. "worker" would be e.g. a java.util.concurrent
    // ExecutorService held by the handler.
    worker.execute(new Runnable() {
      public void run() {
        MCElement[] dbResults = fetchInDb();
        ResponseMessage resp = new ResponseMessage(command).withElements(dbResults);
        Channels.fireMessageReceived(channelHandlerContext, resp, channel.getRemoteAddress());
      }
    });
  }
}

And so I am wondering if that is safe, both thread safe and memcached protocol safe.

Not trying to get you to program for me, just trying to understand how to apply this in "slow IO" situations.

Original comment by jhorman on 21 Aug 2009 at 7:17