Open thedrow opened 9 years ago
Correct! We can't really do anything about it; the collaboration between event loops has to happen at the socket level. I would love for this to work, but it seems hard or impossible without libmemcached-level support. Any ideas?
On 10 dec 2014, at 09:33, Omer Katz notifications@github.com wrote:
Is there a way for this library to support using an event loop like psycopg2 does? I have filed https://bugs.launchpad.net/libmemcached/+bug/1369598 upstream but got no response. I know that libmemcached supports async I/O through setting a behaviour, but it's unclear how that will collaborate with event loops like gevent, eventlet, or even asyncio. I believe that pylibmc will continue to block anyway. Am I correct?
We've recently written a new memcached library https://github.com/ohmu/omcache that's roughly equivalent but also has gevent support. The python bindings have been made with cffi. It supports py2.6/2.7/3.3/3.4 and pypy. pgmemcache also now supports it. Note that it doesn't support the ASCII protocol and some of the libmemcached behaviors.
@ormod, interesting stuff! How does omcache support cooperative multitasking? I read the code briefly and was unable to determine much apart from asyncns probably being the supporting mechanism. Does it use libev under the hood or how does the hooking into gevent work?
OMcache has a function (omcache_poll_fds) that returns the list of file descriptors it wants to poll and a timeout for them. The calling application can then poll them using whatever mechanism it likes; after the poll returns, the application must call omcache_io with a zero timeout (so it won't block) to process the data.
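To make that flow concrete, here's a minimal stdlib-only sketch of the poll-then-drain protocol. Everything in it (FakeClient, its poll_fds and io methods, and the socket pair standing in for a memcached connection) is illustrative, not omcache's actual API:

```python
import select
import socket

class FakeClient:
    """Hypothetical stand-in for a client with a non-blocking socket."""

    def __init__(self, sock):
        self.sock = sock
        self.sock.setblocking(False)
        self.received = b""

    def poll_fds(self):
        # Like omcache_poll_fds: return (fds to poll, timeout in ms).
        return [self.sock.fileno()], 1000

    def io(self, timeout):
        # With timeout == 0 this never blocks; it only drains what's ready.
        assert timeout == 0
        try:
            self.received += self.sock.recv(4096)
        except BlockingIOError:
            pass

# A local socket pair stands in for a connection to a memcached server.
server_side, client_side = socket.socketpair()
client = FakeClient(client_side)
server_side.sendall(b"VALUE x 0 1\r\n1\r\nEND\r\n")

# The application polls with whatever mechanism it likes...
fds, timeout_ms = client.poll_fds()
readable, _, _ = select.select(fds, [], [], timeout_ms / 1000.0)
# ...then hands control back with a zero timeout to process the data.
if readable:
    client.io(timeout=0)

print(client.received)
```

Under gevent, the select.select call above is exactly the piece you would swap for a cooperative equivalent.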
omcache.py implements this by allowing users to pass it a select function which it'll then use to poll the fds, so enabling gevent support works like this:
import omcache, gevent.select
mc = omcache.OMcache(["127.0.0.1"], select=gevent.select.select)
mc.stat()
See https://github.com/ohmu/omcache/blob/master/omcache.py#L298
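That select-injection idea can be sketched in a few lines of plain Python; the class and method names below are hypothetical, not omcache.py's real internals:

```python
import select
import socket

class PollableClient:
    """Client that routes all waiting through an injected select function."""

    def __init__(self, sock, select_fn=select.select):
        self._select = select_fn   # gevent users pass gevent.select.select
        self._sock = sock
        self._sock.setblocking(False)

    def recv_response(self, timeout=1.0):
        # Every wait goes through the injected function, so under gevent
        # this becomes a cooperative yield instead of a blocking syscall.
        readable, _, _ = self._select([self._sock], [], [], timeout)
        if not readable:
            raise TimeoutError("no response within timeout")
        return self._sock.recv(4096)

# Demonstrate with a local socket pair in place of a memcached server.
server_side, client_side = socket.socketpair()
server_side.sendall(b"STORED\r\n")
client = PollableClient(client_side)
print(client.recv_response())
```

Under gevent you would construct it as PollableClient(sock, select_fn=gevent.select.select), which is the same shape as the OMcache(..., select=gevent.select.select) call above.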
The only problem with CFFI is that it is slower than a C extension for CPython, but all in all: wow. This is really well done. How stable is it?
Maybe we should test with https://github.com/douban/greenify? I'll be very surprised if that works.
It might well work; I don't see why not. It could have some unexpected consequences, though: it would mean each client instance is greenlet-local.
Is it a good idea? Not really! ;)
If it's greenlet local then using https://github.com/lericson/pylibmc/blob/master/src/pylibmc/pools.py#L55 with monkeypatched threads is what you want. Any other implications?
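For reference, the core of that thread-mapped pool idea can be sketched with threading.local, which becomes greenlet-local once gevent monkeypatches the threading module. The factory and client here are placeholders, not pylibmc's actual pool code:

```python
import threading

class ThreadMappedPool:
    """Each thread (or monkeypatched greenlet) gets its own private client."""

    def __init__(self, factory):
        self._factory = factory          # creates a fresh client instance
        self._local = threading.local()  # per-thread (per-greenlet) storage

    def get(self):
        # Lazily create one client per thread, so no instance is ever
        # shared across concurrent tasks.
        if not hasattr(self._local, "client"):
            self._local.client = self._factory()
        return self._local.client

pool = ThreadMappedPool(factory=lambda: object())

clients = {}

def worker(name):
    clients[name] = pool.get()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(clients[0] is clients[1])  # False: each thread saw its own instance
```

Within a single thread, repeated get() calls return the same instance, which is what avoids the shared-client race conditions discussed below.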
@lericson Do you think it's worthwhile to set up an acceptance test that will check whether greenify works with pylibmc?
@thedrow I'm not sure how we want this to work, especially with regards to threading problems. Reusing a single client across greenlets would lead to race conditions.
I guess what you want is, in a sense, a kind of queuing system. Dispatch memcached operation, switch greenlet, do other work, come back when results are in.
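That dispatch/switch/collect flow can be sketched with a thread pool standing in for greenlet switching; slow_get is a made-up stand-in for a memcached operation, not a real API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_get(key):
    time.sleep(0.05)  # pretend this is network I/O to memcached
    return f"value-for-{key}"

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_get, "user:42")  # dispatch the operation
    other_work = sum(range(1000))              # do other work meanwhile
    result = future.result()                   # come back when results are in

print(result)
```

With greenlets the switching would be cooperative rather than thread-based, but the shape is the same: the caller never blocks on the memcached socket while it has other work to do.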