bbangert / retools

Redis Tools
MIT License

Unify dogpile.cache and retools #4

Open · ergo opened this issue 12 years ago

ergo commented 12 years ago

Now that we have beaker, retools, and dogpile.cache by zzzeek, maybe it would be a good idea to implement retools caching as a dogpile extension, so we avoid fragmentation across solutions.

<Ergo^> zzzeek, hey zzzeek so what's the status on the beaker/retools/dogpile situation
<Ergo^> is there any chance that dogpile and retools could get unified at some point? It seems that we get multiple caching solutions that do the same
<Ergo^> (with some exceptions ofc)
<zzzeek> ive no idea what retools is
<Ergo^> hm, i think ive showed you - ben wrote a beaker drop-in replacement for redis
<Ergo^> + work queue - so it also serves as a replacement for celery
<Ergo^> zzzeek, https://github.com/bbangert/retools
<Ergo^> so currently we have beaker, retools and dogpile which roughly do the same - not debating implementation details here
<zzzeek> oh that thing
<zzzeek> well yes he'd build a backend that's for dogpile.cache
<zzzeek> he can leave it as retools
<zzzeek> and just provide a plugin
<zzzeek> this is what is awesome about dogpile.cache, there is *nothing* there practically
<zzzeek> its like, an interface, and a little code, and that's it

bbangert commented 12 years ago

dogpile.cache implements the dogpile lock at the per-machine level, while the retools lock lives in Redis and is global to the cluster. The global lock prevents the cache from being repopulated by N machines at once when a dogpile occurs, whereas a per-machine dogpile lock lets every single machine recreate the cache simultaneously.

It would be handy to have a more swappable lock implementation, as I also have zktools now, which implements a lock in Zookeeper: a global lock with no single point of failure.
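To make the distinction concrete: the swappable piece only needs to be an object with acquire()/release() that coordinates through Redis rather than a per-machine mutex. Below is a minimal sketch of such a cluster-global lock using redis-py; the RedisMutex name and key scheme are illustrative, not retools' actual Lock implementation.

```python
# A minimal sketch (not retools' actual code) of a cluster-global lock
# exposing acquire()/release(), so it can be dropped in anywhere a
# per-machine threading.Lock would otherwise be used. Assumes redis-py.
import time
import uuid

import redis


class RedisMutex(object):
    """Cluster-wide mutex: only one process across all machines holds it."""

    def __init__(self, client, key, timeout=60, poll=0.1):
        self.client = client
        self.key = "lock:" + key
        self.timeout = timeout    # seconds before the lock expires on its own
        self.poll = poll          # how often waiters re-check the lock
        self.token = uuid.uuid4().hex

    def acquire(self, blocking=True):
        while True:
            # SET key value NX EX timeout -- atomic "take the lock if free"
            if self.client.set(self.key, self.token, nx=True, ex=self.timeout):
                return True
            if not blocking:
                return False
            time.sleep(self.poll)

    def release(self):
        # Best-effort check: only delete the lock if we still own it.
        if self.client.get(self.key) == self.token.encode():
            self.client.delete(self.key)


if __name__ == "__main__":
    mutex = RedisMutex(redis.StrictRedis(), "my-cache-key")
    mutex.acquire()
    try:
        pass  # regenerate the cached value here
    finally:
        mutex.release()
```

Because the lock state lives in Redis, only one waiter across the whole cluster wins acquire() at a time, which is the property the per-machine dogpile lock can't give you.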

zzzeek commented 12 years ago

dogpile implements the lock that you pass into it:

http://packages.python.org/dogpile/usage.html#using-a-file-or-distributed-lock-with-dogpile

so if you pass in a magic_redis_ill_lock_on_the_server(), dogpile will use that.

dogpile.cache then exposes this with the get_mutex() method of CacheBackend:

http://packages.python.org/dogpile.cache/api.html#module-dogpile.cache.api

and of course dogpile.cache is just like 80 lines of glue code that I haven't even run yet so we can reorganize it in some other way if need be.

I'm planning on using the "memcache lock" scheme for the included memcached backend, for example.
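For reference, here is a rough sketch of what a retools-style dogpile.cache backend could look like. Only CacheBackend, NO_VALUE, and the get_mutex() hook come from dogpile.cache.api; the RetoolsBackend name, the pickling scheme, and the RedisMutex helper (the cluster-global lock sketched earlier in this thread) are illustrative assumptions.

```python
# Hypothetical dogpile.cache backend backed by Redis, with get_mutex()
# handing back a cluster-global lock instead of a per-machine one.
import pickle

import redis
from dogpile.cache.api import CacheBackend, NO_VALUE

# RedisMutex: the cluster-global acquire()/release() lock sketched earlier.


class RetoolsBackend(CacheBackend):
    def __init__(self, arguments):
        # `arguments` is the dict passed via configure(..., arguments={...}).
        self.client = redis.StrictRedis(
            host=arguments.get("host", "localhost"),
            port=arguments.get("port", 6379),
        )

    def get_mutex(self, key):
        # Give dogpile.cache a cluster-global lock for this key, so only
        # one process in the whole cluster regenerates an expired value.
        return RedisMutex(self.client, key)

    def get(self, key):
        value = self.client.get(key)
        if value is None:
            return NO_VALUE
        return pickle.loads(value)

    def set(self, key, value):
        self.client.set(key, pickle.dumps(value))

    def delete(self, key):
        self.client.delete(key)
```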

ergo commented 12 years ago

bumped for bbangert!

bbangert commented 12 years ago

I guess since there's already a global lock that works with caching in retools, I'm a bit confused what to do for this.

dogpile takes a lock function and you can use the one from retools, so what else is needed/desired?

zzzeek commented 12 years ago
  1. wait for me to write dogpile.cache
  2. write a dogpile.cache plugin in retools and add the entry point magic to the setup.py (see the sketch after this list)
  3. the plugin will return your locking object, anything that has acquire()/release() on it, as part of the plugin API
  4. plugin API is: http://readthedocs.org/docs/dogpilecache/en/latest/api.html#module-dogpile.cache.api
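A guess at the "entry point magic" in step 2: register the backend with setuptools so dogpile.cache can find it by name. The entry point group name "dogpile.cache" and the retools.cache_backend:RetoolsBackend path are assumptions based on how dogpile.cache discovers third-party backends.

```python
# Sketch of retools' setup.py addition; names and paths are hypothetical.
from setuptools import setup

setup(
    name="retools",
    # ... existing retools setup() arguments ...
    entry_points="""
    [dogpile.cache]
    retools = retools.cache_backend:RetoolsBackend
    """,
)
```

Assuming dogpile.cache resolves plugin backends by entry point name, an application could then configure a region with something like make_region().configure("retools", arguments={"host": "localhost"}).
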
bbangert commented 12 years ago

Ah good, I can add support to zktools as well.

bbangert commented 10 years ago

Welp, dogpile.cache was written afaik, now for someone to write a plugin. :)