ergo opened this issue 12 years ago (status: Open)
dogpile.cache implements the dogpile lock at the per-machine level, while the retools lock lives in Redis and is global to the cluster. The retools lock prevents the cache from being repopulated by N machines at once under the dogpile situation, whereas a per-machine dogpile lock results in every single machine in the cluster recreating the cache simultaneously.
It would be handy to have a more swappable lock implementation; I also have zktools now, which implements a lock in ZooKeeper for a global lock with no single point of failure.
dogpile uses whatever lock you pass into it:
http://packages.python.org/dogpile/usage.html#using-a-file-or-distributed-lock-with-dogpile
so if you pass in a magic_redis_ill_lock_on_the_server(), dogpile will use that.
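To make the idea concrete, here's a minimal sketch of a cluster-wide lock exposing the `acquire()`/`release()` interface that a passed-in dogpile lock needs. The class names are illustrative, and a stub dict-backed class stands in for a real Redis client (which would use `SETNX`/`DEL` for the same effect):

```python
import threading
import time

class FakeRedis:
    """Stub standing in for a Redis client; a real deployment would use
    redis-py, where SETNX gives the same set-if-absent semantics
    cluster-wide rather than per-process."""

    def __init__(self):
        self._data = {}
        self._guard = threading.Lock()

    def setnx(self, key, value):
        # Set key only if absent; return whether we won the race.
        with self._guard:
            if key in self._data:
                return False
            self._data[key] = value
            return True

    def delete(self, key):
        with self._guard:
            self._data.pop(key, None)

class RedisDogpileLock:
    """Illustrative mutex usable as a dogpile lock: acquire by claiming a
    sentinel key, release by deleting it."""

    def __init__(self, client, key, poll_interval=0.01):
        self.client = client
        self.key = key
        self.poll_interval = poll_interval

    def acquire(self, wait=True):
        while True:
            if self.client.setnx(self.key, "locked"):
                return True
            if not wait:
                return False
            time.sleep(self.poll_interval)

    def release(self):
        self.client.delete(self.key)
```

Since the sentinel key lives in the shared store, only one process in the whole cluster can hold the lock for a given cache key at a time, which is exactly the property the retools lock provides.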
dogpile.cache then exposes this with the get_mutex() method of CacheBackend:
http://packages.python.org/dogpile.cache/api.html#module-dogpile.cache.api
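A backend's `get_mutex()` hands dogpile.cache the lock to use for a given key. As a rough self-contained sketch (a real backend would subclass `dogpile.cache.api.CacheBackend` and could return a Redis- or ZooKeeper-backed lock instead of a `threading.Lock`):

```python
import threading

class SketchBackend:
    """Hypothetical backend illustrating the get_mutex() hook: return the
    same mutex for the same key so that concurrent regenerators of that
    key serialize on one lock."""

    def __init__(self):
        self._mutexes = {}
        self._guard = threading.Lock()

    def get_mutex(self, key):
        # Swapping in a distributed lock here is what would make the
        # dogpile protection cluster-wide instead of per-machine.
        with self._guard:
            return self._mutexes.setdefault(key, threading.Lock())
```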
and of course dogpile.cache is just ~80 lines of glue code that I haven't even run yet, so we can reorganize it some other way if need be.
I'm planning on using the "memcache lock" scheme for the included memcached backend, for example.
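The "memcache lock" scheme relies on memcached's `add()` command, which only succeeds when the key does not already exist. A hedged sketch, with an in-memory stub standing in for the memcached client and illustrative class names:

```python
import time

class FakeMemcache:
    """Stub standing in for a memcached client; real memcached's add()
    is atomic server-side, which is what makes this scheme safe across
    a cluster."""

    def __init__(self):
        self._data = {}

    def add(self, key, value):
        # Succeed only if the key is absent, like memcached's add().
        if key in self._data:
            return False
        self._data[key] = value
        return True

    def delete(self, key):
        self._data.pop(key, None)

class MemcachedLock:
    """Sketch of the 'memcache lock' scheme: acquire by add()-ing a
    sentinel key derived from the cache key, release by deleting it."""

    def __init__(self, client, key, poll_interval=0.01):
        self.client = client
        self.key = key + "_lock"
        self.poll_interval = poll_interval

    def acquire(self, wait=True):
        while True:
            if self.client.add(self.key, 1):
                return True
            if not wait:
                return False
            time.sleep(self.poll_interval)

    def release(self):
        self.client.delete(self.key)
```

In practice the sentinel key would also get an expiry time so a crashed process doesn't hold the lock forever.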
bumped for bbangert!
I guess since there's already a global lock that works with caching in retools, I'm a bit confused about what to do for this.
dogpile takes a lock function, you can use the one in retools, what else is needed/desired?
Ah good, I can add support to zktools as well.
Welp, dogpile.cache has been written, afaik; now for someone to write a plugin. :)
Now that we have beaker, retools, and dogpile.cache by zzzeek, maybe it would be a good idea to implement retools caching as a dogpile extension? That way we avoid fragmentation across solutions.