Since the data being written to the cache is functionally atomic, using self.mc_client.set_multi() should provide a modest reduction in traffic and protocol overhead.
I don't know memcached's internals well enough to say whether set_multi() would also improve integrity over the current write sequence when multiple clients race. That said, the current model only permits invalid references in the edge case where a key times out at the same moment it is read, and failover on a cache miss looks like it should work correctly.
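Roughly what I have in mind, as a minimal sketch assuming a python-memcached style client; the key names and values below are illustrative placeholders, not the ones used in this change:

```python
import memcache

# Assumed client setup; in the actual code this would be self.mc_client.
mc_client = memcache.Client(['127.0.0.1:11211'])

# The related writes from the current sequence, batched into one mapping.
# Hypothetical keys/values for illustration only.
payload = {
    'obj:42:body': 'serialized body',
    'obj:42:meta': 'serialized metadata',
}

# set_multi() sends the whole mapping in one round trip and returns the list
# of keys that failed to store, so the caller can retry or fall back to set().
failed = mc_client.set_multi(payload, time=600)
if failed:
    for key in failed:
        mc_client.set(key, payload[key], time=600)
```

The per-key fallback on failure is just one option; dropping the whole batch and letting the next read repopulate it would also fit the existing cache-miss path.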