mhgbrown / cached_resource

Caching for ActiveResource

:race_condition_ttl has no effect #19

Closed: meismann closed this issue 8 years ago

meismann commented 9 years ago

The abstract class ActiveSupport::Cache::Store prescribes several options, one of which is race_condition_ttl. However, when calling

cached_resource cache: ActiveSupport::Cache::MemoryStore.new(race_condition_ttl: 10)

in class context, that option has no effect. This is because the underlying cache does not know that a resource of the same kind is already being retrieved, so a new request is fired as soon as somebody tries to pull the resource from the cache again. This can be prevented by using ActiveSupport::Cache::Store#fetch instead of #write, because #write cannot yield a block.

In general, I do not think it is a good idea to take the decision "to reload or not to reload" away from the underlying cache object and reimplement that decision yourself with #read and #write. I am preparing a PR to fix this deficiency by switching to #fetch; a rough sketch of the idea follows below. Please let me know what you think.
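
To make the proposal concrete, here is a minimal sketch of the two approaches (illustrative only: the key name, expiry values and the fetch_person stand-in are assumptions, not cached_resource's actual internals):

```ruby
require "active_support"
require "active_support/cache"

cache = ActiveSupport::Cache::MemoryStore.new

# Stand-in for the expensive ActiveResource request, e.g. Person.find(1).
fetch_person = -> { sleep 1; { "id" => 1, "name" => "Ada" } }

# Current approach: the caller decides whether to reload. The store only
# sees isolated #read and #write calls, so options like :race_condition_ttl
# never come into play.
value = cache.read("person/1")
if value.nil?
  value = fetch_person.call
  cache.write("person/1", value, expires_in: 60)
end

# Proposed approach: hand the request to the store as a block. #fetch only
# yields when the entry is missing or expired, so the store itself decides
# whether to reload and can honour :race_condition_ttl.
value = cache.fetch("person/1", expires_in: 60, race_condition_ttl: 10) do
  fetch_person.call
end
```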

mhgbrown commented 9 years ago

Thanks for your submission. This definitely sounds interesting. Would you mind providing a more concrete example so I can understand what you are talking about better?

meismann commented 9 years ago

I think the problem should be obvious once the function of :race_condition_ttl is clear. However, here is an example:

This is how it currently works, given a resource whose request takes 5 seconds to return:

| time in secs | request / cache operation |
| --- | --- |
| 0 | Resource is read from the cache, is nil, therefore request A is fired |
| 2 | Resource is read from the cache, is still nil, therefore request B is fired (the resource cannot be in the cache yet: request A will take three more seconds to return and write to the cache) |
| 5 | Request A returns, writes its reply to the cache and delivers the result to the code that asked for it in second 0 |
| 7 | Request B returns, writes its reply to the cache and delivers the result to the code that asked for it in second 2 |

The fact that request B could occur at all is called a "race condition". This is undesirable because the same expensive request is fired once per caller until the first reply lands in the cache; the sketch below simulates this.
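
For concreteness, a minimal, runnable simulation of that timeline against a MemoryStore (the lambda stands in for the slow ActiveResource request; the key and timings are illustrative):

```ruby
require "active_support"
require "active_support/cache"

cache = ActiveSupport::Cache::MemoryStore.new

backend_requests = 0
slow_request = lambda do
  backend_requests += 1    # a real ActiveResource call would hit the API here
  sleep 5
  "payload"
end

# Two callers ask for the same resource two seconds apart, as in the timeline.
threads = [0, 2].map do |delay|
  Thread.new do
    sleep delay
    value = cache.read("person/1")
    if value.nil?                  # the cache cannot tell us a request is in flight
      value = slow_request.call
      cache.write("person/1", value)
    end
    value
  end
end

threads.each(&:join)
puts backend_requests # => 2 -- the backend was hit once per caller
```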

Now consider the same setup, but with race_condition_ttl: 5 set:

| time in secs | request / cache operation |
| --- | --- |
| 0 | Resource is read from the cache, is nil, therefore request A is fired; the cache blocks any further requests |
| 2 | Resource is read from the cache, which is still empty. The read blocks for up to race_condition_ttl seconds, to give request A enough time to return |
| 5 | Request A returns, writes its reply to the cache and delivers the result to the code that asked for it in second 0 and in second 2 |

Please note that in second 2 the cache knows that a request has already been fired; only for that reason does it block the second read. In our Rails scenario the cache knows this because it executes the request itself, by yielding to the block passed to #fetch. When you only #read the cache and then fire the request unbeknownst to the cache, it has no chance of knowing that it has to wait for that request to return and block until then. A sketch of the #fetch-based behaviour is below.
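
For comparison, a minimal sketch of #fetch with :race_condition_ttl on a MemoryStore (key, values and timings are illustrative; the entry is seeded and allowed to expire first, since that is the situation the option is designed for):

```ruby
require "active_support"
require "active_support/cache"

cache = ActiveSupport::Cache::MemoryStore.new

backend_requests = 0
slow_request = lambda do
  backend_requests += 1
  sleep 2
  "fresh payload"
end

# Seed the entry, then let it expire.
cache.write("person/1", "stale payload", expires_in: 0.1)
sleep 0.2

t1 = Thread.new do
  cache.fetch("person/1", expires_in: 60, race_condition_ttl: 10) { slow_request.call }
end

t2 = Thread.new do
  sleep 0.5 # arrives while t1 is still regenerating the entry
  cache.fetch("person/1", expires_in: 60, race_condition_ttl: 10) { slow_request.call }
end

puts t1.value.inspect  # => "fresh payload" (after roughly 2 seconds)
puts t2.value.inspect  # => "stale payload" (answered immediately from the cache)
puts backend_requests  # => 1 -- only one backend request was fired
```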

meismann commented 9 years ago

Here is a description of the race_condition_ttl option: http://guides.rubyonrails.org/caching_with_rails.html#activesupport-cache-store

mhgbrown commented 9 years ago

Thanks for the explanation. I haven't encountered that before! What does this mean for compatibility?

meismann commented 9 years ago

Compatibility with what?

mhgbrown commented 9 years ago

With Rails for example. Would making this change break v4.0.x or 3.0.x or something else? Has the fetch method always existed/yielded in ActiveSupport::Cache::Store (well, its implementations)?

meismann commented 9 years ago

Regarding your latter question: the fetch method has existed, and has yielded, at least since the 3-0-stable branch. A quick look at the source will confirm it: https://github.com/rails/rails/blob/3-0-stable/activesupport/lib/active_support/cache.rb

mhgbrown commented 9 years ago

Ok great.

Daniel-ltw commented 8 years ago

Implemented and put in place in version 4.2.