jpodwys / cache-service

A tiered caching solution for JavaScript.
MIT License

cache-service.set() calls cb() when first cache module set() completes #22

Open JaredReisinger opened 7 years ago

JaredReisinger commented 7 years ago

At present, the user-supplied callback is passed directly to the first cache module in the list (https://github.com/jpodwys/cache-service/blob/master/cacheService.js#L158). This means that the calling code continues before the value has been set in all the caches, and there's no easy way to know when the value has been fully propagated. Also, any errors from cache modules 2-N are completely lost.
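To illustrate, this is roughly what the current flow amounts to (a simplified sketch with assumed internals, not the actual cacheService.js code):

```js
// Simplified sketch of the behavior described above (assumed internals,
// not the real cacheService.js source).
function CacheService(caches) {
  this.caches = caches; // array of cache modules, primary first
}

// The caller's cb goes only to the first cache module; every other cache is
// fire-and-forget, so their errors never reach the caller.
CacheService.prototype.set = function (key, value, expiration, cb) {
  this.caches.forEach(function (cache, index) {
    if (index === 0) {
      // cb fires as soon as the primary cache finishes its set.
      cache.set(key, value, expiration, cb);
    } else {
      // Errors from caches 2 through N are silently dropped.
      cache.set(key, value, expiration, function () {});
    }
  });
};
```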

If the caller doesn't want the complete/final result, they don't need to pass a callback at all. If they do pass one, it should probably only be called once all the cache modules have completed. To me, that's a pretty easy design choice. More difficult is what to do about partial failures: if two of three cache modules have an error on set, what error (if any) should be passed to the user-supplied callback? (There are also cache module synchronization issues in this case, but it's probably up to the caller/application to understand what the "right thing" to do would be.)

I have an idea for how the .set() callback could be handled, and will try to put together a PR that addresses it.
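Building on the sketch above, I'm picturing something along these lines (just a sketch to frame the discussion, not the eventual PR): run set() against every cache, wait for all of them to respond, and hand any errors back to the caller instead of swallowing them.

```js
// Possible shape for the change: count completions across all caches and
// only invoke the caller's cb once every cache has responded.
CacheService.prototype.set = function (key, value, expiration, cb) {
  var remaining = this.caches.length;
  var errors = [];
  this.caches.forEach(function (cache, index) {
    cache.set(key, value, expiration, function (err) {
      if (err) errors.push({ cacheIndex: index, error: err });
      if (--remaining === 0 && typeof cb === 'function') {
        // What to report on partial failure is the open question above;
        // this sketch simply passes the full list (or null on success).
        cb(errors.length ? errors : null);
      }
    });
  });
};
```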

jpodwys commented 7 years ago

Executing the .set() callback after the first cache completes its set was an intentional decision, though perhaps not the right one, made to improve speed. I assumed that, as long as the set succeeded against the primary cache (so that subsequent gets to cache-service would retrieve the expected data), users wouldn't want to wait for a distributed cache, or in your case a write to disk, to complete before continuing.

However, when I first published this in early 2015, I didn't consider the fact that cache-service swallows errors encountered by caches other than the primary cache. That's a compelling reason to change the existing behavior. Additionally, in my experience, the extra response time from waiting for set commands to propagate to all caches is often negligible with a good redis/memcached provider (between 5ms and 50ms when using Heroku and a Heroku add-on cache instance, though slower in many other cases).

On the other hand, switching this behavior will make setting slower for all cache-service users. I'm wondering whether delaying .set() callbacks by default is acceptable, or whether we should make this feature opt-in so we don't delay callback execution for everyone. After all, 50+ms may be a realistic and unacceptable increase in response time for some consumers.
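If we did go the opt-in route, I'm picturing a branch roughly like this, where `waitForAllCaches`, `this.settings`, and the `setAll` helper (the wait-and-aggregate approach sketched earlier in this thread) are placeholder names for illustration, not anything that exists today:

```js
// Hypothetical opt-in flag: default behavior stays fast, callers who need
// full propagation ask for it explicitly.
CacheService.prototype.set = function (key, value, expiration, cb) {
  if (this.settings && this.settings.waitForAllCaches) {
    // Opt-in path: wait on every cache and surface all errors.
    return this.setAll(key, value, expiration, cb);
  }
  // Default path: today's behavior, where cb fires after the primary cache
  // and the remaining caches are fire-and-forget.
  this.caches.forEach(function (cache, index) {
    cache.set(key, value, expiration, index === 0 ? cb : function () {});
  });
};
```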

That being said, npm-stat shows very few users of this component right now (1,647 downloads in January). Perhaps making this behavior opt-in is unimportant, especially given the extra code complexity and testing burden it would introduce.

What are your thoughts?