Scenario
With FusionCache it's very easy to handle 2 cache levels at the same time: the 1st level is a memory cache, while the 2nd is a distributed cache. We just specify a distributed cache during the initial setup and FusionCache automatically coordinates the dance between the 2 layers in an optimized way, without us having to do anything extra.
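For context, a minimal 2-level setup might look something like the sketch below. It is only a sketch: the distributed cache used here (an in-memory MemoryDistributedCache, purely for the example) and the serializer are stand-ins for whatever implementations are actually used, and the exact shape of the GetOrSet factory delegate differs slightly between FusionCache versions.

```csharp
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;
using ZiggyCreatures.Caching.Fusion;
using ZiggyCreatures.Caching.Fusion.Serialization.SystemTextJson;

// 1st level: the memory cache is created automatically by FusionCache
var cache = new FusionCache(new FusionCacheOptions());

// 2nd level: any IDistributedCache plus a serializer
// (an in-memory IDistributedCache is used here only to keep the sketch self-contained)
var distributedCache = new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()));
cache.SetupDistributedCache(distributedCache, new FusionCacheSystemTextJsonSerializer());

// from here on every operation transparently goes through both levels
var product = cache.GetOrSet<string>(
    "product:42",
    _ => "value loaded from the database"
);
```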
Problem
FusionCache v0.18.0 introduced the SkipDistributedCache option, which allows skipping the distributed cache, even granularly for specific operations.
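For reference, skipping the distributed cache for specific operations can be done by setting that option on the entry options passed to those calls; the following is only a rough sketch, with made-up keys and durations.

```csharp
using System;
using ZiggyCreatures.Caching.Fusion;

// an IFusionCache instance, set up with a distributed cache as in the sketch above
IFusionCache cache = new FusionCache(new FusionCacheOptions());

// skip the 2nd (distributed) level just for these specific operations
var entryOptions = new FusionCacheEntryOptions(TimeSpan.FromMinutes(5))
{
    SkipDistributedCache = true
};

cache.Set("product:42", "some value", entryOptions);
var value = cache.GetOrDefault<string>("product:42", options: entryOptions);
```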
It seems there are scenarios (like this) where the same thing may be needed for the memory cache: one example is a serverless/lambda architecture, where everything is distributed and local memory comes with constraints such as low availability, higher billing profiles and so on.
Currently the memory cache is unskippable, so there's no way around it.
Solution
Add a new SkipMemoryCache option on FusionCacheEntryOptions, to allow both granular per-operation control and a global default via the usual DefaultEntryOptions.
On top of the option itself, a SetSkipMemoryCache(...) method should be added to the FusionCacheEntryOptions class, to set the option value in a fluent way like the other existing methods.
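To make the request concrete, usage might look something like the sketch below. This is purely hypothetical: neither the SkipMemoryCache option nor the SetSkipMemoryCache(...) method exists yet, and the final naming and shape would of course be up to the library.

```csharp
using System;
using ZiggyCreatures.Caching.Fusion;

// global default: skip the 1st (memory) level everywhere, e.g. in a serverless/lambda setup
var cache = new FusionCache(new FusionCacheOptions
{
    DefaultEntryOptions = new FusionCacheEntryOptions
    {
        Duration = TimeSpan.FromMinutes(5),
        SkipMemoryCache = true // proposed option, does not exist yet
    }
});

// granular, per-operation control via the proposed fluent method
var value = cache.GetOrSet<string>(
    "product:42",
    _ => "value loaded from the database",
    new FusionCacheEntryOptions(TimeSpan.FromMinutes(5)).SetSkipMemoryCache(true) // proposed method
);
```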