Hi @petar-maletic, this is expected, since you also enabled fail-safe.
When fail-safe is enabled, the actual duration in the cache becomes the FailSafeMaxDuration (which is 24h by default): this is done so that a "logically expired" value remains usable in case of problems down the road. To do that, FusionCache stores a LogicalExpiration in the cache entry, which acts as the intended Duration, while actually using the FailSafeMaxDuration as the underlying duration in the cache.
Just to be clear: when asking for a cached value that is already logically expired (but still in the cache due to fail-safe), it will be treated as effectively expired, so you won't see stale values unless you also enabled fail-safe in that call and the factory fails at that moment. From the outside, everything behaves the same.
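To make it concrete, here's a minimal sketch of what that looks like from the caller's side (assuming a plain FusionCache instance and the Action-based Set overload with the SetDuration/SetFailSafe builders; the key and value are made up):

using System;
using ZiggyCreatures.Caching.Fusion;

var cache = new FusionCache(new FusionCacheOptions());

// Logical duration: 5 minutes. With fail-safe enabled, the entry is physically kept
// for FailSafeMaxDuration (24h by default), so that is the TTL in the underlying cache,
// while the stored LogicalExpiration still marks it as expired after 5 minutes.
cache.Set(
    "product:42",
    "some value",
    options => options
        .SetDuration(TimeSpan.FromMinutes(5))
        .SetFailSafe(true)
);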
Let me know if it's clear: have I been able to clarify how it works? I may also augment the fail-safe docs to explain what happens behind the covers, what do you think?
Hope this helps.
Hi and ty for your answer.
Yeah, I kinda overlooked that doc, but now I understand: if fail-safe is enabled, the value is kept for fail-safe purposes for FailSafeMaxDuration, and if anything triggers fail-safe, that value is used and put back into the cache for the FailSafeThrottleDuration duration.
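As a hypothetical sketch of that last part (assuming the GetOrSet overload whose factory receives a context and a cancellation token, plus a made-up FetchFromDatabase() call):

using System;
using ZiggyCreatures.Caching.Fusion;

var cache = new FusionCache(new FusionCacheOptions());

// Hypothetical data-access call that may start throwing (e.g. the database goes down).
static string FetchFromDatabase() => "some value";

// First call: the factory succeeds and the value is cached with a logical Duration of 5 minutes.
var value = cache.GetOrSet<string>(
    "product:42",
    (ctx, ct) => FetchFromDatabase(),
    options => options
        .SetDuration(TimeSpan.FromMinutes(5))
        .SetFailSafe(true)
);

// Later, once the 5 minutes have logically expired: if FetchFromDatabase() throws during the
// next GetOrSet, fail-safe returns the stale value instead of failing, and keeps re-using it
// for FailSafeThrottleDuration before trying the factory again.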
One thing that still confuses me: if I specify DistributedCacheDuration (5 min) with fail-safe enabled, it picks up FailSafeMaxDuration, which now makes sense after your explanation. But if I completely remove the DistributedCacheDuration setting while fail-safe is still enabled, the TTL I see in Redis is not 24h, it is now 5 min (the Duration setting).
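For reference, these are the two setups being compared, written as entry options in code rather than JSON (just a sketch; FailSafeMaxDuration is left at its 24h default in both cases):

using System;
using ZiggyCreatures.Caching.Fusion;

// Scenario A: DistributedCacheDuration set -> the observed Redis TTL is 24h (FailSafeMaxDuration),
// which matches the explanation above.
var optionsA = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5),
    DistributedCacheDuration = TimeSpan.FromMinutes(5),
    IsFailSafeEnabled = true,
};

// Scenario B: DistributedCacheDuration removed, fail-safe still enabled -> the observed Redis TTL
// is 5 minutes (the Duration), which is the surprising part.
var optionsB = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5),
    IsFailSafeEnabled = true,
};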
Hi @petar-maletic , somehow I missed your last reply, sorry for that.
... but if I completely remove DistributedCacheDuration setting and FailSafe is still enabled, TTL I see in Redis is not 24h, it is now 5mins (Duration setting).
This is strange: may I ask how the other options are set?
The relevant code is this, and as you can see this is what happens (comments added for the important parts):
var res = new DistributedCacheEntryOptions();

// PHYSICAL DURATION
TimeSpan physicalDuration;
TimeSpan durationToUse;

// FALLBACK LOGIC (DISTRIBUTED -> NORMAL)
durationToUse = DistributedCacheDuration ?? Duration;

if (IsFailSafeEnabled == false)
{
    // FAIL-SAFE DISABLED -> PHYSICAL DURATION IS THE SAME AS THE LOGICAL ONE
    physicalDuration = durationToUse;
}
else
{
    // FAIL-SAFE ENABLED -> FALLBACK LOGIC (DISTRIBUTED -> NORMAL)
    var failSafeMaxDurationToUse = DistributedCacheFailSafeMaxDuration ?? FailSafeMaxDuration;

    // INCOHERENT CHECK... MAYBE IT'S THIS?
    if (failSafeMaxDurationToUse < durationToUse)
    {
        physicalDuration = durationToUse;

        // LOGGING HERE...
    }
    else
    {
        physicalDuration = failSafeMaxDurationToUse;
    }
}

res.AbsoluteExpiration = FusionCacheInternalUtils.GetNormalizedAbsoluteExpiration(physicalDuration, this, false);

return res;
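Just to reason about it, here is a tiny stand-alone re-implementation of only that selection logic (a hypothetical helper, not part of FusionCache), with the values from this issue plugged in:

using System;

static TimeSpan GetPhysicalDuration(
    TimeSpan duration,
    TimeSpan? distributedCacheDuration,
    bool isFailSafeEnabled,
    TimeSpan failSafeMaxDuration)
{
    // FALLBACK LOGIC (DISTRIBUTED -> NORMAL)
    var durationToUse = distributedCacheDuration ?? duration;

    if (isFailSafeEnabled == false)
        return durationToUse;

    // FAIL-SAFE ENABLED: use the fail-safe max duration, unless it is (incoherently)
    // smaller than the normal duration
    return failSafeMaxDuration < durationToUse
        ? durationToUse
        : failSafeMaxDuration;
}

// With your settings (Duration = 5 min, default FailSafeMaxDuration = 24h), both cases
// should come out as 24h:
Console.WriteLine(GetPhysicalDuration(TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5), true, TimeSpan.FromHours(24))); // 1.00:00:00
Console.WriteLine(GetPhysicalDuration(TimeSpan.FromMinutes(5), null, true, TimeSpan.FromHours(24)));                    // 1.00:00:00

So removing DistributedCacheDuration alone should not change the physical duration, which is why I'd like to see the full set of options being used.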
Can you spot anything that points to a solution? Or maybe you already solved it in the 2 weeks that have passed (sorry again 😅)?
Let me know.
No answer in 2 weeks, so I'm closing this (but I'll reopen it if needed).
Describe the bug
Whenever I set the DistributedCacheDuration TimeSpan to any value, the TTL in Redis gets set to 24h. My config example is:
"LockTimeout": "00:01:00", "Duration": "00:05:00", "JitterMaxDuration": "00:00:05", "IsFailSafeEnabled": true, "FailSafeThrottleDuration": "00:01:00",
"DistributedCacheDuration": "00:05:00", "ReThrowDistributedCacheExceptions": true
Expected behavior
TTL to be 5mins.
Versions
I've encountered this issue on: