Closed abatishchev closed 7 years ago
LazyCache does not accept a delegate for the policy. Can you give a bit more info on why you might want to pass a policy factory rather than just creating a policy that might just be ignored if the cache is hit? Is this related to the use case in Issue #5 ?
This just slows down reads from the cache: before the cached value is read and returned, a new policy is created and then ignored. The more hits a key gets, the more objects are created and thrown away.
This might make sense for keys with an expiration: once a key has expired, the freshly created policy will actually be used. But for keys with no expiration it is pure waste.
Do you have an actual bug/issue caused by the allocations? Being a garbage collected language you would have to be doing some pretty serious performance optimisations for those to be worth the effort to minimise. As we are using MemoryCache under the hood LazyCache would have the same behaviour.
On 4 Oct 2016 15:34, "Alexander Batishchev" notifications@github.com wrote:
The cost is small if the call is just `GetOrAdd(key, valueFactory, new ItemCachePolicy())`. But it can also be `GetOrAdd(key, valueFactory, CreateItemCachePolicy())`, which can take arbitrarily long: it might even go to a database or make an HTTP call. And it will do that on every call; nothing prevents it, and nothing in the API signals that it shouldn't.
I realize this is yet another serious design flaw of MemoryCache itself. The main one is `AddOrGetExisting()` accepting an `object` rather than a `Func<object>`, which LazyCache already mitigates perfectly. So by mitigating one more, adding a `Func<ItemCachePolicy>` overload, you would add even more value to LazyCache!
Sorry if this seems painful, but I still don't think I understand the actual use case: why would generating the policy be expensive? For example, why would it require a DB/HTTP call as in your example?
It doesn't require an external call, but it might make one, because nothing prevents it. In my case it's relatively simple, mid-size logic that determines the policy based on the key and `typeof(T)`. But at the same time it's not as cheap and straightforward as `new ItemCachePolicy { ... }`.
Fixing one major design flaw of the underlying framework after another would be a huge win for LazyCache over raw MemoryCache, effectively making the latter unusable without the former. Like MemoryCache on steroids :)
I just don't believe that removing the unnecessary allocations of a cache policy is a worthwhile optimisation without any evidence that replacing it with a lambda makes a measurable improvement. In addition, the lambda is not "free", and there are some pitfalls with lambdas and closures that suggest caution in their use does have some merit - see https://blog.jetbrains.com/dotnet/2014/07/24/unusual-ways-of-boosting-up-app-performance-lambdas-and-linqs/ for more info. I am going to close this issue for now, but I'm happy to reopen it if you can produce a test case that demonstrates the benefits of your proposal. Thanks for the feedback.
Oh, come on! Calling a method every time is obviously more expensive than a single lambda allocation. You can use a delegate if you believe that a lambda is expensive.
Here's a simple test:

```csharp
// Every call pays the full cost of CreatePolicy(),
// even when the key is already cached.
cache.GetOrAdd(key, () => value, CreatePolicy());

ItemCachePolicy CreatePolicy()
{
    Thread.Sleep(1000); // simulates an expensive policy computation
    return new ItemCachePolicy();
}
```
And again, there is not much of a problem with a plain `new ItemCachePolicy()` passed to the method, compared to calling a method that returns an `ItemCachePolicy`.
I'm really struggling to understand in what realistic use case someone would have a factory method for their policy that would be sufficiently complex as to warrant this optimisation? Without a use case it feels a bit like premature optimisation.
Hi! First, great little library. I've loved using it in our project and being lazy!
I think I do actually have a real use case for this functionality. I've created a `ChangeMonitor` based on the solution proposed here: http://stackoverflow.com/a/22388943/810667
Essentially, every time the change monitor is created, we subscribe to an event handler that can be fired to manually expire the cache. It seems to me that it would be better not to create the `ChangeMonitor` instance if we're not creating a new cache entry, thus saving cycles on the expiration side when the cache is manually expired.
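For reference, a minimal sketch of the kind of change monitor the linked SO answer describes (the class and member names here are illustrative, not part of LazyCache):

```csharp
using System;
using System.Runtime.Caching;

// A change monitor that evicts its cache entry when a shared static
// event is signalled with a matching name.
public class SignaledChangeMonitor : ChangeMonitor
{
    private static event EventHandler<string> Signaled;
    private readonly string _name;

    public SignaledChangeMonitor(string name)
    {
        _name = name;
        UniqueId = Guid.NewGuid().ToString();
        Signaled += OnSignalRaised;
        // The ChangeMonitor contract requires this call at the
        // end of the constructor.
        InitializationComplete();
    }

    public override string UniqueId { get; }

    // Fire the event to invalidate every entry monitored under `name`.
    public static void Signal(string name) => Signaled?.Invoke(null, name);

    private void OnSignalRaised(object sender, string name)
    {
        if (string.Equals(name, _name, StringComparison.Ordinal))
            OnChanged(null); // tells MemoryCache to evict the entry
    }

    protected override void Dispose(bool disposing) => Signaled -= OnSignalRaised;
}
```

An entry opts in by adding the monitor to its policy, e.g. `policy.ChangeMonitors.Add(new SignaledChangeMonitor("reports"))`, and `SignaledChangeMonitor.Signal("reports")` then expires all such entries at once. Note that every monitored entry carries an event subscription, which is exactly the allocation this thread is about avoiding on cache hits.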
Thoughts?
Also, thanks for the link on lambdas and performance! Very instructive.
Thanks @itslittlejohn, glad it helped. Interestingly, I have been playing with a "delete everything" type feature. Have a look at this class on a separate branch: https://github.com/alastairtree/LazyCache/blob/e270bd33f59e9c94bdaf06d644938521f06e82a9/LazyCache/InMemoryCacheManager.cs
I was thinking of implementing and comparing a "dispose the MemoryCache and create a new one" solution against a change-monitor-based solution like yours, and seeing which had the nicer experience and better performance before merging into master. Change monitors have the advantage that they could also enable a regions feature.
That's great! It looks promising, and I think it would be a great optional addition to the library. It seems to me that you could have both.
I propose adding `IAppCache.Clear(string regionName = null)`.
If no region is specified, using the dispose approach would probably make the most sense in terms of performance, although it seems that calling `Dispose` does invoke the `CacheEntryRemovedCallback` delegate if one is set. Not an issue, but a consideration?
If a region is specified, you'd do the approach similar to what I've implemented and is outlined in the above SO post. When adding an item to the cache, you'd only create a change monitor for it if a region is specified (which would be an additional optional param that would need to be added to a few methods).
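Put together, the two approaches could be combined roughly like this. This is only a sketch: `RegionedCache`, the `regionName` parameter, and the `SignaledChangeMonitor` class (the name used in the linked SO answer) are hypothetical, not existing LazyCache API:

```csharp
using System.Runtime.Caching;

// Hypothetical sketch combining "dispose and recreate" for a full clear
// with change-monitor signalling for per-region clears.
public class RegionedCache
{
    private MemoryCache _cache = new MemoryCache("LazyCache");

    public void Add(string key, object item, CacheItemPolicy policy,
                    string regionName = null)
    {
        // Only pay for a change monitor when the entry belongs to a region.
        if (regionName != null)
            policy.ChangeMonitors.Add(new SignaledChangeMonitor(regionName));
        _cache.Add(key, item, policy);
    }

    public void Clear(string regionName = null)
    {
        if (regionName != null)
        {
            // Evicts only entries whose monitor was created for this region.
            SignaledChangeMonitor.Signal(regionName);
        }
        else
        {
            // "Dispose and recreate" for clearing everything. Note that
            // Dispose fires CacheEntryRemovedCallback on remaining entries.
            var old = _cache;
            _cache = new MemoryCache("LazyCache");
            old.Dispose();
        }
    }
}
```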
Region functionality is something that we need in our application, so if you're open to contributions, I could submit what I have once I get something working. I'd prefer to have it integrated, or at least in the interface.
One caveat is that there seem to be some issues with `MemoryCache.Default` and `Dispose`. I guess you'd just need to be careful not to mix `MemoryCache.Default` and `IAppCache`...
If something like this were implemented internally, then I agree with you that I see little reason to convert the `CacheItemPolicy` param to a delegate.
Hi, the problem with the built-in MemoryCache is that its `GetOrAdd()` doesn't have an overload accepting `Func<CacheItemPolicy>`, so it creates an object on each call even when the result is already cached, effectively discarding that object most of the time. Is this the case for LazyCache as well? Can this be addressed by the library? Or is this a built-in problem that can only be solved by the framework itself?
The proposed API would look like this:
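A sketch of what such an overload might look like, assuming the existing `GetOrAdd` shape; the exact names are illustrative:

```csharp
using System;
using System.Runtime.Caching;

public interface IAppCache
{
    // Existing shape: the policy argument is evaluated by the caller
    // on every invocation, even on a cache hit.
    T GetOrAdd<T>(string key, Func<T> valueFactory, CacheItemPolicy policy);

    // Proposed: the policy factory runs only on a cache miss,
    // mirroring how valueFactory is already deferred.
    T GetOrAdd<T>(string key, Func<T> valueFactory,
                  Func<CacheItemPolicy> policyFactory);
}
```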