Closed: mkArtakMSFT closed this issue 2 years ago
Thanks for contacting us.
We're moving this issue to the Next sprint planning milestone for future evaluation / consideration. We will evaluate the request when we are planning the work for the next milestone. To learn more about what to expect next and how this issue will be handled you can read more about our triage process here.
Recently I was asked to add response caching in our netcoreapp3.1 microservices using Redis as storage, and found out that there is no extension point to extend the ResponseCachingMiddleware behavior (changing storage, actually), and all similar issues link to this one.
What ResponseCachingMiddleware has (which we wouldn't want to duplicate) is:
What ResponseCachingMiddleware should allow doing (IMHO) is:
So, my suggestions are as follows:
I couldn't see the benefit you'd gain from writing a brand new middleware.
I can't judge whether these features should be added to the existing response cache library or whether it should be a separate library. But I would say the following features should be a first-class citizen:
The Location option could be set to Server; this doesn't exist for ResponseCache.
I needed to disable caching when some data are added for some admin profiles. I searched all around and was finally directed to this post. Would appreciate this feature being implemented ASAP.
I needed to disable caching when some data are added for some admin profiles. [...]
@rahou25 This should be controllable by just setting the appropriate HTTP headers.
If that is not sufficient, you can also just add the Response Caching Middleware conditionally using app.UseWhen() in your Configure() method.
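As a minimal sketch of that conditional registration (the "/admin" path predicate is an assumption for illustration only):

```csharp
// Sketch: enable the Response Caching Middleware only for requests
// matching a predicate. The "/admin" exclusion is illustrative.
public void Configure(IApplicationBuilder app)
{
    app.UseWhen(
        context => !context.Request.Path.StartsWithSegments("/admin"),
        branch => branch.UseResponseCaching());

    // Remaining pipeline registrations (routing, endpoints, etc.) go here.
}
```

Requests matching the predicate go through a pipeline branch that includes the caching middleware; all other requests bypass it entirely.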
FYI/FWIW, @madskristensen has a package that looks like it's doing Output Caching in the traditional sense. https://github.com/madskristensen/WebEssentials.AspNetCore.OutputCaching
Looks like it's depending on MVC though, which I imagine wouldn't be a goal of a new middleware.
And I'm not sure how compatible it is beyond ASP.NET Core 2 without giving that a go.
He did push an update 17 days ago though, so perhaps he's got some thoughts.
@benmccallum Nice find! 😄
Looks like it's depending on MVC though, which I imagine wouldn't be a goal of a new middleware.
Actually, the repo readme states that one can also use it in other stacks:
Using the EnableOutputCaching extension method on the HttpContext object in MVC, WebAPI or Razor Pages
(bold is mine for emphasis).
Additionally, found another implementation that may achieve the same effect: (haven't tested it myself) https://github.com/speige/AspNetCore.ResponseCaching.Extensions https://www.nuget.org/packages/AspNetCore.ResponseCaching.Extensions
Mentioned by author in SO response: https://stackoverflow.com/a/48822159/198797
Those all take a dependency on MVC though (at least I think Pages does, and MVC/WebAPI are the same thing in Core).
I guess what I meant is that output caching will likely function without any of that, e.g. based only on middleware, so it could cache the result of a middleware that's an inline delegate for all it knows!
We'll see what the team comes up with, but that'd be great for custom middleware that doesn't need all the MVC stuff, like writing a robots.txt for instance :)
Yeah, anything we would do would be decoupled from MVC/WebAPI/Razor Pages (yes, they're all the same thing) and operate at the middleware/request pipeline level. We'd very likely provide mechanisms to provide metadata that controls the behavior for specific resources from within the context of MVC assets though, similar to what we do today for CORS and authorization, e.g. an [OutputCache] attribute on controllers/action methods, an @OutputCache directive in Razor Pages, a .WithOutputCache() extension method for route handlers, etc.
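To make those shapes concrete, here are illustrative fragments; the names ([OutputCache], .WithOutputCache()) mirror the comment above and are not a committed API surface.

```csharp
// Illustrative fragments only, not a shipped API.

// Attribute on a controller action:
[OutputCache]
public IActionResult Index() => View();

// Extension method on a minimal-API route handler:
app.MapGet("/robots.txt", () => "User-agent: *")
   .WithOutputCache();
```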
Take a look at https://github.com/thepirat000/CachingFramework.Redis, it allows specifying tags, so you can invalidate cache by tag. I have two attributes, first to store and read from cache, second to invalidate the cache. I'm sure this won't work for all, but we need a way to invalidate the cache as soon as possible.
sample usage:
```csharp
[HttpGet(Name = "GetAll")]
[RedisCache(60 * 60, "Users", "All")]
public async Task<IEnumerable<User>> Get()
{
    return await _usersRepository.Get();
}

[HttpGet("{userId}", Name = nameof(GetUserDetails))]
[RedisCache(60 * 60, "Users", "Single")]
public async Task<User> GetUserDetails([FromRoute] int userId)
{
    return await _usersRepository.Get(userId);
}

[HttpPost]
[ClearRedisCache(StatusCodes.Status201Created, "Users")]
public async Task<IActionResult> AddUser([FromBody] User user)
{
    await _usersRepository.Add(user);
    var routeValues = new { userId = user.Id };
    return CreatedAtRoute(nameof(GetUserDetails), routeValues, user);
}
```
Is the OutputCaching something currently being looked into for .NET 7?
Any place we can start contributing at?
We are planning on shipping this feature as part of .NET 7. I'll update this thread next week with some initial thoughts in order to gather feedback and contributions.
@sebastienros I built an internal prototype middleware, mostly based on the new HttpLogging middleware. @martincostello built most of it. A lot of the code written there to handle the response body stream could be re-used for the output caching middleware (as I did in my prototype).
This prototype is tied to "OData", but could be generalized, of course.
@StefanOverHaevgRZ thanks. Happy to use your brain and @martincostello 's (though he might have more than one).
Here is an update with a broad list of features that could be implemented. They probably won't all be done by RTM, but they are still listed. The list also needs to be prioritized, and some design discussions are much needed. Feel free to mention what is required by your scenarios and what is non-blocking if not implemented. I don't think it's missing any features that were discussed here or in other places.
"Optional": means a feature that can be enabled or disabled. Whether the feature should be enabled or disabled by default is open to discussion.
Declaratively configure Razor Pages or controller actions for caching. Policies can be configured and referenced by name.
Define the caching configuration of specific endpoints by code.
The developer has a way to force a response to be cached/not-cached or a cache entry to be bypassed, overriding any decision made by the service.
Cache entries can vary by common value like scheme, hostname, path, query, or any custom property, e.g., culture, region, tenant ...
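As a minimal sketch of what varying by these dimensions could mean when composing a cache key (the custom dimensions chosen here, a "culture" query key and a tenant header, are assumptions for illustration):

```csharp
// Sketch: build a cache key from the vary-by dimensions listed above.
static string BuildCacheKey(HttpRequest request)
    => string.Join('|',
        request.Scheme,                          // http vs https
        request.Host.Value,                      // hostname
        request.Path.Value,                      // path
        request.Query["culture"].ToString(),     // custom: culture
        request.Headers["X-Tenant"].ToString()); // custom: tenant
```

Two requests produce the same cache entry only when every listed dimension matches.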
When enabled, authenticated content should be cached. Cached entries can vary by user or roles.
Based on allow-lists, some requests can be cached even if cookies are set. When configured, cookies and headers can be cached with the content.
When locking is configured, a single request processes a specific resource. If other requests query the same resource, they will be queued.
See cache stampede and thundering herd topics for more details.
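To make the locking behavior concrete, here is a hedged sketch of per-key request coalescing using SemaphoreSlim and IMemoryCache; all names are illustrative and this is not the middleware's actual implementation.

```csharp
using System.Collections.Concurrent;

// Sketch: a single request populates a cache entry while concurrent
// requests for the same key queue behind it (stampede protection).
public class LockingCache
{
    private readonly ConcurrentDictionary<string, SemaphoreSlim> _locks = new();
    private readonly IMemoryCache _cache;

    public LockingCache(IMemoryCache cache) => _cache = cache;

    public async Task<byte[]> GetOrCreateAsync(string key, Func<Task<byte[]>> factory)
    {
        if (_cache.TryGetValue(key, out byte[]? hit))
            return hit!;

        var gate = _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Double-check: a queued request may have filled the entry
            // while we were waiting on the semaphore.
            if (_cache.TryGetValue(key, out hit))
                return hit!;

            var value = await factory();
            _cache.Set(key, value, TimeSpan.FromMinutes(1));
            return value;
        }
        finally
        {
            gate.Release();
        }
    }
}
```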
When the server returns an error, or a lock is acquired on this resource, the cache can return a stale entry depending on a timeout (grace period).
When configured, an ETag header will be generated for each response. If the If-None-Match header is defined, a 304 response is returned.
When not configured, ETag values are not generated and the If-None-Match header is ignored.
ETags might be generated on subsequent requests only, to optimize for the first response (pass-through response writing without buffering).
When an entry is stale and refreshed, if If-Modified-Since or If-None-Match are set and the content is identical, a 304 will be returned and the cached entry refreshed.
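A minimal sketch of that conditional-request flow; ComputeETag is a hypothetical helper and this is not the planned implementation.

```csharp
// Sketch: short-circuit with 304 Not Modified when the client's
// If-None-Match matches the entry's ETag. ComputeETag is hypothetical.
app.Use(async (context, next) =>
{
    var etag = $"\"{ComputeETag(context.Request.Path)}\""; // quoted strong ETag
    if (context.Request.Headers.IfNoneMatch == etag)
    {
        // Content unchanged: no body is sent, only the status code.
        context.Response.StatusCode = StatusCodes.Status304NotModified;
        return;
    }
    context.Response.Headers.ETag = etag;
    await next();
});
```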
Cached entries can be tagged with custom values to be evicted in group. Usages: invalidate cached entries for a specific user, tenant, culture, path, file type, ...
Cached entries can be purged using a service. No default remote support is provided, and each application can define secure endpoints to allow for purging entries remotely.
However, specific store implementations can also have logic to purge entries over time based on usage and quotas.
In-memory, filesystem, and hybrid stores (metadata in memory and content on disk) are planned.
Some stores should be able to handle size limits.
The storage for cached entries can be extended by developers by implementing a service and registering it in DI.
Potential implementations: Redis, database, CosmosDB, table/blob storage, sqlite
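One possible shape for such a pluggable store, sketched under the assumption of a small async get/set/evict-by-tag interface; the names here are illustrative, not the final API (the IOutputCacheStore abstraction that .NET 7 ultimately shipped is similar in spirit).

```csharp
using System.Collections.Concurrent;

// Hypothetical store abstraction; all names are illustrative.
public interface ICacheStore
{
    ValueTask<byte[]?> GetAsync(string key, CancellationToken token);
    ValueTask SetAsync(string key, byte[] value, string[]? tags, TimeSpan validFor, CancellationToken token);
    ValueTask EvictByTagAsync(string tag, CancellationToken token);
}

// Minimal in-memory implementation supporting tag-based eviction.
public class InMemoryCacheStore : ICacheStore
{
    private readonly ConcurrentDictionary<string, (byte[] Value, string[] Tags, DateTimeOffset Expires)> _entries = new();

    public ValueTask<byte[]?> GetAsync(string key, CancellationToken token)
        => ValueTask.FromResult<byte[]?>(
            _entries.TryGetValue(key, out var e) && e.Expires > DateTimeOffset.UtcNow
                ? e.Value
                : null);

    public ValueTask SetAsync(string key, byte[] value, string[]? tags, TimeSpan validFor, CancellationToken token)
    {
        _entries[key] = (value, tags ?? Array.Empty<string>(), DateTimeOffset.UtcNow.Add(validFor));
        return ValueTask.CompletedTask;
    }

    public ValueTask EvictByTagAsync(string tag, CancellationToken token)
    {
        foreach (var (key, entry) in _entries)
            if (Array.IndexOf(entry.Tags, tag) >= 0)
                _entries.TryRemove(key, out _);
        return ValueTask.CompletedTask;
    }
}

// Registration in DI, e.g.:
// builder.Services.AddSingleton<ICacheStore, InMemoryCacheStore>();
```

A Redis- or database-backed implementation would replace the dictionary while keeping the same interface.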
When configured, antiforgery tokens (AFTs) can be extracted from the response and replaced every time a cached entry is returned. This allows pages using forms to be cached.
The feature could be generalized to allow for injecting custom fragments in cached results and enable doughnut caching.
When configured, byte-range requests can be cached and/or served from cached entries.
Logging will provide information on when a cache entry is created or served. Counters will provide statistics on how many requests trigger a hit or miss. Developers can record more information like adding custom headers to responses.
External processes can periodically query the site to warmup specific resources in cache.
Cache the resource when it has been accessed a specific number of times.
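A hedged sketch of that hit-count gating, assuming a simple in-memory counter (all names are illustrative):

```csharp
using System.Collections.Concurrent;

// Sketch: only start caching a resource once it has been requested
// at least `threshold` times.
public class HitCountGate
{
    private readonly ConcurrentDictionary<string, int> _hits = new();

    public bool ShouldCache(string key, int threshold = 5)
        => _hits.AddOrUpdate(key, 1, (_, count) => count + 1) >= threshold;
}
```

This avoids spending cache storage on resources that are rarely requested.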
Epic created https://github.com/dotnet/aspnetcore/issues/40232
Will it support donut caching? See https://www.devtrends.co.uk/blog/donut-output-caching-in-asp.net-mvc-3 for more information regarding ASP.NET (not core).
Will it support donut caching?
@nfplee The plan by @sebastienros above mentions:
Optional Antiforgery Token substitution
When configured, AFTs can be extracted from the response and replaced every time a cached entry is returned. This allows pages using forms to be cached.
The feature could be generalized to allow for injecting custom fragments in cached results and enable doughnut caching.
So it sounds like: yes, if the team sees enough interest to make that generalization a priority. I agree with the article you linked that without it, many sites would be prevented from using output caching in the first place. Since antiforgery tokens are automatically inserted into many forms by tag helpers, this scenario is even more widespread, so I can't imagine that they would build output caching but leave out this feature.
I haven't gone through your article yet but I want to reassure you that I am definitely intending to see how we could implement what you need. Thanks for sharing.
Output caching was merged and will be in .NET 7 Preview 6: https://github.com/dotnet/aspnetcore/pull/41037
Not all the features mentioned here have made it in yet, but please try it out and file new issues with feedback/suggestions.
Closing this issue since the feature has shipped.
Summary
Response caching misses the mark: we wrote response caching, but people want output caching. We sniff headers when the app wants to say cache/no-cache regardless of headers.
It’s less customizable now that pubternal is gone
People with more context
@halter73, @Tratcher, @JunTaoLuo, @DamianEdwards
Motivation and goals
The existing response cache was designed to implement standard HTTP caching semantics. E.g., it caches based on HTTP cache headers like a proxy would (and may be useful in YARP). It also honors the client's request cache headers, like using no-cache to bypass the cache.
An output cache is a different beast. Rather than checking HTTP cache headers to decide what should and should not be cached, the decision is instead made directly by the application, likely with a new request Feature, attributes, HttpResponse extension methods, etc. This cache also does not honor any request cache headers from the client. The client isn't supposed to know it's receiving a cached response.
We should create a new caching middleware that's explicitly designed for output caching. Leave the old one alone, it's fine for what it was designed for, but it doesn't cover the output cache scenarios. Trying to combine the two would just get messy, better to have a clean distinction. Note we can probably copy some of the infrastructure from the existing middleware around intercepting responses and storing them, but the public surface area will be fairly different.
There was one output caching style feature that was added to the response caching middleware later that should be copied/moved(?). It's the VaryByQueryKeys feature on ResponseCachingFeature.
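For reference, that existing hook is set per-request through the response caching feature; a minimal sketch follows (the "culture" query key is an illustrative choice):

```csharp
// Requires Microsoft.AspNetCore.ResponseCaching.Features.
app.Use(async (context, next) =>
{
    var feature = context.Features.Get<IResponseCachingFeature>();
    if (feature is not null)
    {
        // Cache a separate entry per value of the "culture" query key.
        feature.VaryByQueryKeys = new[] { "culture" };
    }
    await next();
});
```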
In scope
A list of major scenarios, perhaps in priority order.
Out of scope
Scenarios you explicitly want to exclude.
Risks / unknowns
How might developers misinterpret/misuse this? How might implementing it restrict us from other enhancements in the future? Also list any perf/security/correctness concerns.
Examples
Give brief examples of possible developer experiences (e.g., code they would write).
Don't be deeply concerned with how it would be implemented yet. Your examples could even be from other technology stacks.