dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

Add atomic increment/decrement operations to IDistributedCache #36386

Open khellang opened 6 years ago

khellang commented 6 years ago

It would be really nice to have for scenarios like rate limiting.

I don't know about SQL Server support for this, but I guess this would map to Redis' INCR and DECR operations. If the key does not exist, it would be set to 0 before performing the operation. The methods would return the value of key after increment/decrement.

I propose the following new members:

namespace Microsoft.Extensions.Caching.Distributed
{
    public interface IDistributedCache
    {
        // Existing members:
        byte[] Get(string key);
        Task<byte[]> GetAsync(string key, CancellationToken token = default(CancellationToken));
        void Set(string key, byte[] value, DistributedCacheEntryOptions options);
        Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options, CancellationToken token = default(CancellationToken));
        void Refresh(string key);
        Task RefreshAsync(string key, CancellationToken token = default(CancellationToken));
        void Remove(string key);
        Task RemoveAsync(string key, CancellationToken token = default(CancellationToken));

        // New members:
+       long Increment(string key, DistributedCacheEntryOptions options);
+       Task<long> IncrementAsync(string key, DistributedCacheEntryOptions options, CancellationToken token = default(CancellationToken));
+       long Decrement(string key, DistributedCacheEntryOptions options);
+       Task<long> DecrementAsync(string key, DistributedCacheEntryOptions options, CancellationToken token = default(CancellationToken));
+       long IncrementBy(string key, long value, DistributedCacheEntryOptions options);
+       Task<long> IncrementByAsync(string key, long value, DistributedCacheEntryOptions options, CancellationToken token = default(CancellationToken));
+       long DecrementBy(string key, long value, DistributedCacheEntryOptions options);
+       Task<long> DecrementByAsync(string key, long value, DistributedCacheEntryOptions options, CancellationToken token = default(CancellationToken));
    }
}
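
With members like these, a rate-limiting middleware could be written against the abstraction alone. A minimal sketch of the scenario from the opening comment (hypothetical: `IncrementAsync` is the proposed member above, and the limit of 100 requests per sliding minute is a placeholder):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Distributed;

public class ThrottleMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IDistributedCache _cache;

    public ThrottleMiddleware(RequestDelegate next, IDistributedCache cache)
    {
        _next = next;
        _cache = cache;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var key = $"throttle:{context.Connection.RemoteIpAddress}";

        var options = new DistributedCacheEntryOptions
        {
            // Reset the counter after a minute of inactivity.
            SlidingExpiration = TimeSpan.FromMinutes(1)
        };

        // Proposed API: atomic on backends that support it (e.g. Redis INCR).
        var count = await _cache.IncrementAsync(key, options);

        if (count > 100)
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            return;
        }

        await _next(context);
    }
}
```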

@Eilon @Tratcher @davidfowl Is this something you'd consider for 3.0?

khellang commented 5 years ago

Ping!

Eilon commented 5 years ago

This seems like a different service than IDistributedCache, no? (Aside from the fact that it's obviously a breaking change.)

khellang commented 5 years ago

This seems like a different service than IDistributedCache, no?

You mean because you operate on numbers instead of bytes? I dunno, it's pretty common to cache "counters" like this.

slang25 commented 5 years ago

When I think distributed caching, I don't think of atomicity. What you are describing sounds more like a distributed semaphore (which Consul implements, for example), which does seem like it belongs to a separate interface.

khellang commented 5 years ago

I don't really care where the methods live, I just want to be able to invoke a Redis INCR operation 😝

slang25 commented 5 years ago

Yeah, fair point. Why not do that rather than wait for an abstraction:

var redis = StackExchange.Redis.ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();
db.StringIncrement("some-key"); // or StringIncrementAsync

You already have a transitive dependency on StackExchange.Redis.Signed from the distributed caching extension.

khellang commented 5 years ago

Because I'm writing a middleware that I'd like to keep decoupled from Redis. Since ASP.NET Core already has the IDistributedCache abstraction, I thought it would be a good fit.

davidfowl commented 5 years ago

It's not a great fit, at least not for this interface.

analogrelay commented 5 years ago

We don't intend to support this behavior in the IDistributedCache.

khellang commented 5 years ago

So, for implementing throttling/rate limiting, what kind of service would you recommend I use to store the number of requests for the current IP, without forcing all my users to use Redis? Does it exist? 🤔

This is straightforward with the caching primitives of most (other) platforms...

analogrelay commented 5 years ago

Can you tell me more about the caching primitives other platforms provide? I'm certainly not opposed to this if there's prior art. Redis has an INCR primitive, but that's not necessarily something an abstraction can depend upon. Increment is also very difficult (if not impossible) to implement on top of our existing abstraction (i.e. in terms of Get/Set) since it requires some kind of transactionality, so it would be a breaking change that would force all providers to find a way to do this.

It sounds like you're basically looking for a "database" (for lack of a better term) here rather than a cache, since you're storing data that you're relying upon. Caches generally are targeted at transient data storage backed up by some underlying store. What would happen to the throttling data when the cache drops your key as part of scavenging/expiration? Redis (as an implementation) is more of a database, so using it directly for this makes some sense, or providing your own abstraction in your middleware component.

I'm not necessarily opposed to having an abstraction for this, just trying to understand how this fits into the area of "caching".
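
The transactionality problem mentioned above is easy to see in a naive Get/Set emulation (a sketch, not a proposed API: the byte-array encoding and helper name are illustrative). Two concurrent callers can both read the same value and both write back the same incremented result, losing an update:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public static class NaiveCounter
{
    // NOT atomic: another caller may interleave between steps 1 and 3.
    public static async Task<long> IncrementAsync(
        IDistributedCache cache, string key, DistributedCacheEntryOptions options)
    {
        var bytes = await cache.GetAsync(key);                            // 1. read
        var current = bytes is null ? 0L : BitConverter.ToInt64(bytes, 0);
        var next = current + 1;                                           // 2. compute
        await cache.SetAsync(key, BitConverter.GetBytes(next), options);  // 3. write (races!)
        return next;
    }
}
```

This is why a provider can only offer real increment semantics if its backing store has a native atomic primitive (Redis INCR, Memcached incr, an SQL UPDATE, an interlocked operation in-memory), not by composing the existing members.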

khellang commented 5 years ago

Can you tell me more about the caching primitives other platforms provide?

Laravel

Laravel provides an expressive, unified API for various caching backends. Laravel supports popular caching backends like Memcached and Redis out of the box.

The increment and decrement methods may be used to adjust the value of integer items in the cache. Both of these methods accept an optional second argument indicating the amount by which to increment or decrement the item's value.

Rails

Has an abstract cache store class with implementations for in-memory, Memcached and Redis out of the box.

Has both an increment and decrement method, both taking an optional amount.

Django

Django comes with a robust cache system that lets you save dynamic pages so they don’t have to be calculated for each request. For convenience, Django offers different levels of cache granularity: You can cache the output of specific views, you can cache only the pieces that are difficult to produce, or you can cache your entire site.

You can also increment or decrement a key that already exists using the incr() or decr() methods, respectively. By default, the existing cache value will be incremented or decremented by 1. Other increment/decrement values can be specified by providing an argument to the increment/decrement call.


These were just the first three I looked at, in ten minutes on my phone. They're also some of the most popular web frameworks out there. All of them expose increment and decrement operations directly in their caching primitives. The most popular stores like Memcached and Redis support these operations natively.

Redis has an INCR primitive, but that's not necessarily something an abstraction can depend upon.

Why not? 🤔

Increment is also very difficult (if not impossible) to implement on top of our existing abstraction (i.e. in terms of Get/Set) since it requires some kind of transactionality, so it would be a breaking change that would force all providers to find a way to do this.

Yes, tell me about it. It works somewhat, but has no transactional guarantees. Hence this suggestion. I'm not saying that must be a guarantee, but it would be nice to have if the backend supports it.

It sounds like you're basically looking for a "database" (for lack of a better term) here rather than a cache, since you're storing data that you're relying upon. Caches generally are targeted at transient data storage backed up by some underlying store.

No, I'm looking for a cache. I know what a cache is. What is a cache if not a "database"? The most popular caches support these operations.

What would happen to the throttling data when the cache drops your key as part of scavenging/expiration?

That's also a key point here. You want to be able to reason about expiry etc. as well. In the throttling example you'd want a sliding expiration, so whenever someone makes a request, the counter gets incremented and the expiration is reset. When the entry finally expires, the counter is removed (reset).

Redis (as an implementation) is more of a database, so using it directly for this makes some sense, or providing your own abstraction in your middleware component.

I doubt anyone using any other cache backend would use throttling middleware that only supports Redis. Especially when their existing cache already supports what they need (and could be used instead).

Sure, I could roll my own abstraction, but that means I'd have to either leave the implementation up to the end user or ship separate ThrottleMiddleware.Redis, ThrottleMiddleware.Memcached and ThrottleMiddleware.InMemory etc. packages. I'd much rather rely on the existing packages in the shared framework if I could.

analogrelay commented 5 years ago

What is a cache if not a "database"?

Our understanding of caching here has traditionally been that the data in the cache can never be expected to remain there, it could disappear at any time (due to scavenging, expiration, etc.). I could buy that it's up to the consumer to set that policy though.

The most popular caches support these operations.

Yeah, that does seem to be the case. Fair point.

Alright, I can see those points. I guess I'm still not totally clear on how scavenging affects this but I can buy that it's a common enough pattern to be warranted.

We still have to reconcile the breaking change angle, since adding this API would force any caching provider to support it. Perhaps we can do something hacky with default interface members but it would require making a lot of assumptions. It's much too late to do this in 3.0 but I'll reopen and backlog this.
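
The "hacky" default-interface-member option might look like the following sketch (C# 8+; member shape taken from the proposal at the top of the issue, the body is illustrative). The default body can only fall back to Get/Set, which is exactly the non-atomic path, so providers with a native primitive would be expected to override it:

```csharp
using System;
using Microsoft.Extensions.Caching.Distributed;

public interface IDistributedCacheWithCounters // sketch; the real change would be on IDistributedCache
{
    byte[] Get(string key);
    void Set(string key, byte[] value, DistributedCacheEntryOptions options);

    // Default implementation keeps existing providers compiling, but it is
    // NOT atomic. Providers (e.g. Redis via INCR) should override it.
    long Increment(string key, DistributedCacheEntryOptions options)
    {
        var bytes = Get(key);
        var next = (bytes is null ? 0L : BitConverter.ToInt64(bytes, 0)) + 1;
        Set(key, BitConverter.GetBytes(next), options);
        return next;
    }
}
```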

khellang commented 5 years ago

We still have to reconcile the breaking change angle, since adding this API would force any caching provider to support it. Perhaps we can do something hacky with default interface members but it would require making a lot of assumptions. It's much too late to do this in 3.0 but I'll reopen and backlog this.

I'm totally fine with introducing a new interface and doing a cast check on the injected IDistributedCache to utilize the new methods if available, like DI's ISupportRequiredService 😉 Then fall back to what I'm currently doing if it's not available.
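
That opt-in pattern could look like this sketch (the `IDistributedCounterCache` name and the byte-array fallback encoding are hypothetical, introduced here only for illustration):

```csharp
using System;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical opt-in interface a provider could additionally implement.
public interface IDistributedCounterCache
{
    long Increment(string key, DistributedCacheEntryOptions options);
}

public static class CacheCounterExtensions
{
    public static long IncrementOrFallback(
        this IDistributedCache cache, string key, DistributedCacheEntryOptions options)
    {
        // Capability check, analogous to how DI probes for ISupportRequiredService.
        if (cache is IDistributedCounterCache counters)
        {
            return counters.Increment(key, options);
        }

        // Fallback: non-atomic Get/Set emulation (what the middleware does today).
        var bytes = cache.Get(key);
        var next = (bytes is null ? 0L : BitConverter.ToInt64(bytes, 0)) + 1;
        cache.Set(key, BitConverter.GetBytes(next), options);
        return next;
    }
}
```

Existing providers keep working unchanged; only those that implement the extra interface get atomic semantics.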

abhiyx commented 4 years ago

When is this feature getting rolled out?

analogrelay commented 4 years ago

@abhiyx at this point we have no scheduled release for this feature. We're in the midst of planning for 5.0, and this will be considered there (it will have to be prioritized against other work we have to plan for 5.0 though, so there are no guarantees :)).

glucaci commented 4 years ago

Is there any news on this feature?

khellang commented 4 years ago

Yeah, I'd still like to get this feature in. What would be the best way to attack this? Submit a PR for an additional interface with the new members? Does it have to go through API review?

davidfowl commented 4 years ago

Everything has to go through API review, it's a breaking change, and we haven't even agreed that this makes sense on anything besides a Redis implementation.

khellang commented 4 years ago

it's a breaking change

Adding a separate interface that implementations can opt into is a breaking change?

we haven't even agreed that this makes sense on anything besides a redis implementation.

Most other platforms seem to be able to handle this just fine, whether the backing store is in-memory, Redis, Memcached or SQL-based 🤷‍♂️

What can we do to further this proposal?

eerhardt commented 4 years ago

What can we do to further this proposal?

Check out the API proposal review process here: https://github.com/dotnet/runtime/blob/master/docs/project/api-review-process.md. Specifically this part:

Please use this template. The issue should have the label api-suggestion. Here is a good example of an issue following that template.

The current proposal on top of this issue would be a breaking change, which we've decided we can't take. To make progress, can you edit the top proposal to match the template with the newly proposed APIs?

atuls9390 commented 3 years ago

@khellang Any work-around for above-mentioned INCR issue?

khellang commented 3 years ago

Not really. I ended up just using the Redis API directly and not shipping a library to NuGet as it wouldn't be as useful without proper abstractions.

pinkfloydx33 commented 2 years ago

I ended up having to switch my abstractions over to the replacements offered by Foundatio just to get increment/decrement. I'd prefer to use those offered by the framework rather than leak Foundatio into all my projects (though I admit there are definite bonus points for all the other stuff their abstractions offer).

AlOnestone01 commented 6 months ago

Hi. Are there any updates on this? We would also love to see this being implemented!