Closed georgiosd closed 2 years ago
There's a draft PR open here: #666
I ported it to various production code bases from source (https://github.com/App-vNext/Polly/pull/666#issuecomment-622961352) and it's been happily running for about 8 months for the use case we needed it for.
I'd suggest taking a look at the draft and seeing if it would work for your use case(s); if not, add feedback to the draft.
Thanks @martincostello I've commented on the PR.
@martincostello It would be good to know what use cases you are using this for?
Thanks, Sean.
@SeanFarrow We use it to throttle per-user requests to particular code paths, such as "expensive" queries, or to prevent spam from "abusive" client application requests.
@SeanFarrow if it is helpful we are using this technique in ad-hoc ways (but not currently through Polly) in a couple of scenarios:
For anyone wondering when this is likely to be released, I just saw this on the PR: https://github.com/App-vNext/Polly/pull/666#issuecomment-891118053 (4 days ago at the time of writing):
This feature is currently on-hold for an indefinite period due to the author's personal commitments making them unable to further pursue it.
So it hasn't been forgotten, life has just got in the way. Hope everything is alright with @reisenberger.
I require this functionality, so I pulled the branch down, built it into a NuGet package and started testing it. And I've encountered some weirdness. Here's my code:
Startup.cs ConfigureServices:
services
.AddHttpClient<IMyClient, MyClient>(c => c.BaseAddress = new Uri("https://my-service.my.domain/"))
.AddPolicyHandler(Polly.RateLimit.Policy.RateLimitAsync<HttpResponseMessage>(
60, TimeSpan.FromMinutes(1)
));
Client:
public interface IMyClient
{
Task<SomeResponseModel> CallHttpEndpointAsync();
}
public class MyClient : IMyClient
{
private readonly HttpClient _httpClient;
public MyClient(HttpClient httpClient)
{
_httpClient = httpClient;
}
public async Task<SomeResponseModel> CallHttpEndpointAsync()
{
var response = await _httpClient.GetAsync("some/endpoint");
var result = response.IsSuccessStatusCode ? await response.Content.ReadAsAsync<SomeResponseModel>() : default;
return result;
}
}
Test:
var duration = TimeSpan.FromMinutes(1); // same as in Startup
var requests = new Func<Task<bool>>[600];
for (var i = 0; i < requests.Length; i++)
{
requests[i] = async () =>
{
try
{
await _iMyClientInstance.CallHttpEndpointAsync();
// if the call was made, we weren't rate-limited
return true;
}
catch (RateLimitRejectedException)
{
// if the call threw this specific exception, we were rate-limited
return false;
}
catch (Exception ex)
{
_logger.LogError(ex, ex.Message);
// non-rate-limited exceptions aren't something we can deal with
throw;
}
};
}
var durationSeconds = (int)duration.TotalSeconds;
var perSecond = requests.Length / durationSeconds;
var requestsBySeconds = requests.Chunk(perSecond).ToArray();
var iterationTimer = new Stopwatch();
bool[] perSecondSuccesses;
var perSecondSuccessCounts = new int[durationSeconds];
var oneSecond = TimeSpan.FromSeconds(1);
var totalTimer = Stopwatch.StartNew();
for (var i = 0; i < durationSeconds; i++)
{
iterationTimer.Restart();
perSecondSuccesses = await Task.WhenAll(requestsBySeconds[i].Select(async req => await req()));
perSecondSuccessCounts[i] = perSecondSuccesses.Count(success => success);
iterationTimer.Stop();
var waitDuration = oneSecond - iterationTimer.Elapsed;
if (waitDuration.TotalMilliseconds > 0)
{
await Task.Delay(waitDuration);
}
}
totalTimer.Stop();
var totalRequestsPerDuration = (double)perSecondSuccessCounts.Sum();
// the total loop iteration time will slightly exceed the requested duration due to overhead, so normalise back
var actualRequestsPerDuration = totalRequestsPerDuration
/ totalTimer.Elapsed.Ticks
* duration.Ticks;
Essentially I've defined a typed HttpClient with a rate limit of 60 requests per minute, which I'm testing by creating an array of 600 Task delegates, each encapsulating a call to said HttpClient. I then split them into 60 buckets of 10 requests each and execute all the tasks in each bucket once per second (as opposed to trying to fire off 600 requests instantly). For every bucket, I keep track of how many of those requests were issued versus how many were not (the latter being blocked by the rate-limiting code in the linked PR) and sum up the actual executions after the loop has completed. My expectation is thus that of the 600 requests, at most 60 will actually be issued.
The problem is that this appears to be rate-limiting far more aggressively than it should. I'd expect totalRequestsPerDuration to be in the 50s, but in actual operation it never exceeds 30. I'm almost certain that the issue is with my testing code, not the Polly code - can anyone give me a suggestion as to what's going wrong here?
Note that the Task.WhenAll wrapping the HTTP calls is (to my mind) not the issue - the loop iteration timer never exceeds 1 second.
The PR to implement this feature has moved to #903 - if you have any feedback on the proposed implementation, please direct any comments there.
@martincostello this can be closed. #903 is merged.
I know, I merged it 😄
I was intentionally leaving it open until the next release is available from NuGet.org.
Looks like it went up about 5 minutes ago as part of v7.2.3 🚀 (thanks @joelhulen).
No, thank YOU, @martincostello!
Hey, can this work in a scaled-out scenario? Right now we have a single server taking care of calling the external services using a singleton rate limiter, and we are running into perf issues.
@madhugilla Please create a new issue with more details of what you're experiencing and we can look into it.
When debugging and inspecting the internal RateLimitPolicy, I noticed something called Interval. I guess it means TimeSpan / numberOfExecutions. Say you set 3 executions per second; the interval is then 333.33ms, and the policy uses these evenly distributed time slots to determine whether the limit is exceeded.
In your sample code, await Task.WhenAll(requestsBySeconds[i].Select(async req => await req())) triggers all of the async calls within a very short window, so some of them exhaust the single permit available in the current slot and get rejected.
Yep, that's why. See https://github.com/App-vNext/Polly/wiki/Rate-Limit#allow-for-bursts
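To illustrate the slot behaviour described above: with 3 executions per second and a burst size of 1, one permit becomes available every ~333ms, so a batch of simultaneous calls gets at most one permit, while a larger burst size lets idle "credit" accumulate. The following is a hypothetical, simplified sketch of that idea (the class and member names are made up for illustration; this is not Polly's actual implementation):

```csharp
using System;
using System.Linq;

// Hypothetical sketch of an interval-based ("evenly spaced slot") rate limiter.
// Not Polly's actual implementation.
class IntervalRateLimiter
{
    private readonly long _intervalTicks;  // perTimeSpan / numberOfExecutions
    private readonly long _maxBurstTicks;  // how much idle "credit" may accumulate
    private long _nextPermitTicks;         // earliest time the next permit is free

    public IntervalRateLimiter(int numberOfExecutions, TimeSpan perTimeSpan, int maxBurst = 1)
    {
        _intervalTicks = perTimeSpan.Ticks / numberOfExecutions;
        _maxBurstTicks = _intervalTicks * maxBurst;
        _nextPermitTicks = long.MinValue;  // start with the full burst available
    }

    // Returns true if a call happening at time 'now' is permitted.
    public bool TryAcquire(TimeSpan now)
    {
        // Idle credit is capped: the next permit can never be earlier than
        // one interval before "now minus the burst window".
        long earliest = Math.Max(_nextPermitTicks,
                                 now.Ticks - _maxBurstTicks + _intervalTicks);
        if (earliest > now.Ticks)
            return false;                  // next permit lies in the future: rejected
        _nextPermitTicks = earliest + _intervalTicks;
        return true;
    }
}
```

Under this model, firing 10 calls at the same instant with a burst size of 1 grants only one of them, which matches the behaviour seen in the test above; allowing a burst of 3 would grant three. That is the kind of tuning the "allow for bursts" wiki page is describing.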
Hi :)
I'm enjoying Polly very much, good work!
Would love something like this: http://www.jackleitch.net/2010/10/better-rate-limiting-with-dot-net/ - only one that works better than that :)
I've used that RateGate to connect to APIs that ban you if you send more than x requests in y seconds, and it doesn't work too well; it leaks here and there.