Open hughesjs opened 2 years ago
Yes, you understand how it's supposed to work (in theory). There's also an option during configuration to just use the default HttpClient that you get by calling httpClientFactory.CreateClient().
I'm working on an update right now that makes #4 more true.
In general, if you're going to be using this, let me know and I can be a little less "wild west" with my releases. However, it's still pre-production, so expect some public API changes from release to release, at least for the minor versions.
Anyway, I've thought about handling 429s, too. Initially I decided against it, since it seems to fall outside the scope of HallPass when it's using a specific algorithm; instead, I preferred to design it in such a way that using something like Polly to handle retries should be straightforward.
However, I think you have an interesting case that could be another "algorithm" of sorts (right now I call everything a Bucket). Maybe we'd call it ShotgunInADarkRoomBucket? I think I know how it could be implemented relatively seamlessly with the existing structure, too... at least for a non-remote (non-distributed) version, but I'd need to change up a couple of things first.
Would you have any interest in contributing?
My main use case would be for applications where, on average, you're not going to make enough requests to hit the rate limit (so it's beneficial to send requests immediately), but where in some circumstances you might, so you need to back off and wait at that point.
I'm open to contributing, although I'd like to add that I'm pretty slammed between work and my PhD right now, so I won't have a huge amount of time to dedicate to it.
No worries. Though the more I think about the precise situation you described, the more it sounds like simply using Polly would work very well for you, no?
Does Polly make it possible to inspect the HttpResponseMessage to know if it's a 429, and to then grab the retry-after header to pause for that amount?
Btw, maybe I wasn't clear about how HallPass works, but it would usually allow bursts as well.
If you configured your rate limit with something like this, it should burst immediately and then block any additional requests past 60 until you have capacity again:
builder.Services.AddHallPass(config =>
{
    config.UseLeakyBucket(
        "yourTargetUri",
        rate: 60,
        frequency: TimeSpan.FromMinutes(1),
        capacity: 60);
});
Are you using HttpClient directly or indirectly to call the API?
You can... But it's a mess
https://github.com/App-vNext/Polly/issues/414#issuecomment-371932576
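For reference, a minimal sketch of the kind of policy that linked comment describes, assuming the WaitAndRetryAsync overload whose sleep-duration provider receives the failed response; the retry count, the 30-second fallback, the httpClient instance, and the example URL are all just illustrative:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

var httpClient = new HttpClient();

// Retry on 429, sleeping for whatever retry-after asks for
// (falling back to an arbitrary 30 seconds if the header is missing).
var retryOn429 = Policy
    .HandleResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
    .WaitAndRetryAsync(
        retryCount: 3,
        sleepDurationProvider: (attempt, outcome, context) =>
            outcome.Result.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(30),
        onRetryAsync: (outcome, delay, attempt, context) => Task.CompletedTask);

// Wrap the actual call in the policy.
var response = await retryOn429.ExecuteAsync(
    () => httpClient.GetAsync("https://example.com/api/objects"));

The important bit is that the sleep duration is read from the actual 429 response's retry-after header rather than being a fixed backoff.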
Ahh, that's not a million miles off what I want, to be honest; it just might not line up perfectly with the retry-after, but that's no biggy...
I'm using it indirectly in the actual application, but it's via an SDK that I control, so I can easily make changes.
Even if you're using it indirectly, you should still be able to configure it to use HallPass if the SDK uses the default HttpClient.
builder.Services.AddHallPass(config =>
{
    // this makes it so that you can do 'var httpClient = _httpClientFactory.CreateClient()' and still have it metered by HallPass
    config.UseDefaultHttpClient = true;

    config.UseLeakyBucket(...);
});

...

// then this should still be impacted by HallPass
var result = await mySdk.DoSomethingWithHttpClientUnderTheHoodAsync();
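On the SDK side, that works out to something like the sketch below, assuming the SDK resolves the unnamed default client from IHttpClientFactory (the class name and endpoint are purely illustrative):

using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical SDK client: because it uses the unnamed default HttpClient,
// it gets metered by HallPass when UseDefaultHttpClient = true.
public class MySdkClient
{
    private readonly HttpClient _httpClient;

    public MySdkClient(IHttpClientFactory httpClientFactory)
    {
        _httpClient = httpClientFactory.CreateClient();
    }

    public Task<HttpResponseMessage> DoSomethingWithHttpClientUnderTheHoodAsync() =>
        _httpClient.GetAsync("https://example.com/api/objects");
}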
That said, once I get around to writing documentation, I plan on saying that the optimal approach to using it is:
It's possible that HallPass will incorporate 429 handling in the future, but for now, Polly does it very well itself (though yes, it would be great if there was a one-liner or something...)
Hey dude, saw this project on a StackOverflow answer and it looks pretty promising!
I've not had a chance to look through the code thoroughly yet, but having read the readme, I think I'm right in saying it works like this (apologies if I'm wrong in this):
I'm working on an SDK for the ESA's DISCOSweb API which has a rate limit of 60 requests per minute.
This is more than adequate for most applications, but since you can only fetch data from certain endpoints in pages, if you want to get the complete dataset you might blow past that pretty quickly. If you do, you get a 429 TOO MANY REQUESTS response. Crucially though, this response also contains a retry-after header which indicates the seconds until the rate limit will reset.

I think it would be quite nice if there was a policy that didn't actually track the number of requests made, or directly limit their rate; instead, it sends them immediately until such point as a 429 is received, at which point it would queue the requests up until the retry-after had expired.

What do you think?