gissuebot opened this issue 9 years ago
Original comment posted by andreou@google.com on 2014-03-26 at 05:51 PM
We didn't invest the time to do an API review on this, e.g. finding agreeable method and argument names (I recall having trouble writing the javadoc explaining the arguments).
 * @param permitsPerSecond the rate of the returned {@code RateLimiter}, measured in
 *     how many permits become available per second. Must be positive
 * @param maxBurstBuildup the maximum period of time where unused permits are accumulated by the
 *     rate limiter, that can later be handed out with no wait time
 * @param unit the time unit of the maxBurstBuildup argument
 */
public static RateLimiter createWithCapacity(
    double permitsPerSecond, long maxBurstBuildup, TimeUnit unit)
As you can see, "maxBurstBuildup" in particular is awkward to describe. It is specified as a quantity of time, not permits (in the sense of "a maximum burst of 10 permits"), because that way its meaning remains invariant even if the user calls setRate() and switches to a different rate limit.
This allows, for example, the default implementation to always produce a maximum burst of "1 second-worth-of-permits" (the permits that a RateLimiter would produce in a single second, e.g. if the rate is 10qps, then this is a burst of "10 permits"), i.e. something that should be tiny, no matter what the rate limit is.
Another option would be to say, "screw this, let's describe bursts as maxPermits" (this is simpler to explain), and add a warning of the sort, "if you call setRate(...), then the relative size of the burst (in relation to the rate limit) changes".
E.g., a maxBurst of "100 permits" for a rate limit of 0.01qps would imply that the RateLimiter could allow, in a single burst, the throughput that would otherwise need almost three hours to go through (100 permits / 0.01qps = 10,000 seconds). If the rate limit is 100qps, then the same setting would merely allow, in a burst, the throughput that would take one second.
So it's a choice between a burst description that remains invariant in terms of time, and a burst description that remains invariant in terms of permits. I thought the first is more useful (to be able to describe "small" and "big" bursts, without even knowing the rate), but the second is certainly easier to describe and understand.
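The trade-off above can be made concrete with a small sketch. This is not Guava code; the method names are illustrative, and it only computes the burst capacities implied by each semantic:

```java
// Sketch (not Guava API): contrasts the two proposed burst semantics.
final class BurstSemantics {
    // Time-invariant: the burst is a fixed window of time, so the permit
    // capacity is recomputed whenever the rate changes via setRate().
    static double timeInvariantMaxPermits(double maxBurstSeconds, double permitsPerSecond) {
        return maxBurstSeconds * permitsPerSecond;
    }

    // Permit-invariant: the burst is a fixed permit count, so the span of
    // time the burst represents stretches or shrinks as the rate changes.
    static double permitInvariantBurstSeconds(double maxPermits, double permitsPerSecond) {
        return maxPermits / permitsPerSecond;
    }

    public static void main(String[] args) {
        // Time-invariant, 1-second window: capacity scales with the rate.
        System.out.println(timeInvariantMaxPermits(1.0, 10.0));   // 10 permits at 10 qps
        System.out.println(timeInvariantMaxPermits(1.0, 1000.0)); // 1000 permits at 1000 qps
        // Permit-invariant, 100 permits: at 0.01 qps this is a ~2.8-hour burst.
        System.out.println(permitInvariantBurstSeconds(100.0, 0.01)); // ~10000 seconds
    }
}
```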
Original comment posted by goo...@tom-fitzhenry.me.uk on 2014-05-29 at 06:08 PM
+1 to being able to configure maxBurstBuildup/capacity.
My use case: I want to rate-limit incoming requests to M requests per N seconds, in a sliding-window manner (e.g. 20 requests in 5 seconds; note that this isn't the same as 4 requests in 1 second, the difference being that the former can build up tokens).
I believe I could achieve this with RateLimiter.createWithCapacity((double) M / N, N, TimeUnit.SECONDS): a buildup period of N seconds at M/N permits per second accumulates up to M permits.
Is there any progress on exposing this API? I'd like to configure the burst too, and the underlying implementation seems to support bursting just fine; it's just not exposed.
Thanks!
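The "M permits per N seconds with build-up" behavior described above can be sketched as a plain token bucket. This is not Guava code, just a minimal standalone model; time is passed in explicitly so the behavior is easy to follow:

```java
import java.util.concurrent.TimeUnit;

// Minimal token-bucket sketch (not Guava code): capacity M permits,
// refilled at a rate of M/N permits per second.
final class TokenBucket {
    private final double capacity;     // max permits that can accumulate (M)
    private final double refillPerSec; // M / N
    private double available;
    private long lastNanos;

    TokenBucket(double m, long n, TimeUnit unit, long nowNanos) {
        this.capacity = m;
        this.refillPerSec = m / (unit.toNanos(n) / 1e9);
        this.available = m; // starts full, so an initial burst of M is allowed
        this.lastNanos = nowNanos;
    }

    /** Returns true and consumes one permit if one is available at {@code nowNanos}. */
    boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastNanos) / 1e9;
        available = Math.min(capacity, available + elapsedSec * refillPerSec);
        lastNanos = nowNanos;
        if (available >= 1.0) {
            available -= 1.0;
            return true;
        }
        return false;
    }
}
```

With M=20, N=5, a bucket that has been idle allows a burst of 20 immediate acquisitions, then refills at 4 permits per second.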
I, too, am interested in this.
+1
Sorry for the total lack of response here.
For the "sliding window" case, we should probably supply a non-"smooth" RateLimiter implementation. We have one internally, but it has a granularity of one second. We should figure out whether we can change that or whether we have to provide a separate one with finer granularity.
Wading into RateLimiter will probably also mean resolving some of the long-standing issues, like how it should be an interface (or at least a "true" abstract class) and how it has a package-private method that will make that tricky.
This would be good to do someday. It's just too big to do as a background task, and it's a lower priority than things like Java 8.
+1
It seems that the solution is in SmoothRateLimiter.SmoothBursty.
I see that maxPermits = maxBurstSeconds * permitsPerSecond;
however, maxBurstSeconds is hardcoded.
I tried writing a wrapper that adds another factory method with a maxBurstSeconds parameter.
// Note: this class must live in the com.google.common.util.concurrent package,
// because SmoothRateLimiter.SmoothBursty and SleepingStopwatch are package-private.
public class RateLimiterWithCapacity {

  public static RateLimiter create(double permitsPerSecond, double maxBurstSeconds) {
    return create(SleepingStopwatch.createFromSystemTimer(), permitsPerSecond, maxBurstSeconds);
  }

  @VisibleForTesting
  static RateLimiter create(
      SleepingStopwatch stopwatch, double permitsPerSecond, double maxBurstSeconds) {
    RateLimiter rateLimiter = new SmoothRateLimiter.SmoothBursty(stopwatch, maxBurstSeconds);
    rateLimiter.setRate(permitsPerSecond);
    return rateLimiter;
  }
}
It works well. May I open a pull request for this, or does it raise any risks?
It seems that it really doesn't matter whether you pick the time-invariant option or the permit-invariant option. Either will expose the same useful capabilities.
Can you please just pick one and get it included with the public interface?
My preference is for the option with time-invariant semantics. Maybe with new/updated interfaces like the following:
| Modifier and Type | Method and Description |
| --- | --- |
| `static RateLimiter` | `create(double permitsPerSecond)`<br>Creates a RateLimiter with the specified stable throughput, given as "permits per second" (commonly referred to as QPS, queries per second). This is equivalent to calling `create(1, TimeUnit.SECONDS, permitsPerSecond)`. |
| `static RateLimiter` | `create(long permitPeriod, TimeUnit permitUnit, double permitsPerPeriod)`<br>Creates a RateLimiter with the specified stable throughput, given as "permits per period". When the rate limiter is unused, bursts of up to permitsPerPeriod permits will be allowed, with subsequent requests being smoothly limited at the stable rate of permitsPerPeriod. |
| `static RateLimiter` | `create(double permitsPerSecond, long warmupPeriod, TimeUnit warmupUnit)`<br>Creates a RateLimiter with the specified stable throughput, given as "permits per second" (commonly referred to as QPS, queries per second), and a warmup period during which the RateLimiter smoothly ramps up its rate, until it reaches its maximum rate at the end of the period (as long as there are enough requests to saturate it). This is equivalent to calling `create(1, TimeUnit.SECONDS, permitsPerSecond, warmupPeriod, warmupUnit)`. |
| `static RateLimiter` | `create(long permitPeriod, TimeUnit permitUnit, double permitsPerPeriod, long warmupPeriod, TimeUnit warmupUnit)`<br>Creates a RateLimiter with the specified stable throughput, given as "permits per period", and a warmup period during which the RateLimiter smoothly ramps up its rate, until it reaches its maximum rate at the end of the period (as long as there are enough requests to saturate it). |
| `double` | `getRate()`<br>Returns the stable rate (as permits per period) with which this RateLimiter is configured. |
| `void` | `setRate(double permitsPerPeriod)`<br>Updates the stable rate of this RateLimiter, that is, the permitsPerPeriod argument provided in the factory method that constructed the RateLimiter. |
| `long` | `getPeriod()`<br>Returns the permitPeriod, in nanoseconds, with which this RateLimiter is configured. |
+1
I've written a sliding-window rate limiter that serves my purposes and open-sourced the code here. I might add functionality and flexibility in the future, but I don't guarantee it. Feel free to fork/PR. Feedback is welcome :)
This issue has been open for 5 years now. I'm still using the alternative, but it would be nice to have this available via Guava; it would only take a static factory or greater access to such a class.
We released a small library for this: https://mvnrepository.com/artifact/com.iterable/guava-bursty-rate-limiter/0.1.0
What makes it a P3 rather than a P2 or P1? It seems like a fix that would take less than an hour. @steffenyount's comment makes great sense. I should have fixed this when I was still at Google...
If this is not worth doing or has some technical difficulty, can someone explain it? I really don't want to reinvent this, or add a less trusted library just for this. I think many people would agree.
@beijunyi Speaking for myself, I initially wanted this feature only to realize shortly thereafter that I really needed more flexibility than this would give.
You're better off using a standalone token-bucket library. I wrote my own fairly quickly (https://bitbucket.org/cowwoc/token-bucket) and you could do the same. You can find a more enterprise version of a token bucket at https://github.com/vladimir-bukhtoyarov/bucket4j/ but for some projects this might be overkill.
Original issue created by kak@google.com on 2014-03-26 at 04:28 PM
See discussion here: https://groups.google.com/forum/#!msg/guava-discuss/h_j4LrW98TE/Kkig916NHxIJ
Dimitri: is there any reason why we originally held this method back from public Guava?