tobz opened 2 weeks ago
These are great questions - that this is unclear at all is a docs bug, at least.

The main method that code is supposed to use is `earliest_possible` - the method is meant to return the timestamp you need, and that timestamp should be the type that corresponds to the `Clock` in use. From what you're writing, it sounds like that is not the case for you?
E.g., in this code:

```rust
#[test]
fn default_direct() {
    let clock = governor::clock::DefaultClock::default();
    let limiter: DefaultDirectRateLimiter =
        RateLimiter::direct_with_clock(Quota::per_second(nonzero!(20u32)), clock);
    match limiter.check() {
        Ok(_) => (),
        Err(nu) => {
            // here, `nu.earliest_possible()` returns a `quanta::Instant`.
        }
    }
}
```
You're right that you need a quanta clock structure with the default clock. The secret that the docs don't tell you (but should!) is that governor only distinguishes between concrete instances of a `Clock` for the `FakeRelativeClock` structure (for testing) and for `QuantaUpkeepClock` (for more coarse-grained timings where speed really, really matters). Approximately all the other clocks accessible by default implement `Default::default`, so you can get a clock instance if you should need it.
If all that is too annoying to manage and you don't need the microseconds of latency per measurement, you can use a `MonotonicClock` rate limiter, which uses `std::time::Instant`, and not worry about managing the clock at all... if you disable the `quanta` feature in the crate, that should be the default clock, also.
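For a `MonotonicClock`-backed limiter, the calculation the caller ends up doing is plain `std::time::Instant` arithmetic. A minimal sketch of that math, where `wait_time_from` is a hypothetical stand-in for what the crate computes, not the crate's own method:

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for the calculation a MonotonicClock-backed
// NotUntil::wait_time_from performs: the remaining wait is the earliest
// permitted instant minus "now", clamped to zero if that instant has
// already passed.
fn wait_time_from(earliest_possible: Instant, from: Instant) -> Duration {
    earliest_possible.saturating_duration_since(from)
}

fn main() {
    let now = Instant::now();
    let earliest = now + Duration::from_millis(50);
    assert_eq!(wait_time_from(earliest, now), Duration::from_millis(50));
    // Once the earliest instant is behind us, the wait is zero.
    assert_eq!(wait_time_from(now, earliest), Duration::ZERO);
}
```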
Please let me know if that helps!
Yeah, this is what I ended up doing for now... but my complaint is really more about the ergonomics of it all.
It seems like `RateLimiter` ought to at least expose a reference to the clock it's using, or alternatively, change `NotUntil::wait_time_from` to something like `NotUntil::earliest_possible_duration`, which takes no arguments and just requires that the middleware creating it pass in the current time, so it can do the math for you without you having to manually deal with the clock at all.

Essentially, I expect `NotUntil` to be able to just give me a `Duration` that I should wait before trying again, without having to think about the clock at all.
Yep, that's fully valid. I just realized that back in April I merged #232, which adds a `RateLimiter::clock` method, but didn't cut a release! Massive oops on my part.

I'll get the release cut now; hope that helps with your immediate issue at least.
OK, 0.6.4 is out now with a `clock` method on `RateLimiter` (and now I remember what kept up that release: the GH action to do so had a bug).
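To sketch what the new accessor buys a synchronous caller, here's a simplified model: the types below are hypothetical std-only stand-ins that mirror governor's names (`Clock`, `NotUntil`, `RateLimiter::clock`), not the real crate's definitions.

```rust
use std::time::{Duration, Instant};

// Simplified stand-in for governor's Clock trait, using std Instants.
trait Clock {
    fn now(&self) -> Instant;
}

struct MonotonicClock;
impl Clock for MonotonicClock {
    fn now(&self) -> Instant {
        Instant::now()
    }
}

// Stand-in for the negative decision handed back by check().
struct NotUntil {
    earliest_possible: Instant,
}
impl NotUntil {
    fn wait_time_from(&self, from: Instant) -> Duration {
        self.earliest_possible.saturating_duration_since(from)
    }
}

struct RateLimiter<C: Clock> {
    clock: C,
}
impl<C: Clock> RateLimiter<C> {
    // Mirrors the accessor merged in #232 and released in 0.6.4.
    fn clock(&self) -> &C {
        &self.clock
    }
}

fn main() {
    let limiter = RateLimiter { clock: MonotonicClock };
    let nu = NotUntil {
        earliest_possible: limiter.clock().now() + Duration::from_millis(10),
    };
    // No side-by-side clock needed: ask the limiter for its own clock.
    let wait = nu.wait_time_from(limiter.clock().now());
    assert!(wait <= Duration::from_millis(10));
}
```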
I'll leave this issue open - feel free to post the code you end up with, I'd love to integrate that in an example.
👍🏻
I ended up removing the rate limiting code to simplify some things during development, but I'll definitely swing back around and try things out with 0.6.4. I had also encountered some issues with limiting behavior at higher throughput rates, so I might have another issue to open. 😅
Note that #245 exposed a semver incompatibility: I had to yank 0.6.4 and re-release the change as 0.7.0. No semantic difference except in version number (:
I was trying to use this crate to implement rate limiting in a synchronous context, using the default direct rate limiter, where a negative decision hands back `NotUntil<P>` so the caller can interrogate when a conforming decision is likely to be able to be made. OK, cool... and there's `NotUntil::earliest_possible` and `NotUntil::wait_time_from`, but as it turns out, they're useless from outside of `RateLimiter` itself unless you also manage a clock yourself.

`NotUntil::earliest_possible` gives you a quasi-`Instant` which needs another quasi-`Instant` from which to derive a `Duration` of how long to wait, and likewise, `NotUntil::wait_time_from` needs the same thing. `RateLimiter` does this by just calling `self.clock.now()` to get that `from` parameter required for `NotUntil::wait_time_from`... but there's no way to actually access the clock being used by `RateLimiter`.

Thus, in the end, you have to manage your own clock and keep it side-by-side with `RateLimiter` just to do anything in a synchronous context, which seems like a big ergonomics pitfall.
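The side-by-side workaround can be sketched like this. The `NotUntil` below is a hypothetical stand-in simplified to `std::time::Instant`; in the real crate that value comes back from a failed `RateLimiter::check()`.

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for governor's NotUntil, simplified to std
// Instants for illustration.
struct NotUntil {
    earliest: Instant,
}
impl NotUntil {
    fn wait_time_from(&self, from: Instant) -> Duration {
        self.earliest.saturating_duration_since(from)
    }
}

fn main() {
    // The caller must supply its own "now": this is the clock you end up
    // keeping side-by-side with the limiter.
    let my_now = Instant::now();
    let nu = NotUntil { earliest: my_now + Duration::from_millis(25) };
    let wait = nu.wait_time_from(my_now);
    assert_eq!(wait, Duration::from_millis(25));
    // A synchronous caller would then sleep: std::thread::sleep(wait);
}
```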