Hi, I don't think placing a lock there is a good idea.
It serializes all requests within that handler or group, meaning only one request can run at a time. I have some long-running requests that now block everything else.
Yes, it should block, otherwise how can you count the limit correctly?!
For the standard LimitCounter, the lock in localCounter is enough, as @pkieltyka already said. If this only causes problems with the Redis LimitCounter, the fix should only happen there, because taking the lock at the start of the handler has the big drawback of allowing only one request at a time. Depending on how you register the handler this is a very breaking change: if you register it the way the README.md shows, your whole application can only process one request at a time.
Edit: Or the lock has to be released before calling next.ServeHTTP(w, r), not after it.
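To make the drawback concrete, here is a minimal sketch of the two strategies. This is not httprate's actual middleware; the function and variable names are made up for illustration.

```go
package middleware

import (
	"net/http"
	"sync"
)

// limitSerialized holds the mutex across next.ServeHTTP, so every request in
// the group waits for the previous one to finish, including long-running ones.
func limitSerialized(next http.Handler, mu *sync.Mutex) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		defer mu.Unlock() // released only after the wrapped handler returns

		// ... check and increment the rate-limit counter here ...

		next.ServeHTTP(w, r)
	})
}

// limitReleasedEarly releases the mutex before calling next.ServeHTTP, so only
// the counter bookkeeping is serialized and handlers still run concurrently.
func limitReleasedEarly(next http.Handler, mu *sync.Mutex) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		// ... check and increment the rate-limit counter here ...
		mu.Unlock()

		next.ServeHTTP(w, r)
	})
}
```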
Are you sure that's enough for the local counter? I believe that with a large enough burst of simultaneous requests (larger than in the Redis case) the local limiter will also fail and pass more requests than specified. Anyway, you could send a pull request that makes this a configurable option.
Yes, you are right. I also just noticed there is still a race between Status and Increment. This was already discussed in https://github.com/go-chi/httprate/issues/2, and extending LimitCounter with IncrAndGet looks like the correct solution.
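For context, here is a sketch of what that extension could look like. The method names and signatures below are illustrative approximations, not copied from the library.

```go
package sketch

import "time"

// LimitCounter as sketched here is illustrative; the real interface in the
// library may differ. The point is the added atomic operation at the bottom.
type LimitCounter interface {
	// Split operations: checking the current count and then incrementing it in
	// two calls leaves a window where concurrent requests slip past the limit.
	Increment(key string, currentWindow time.Time) error
	Get(key string, currentWindow, previousWindow time.Time) (curr, prev int, err error)

	// Combined operation: increment and return the new counts in one atomic
	// step, so callers no longer need a lock around check-then-increment.
	IncrAndGet(key string, currentWindow, previousWindow time.Time) (curr, prev int, err error)
}
```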
Yea, I agree adding a lock on the handler here is not ideal. The in-memory limit counter part of this package should be fine, but within httprate-redis it is an issue. We could move the locks into the httprate-redis library, but that may end up with a similar result. Let me know if anyone sees a better path.
The answer will likely be a change to the LimitCounter interface, in the direction of https://github.com/go-chi/httprate/issues/2.
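One reason that direction fits the Redis case well: Redis's INCR is atomic on the server, so a Redis-backed counter can implement increment-and-read without any client-side mutex. Below is a rough sketch assuming the go-redis client; none of this is httprate-redis's actual code, and the type, method name, key scheme, and window handling are invented for illustration.

```go
package sketch

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

type redisCounter struct {
	client *redis.Client
}

// incrAndGet bumps the counter for key and returns the new value. INCR is
// atomic in Redis, so concurrent requests each see a distinct count and no
// lock around check-then-increment is needed on the Go side.
func (c *redisCounter) incrAndGet(ctx context.Context, key string, window time.Duration) (int64, error) {
	n, err := c.client.Incr(ctx, key).Result()
	if err != nil {
		return 0, err
	}
	if n == 1 {
		// First hit in this window: have the key expire when the window ends.
		// (A Lua script or pipeline would make INCR+EXPIRE a single round trip.)
		if err := c.client.Expire(ctx, key, window).Err(); err != nil {
			return n, err
		}
	}
	return n, nil
}
```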
Here is an improved lock implementation for the current interface. I've done some tests: it's accurate during bursts and avoids a global handler lock.
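The snippet from that comment isn't reproduced here, but one shape such an approach can take against the current interface is a per-key lock, so only requests sharing a rate-limit key contend with each other instead of all requests sharing one handler-wide mutex. Everything below, including the type and method names, is a hypothetical sketch rather than the commenter's code.

```go
package sketch

import "sync"

// keyLocks hands out one mutex per rate-limit key (e.g. per client IP).
type keyLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func (k *keyLocks) lockFor(key string) *sync.Mutex {
	k.mu.Lock()
	defer k.mu.Unlock()
	if k.locks == nil {
		k.locks = make(map[string]*sync.Mutex)
	}
	m, ok := k.locks[key]
	if !ok {
		m = &sync.Mutex{}
		k.locks[key] = m
	}
	return m
}

// In the middleware, the lock is released before the wrapped handler runs:
//
//	m := locks.lockFor(key)
//	m.Lock()
//	// Status + Increment against the LimitCounter for this key
//	m.Unlock()
//	next.ServeHTTP(w, r)
//
// Note: the map grows with the number of distinct keys; a real implementation
// would need to evict entries for keys that are no longer active.
```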
That should also work for my use case, thanks.
Add a mutex lock in the Handler, because a big burst of simultaneous requests will easily break the limit.