Actually it looks to be related to https://github.com/mailgun/gubernator/blob/master/global.go#L225, which sets hits to zero in all cases on the broadcast, so when the update arrives at the non-owning peer it will never pass the code added by #157 and report over the limit.
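A toy model of that interaction (not gubernator's actual code, just the shape of the problem): the request used for the broadcast carries zero hits, and a zero-hit request is never evaluated against the remaining budget, so it can never report OVER_LIMIT.

```go
package main

import "fmt"

// resp mirrors just the two RateLimitResp fields that matter for this bug.
type resp struct {
	Status    string
	Remaining int64
}

// peek mimics the early-return path: with zero hits the status defaults to
// UNDER_LIMIT and is never re-checked against Remaining.
func peek(hits, remaining int64) resp {
	r := resp{Status: "UNDER_LIMIT", Remaining: remaining}
	if hits == 0 {
		return r
	}
	if remaining < hits {
		r.Status = "OVER_LIMIT"
	}
	return r
}

func main() {
	// The broadcast peeks with zero hits, so even an exhausted limit is
	// reported to the non-owning peers as UNDER_LIMIT.
	fmt.Printf("%+v\n", peek(0, 0)) // {Status:UNDER_LIMIT Remaining:0}
}
```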
@philipgough's findings seem pretty odd. Imagine this scenario: a limit goes from `UNDER_LIMIT` to `OVER_LIMIT`, so a broadcast from the owner should happen. Now we run `getLocalRateLimit` inside the owner with a request that has 0/5 hits against that `OVER_LIMIT` limit, before broadcasting, to get the most up-to-date status of that limit: https://github.com/mailgun/gubernator/blob/885519d16624452c0098accf08a4b853cec978f0/global.go#L227. That call comes back `UNDER_LIMIT` for some reason. 🤷 I tried to understand why that `UNDER_LIMIT` status comes back and I failed. 😭
Both leaky bucket and token bucket algorithms show this problem, potentially for different reasons... I don't know.
@Baliedge would you mind taking a look at this if you can spare some time? Not sure if it's an issue with understanding the correct usage or a valid problem! TY
I stumbled upon this issue yesterday because I saw in our server logs that, for any given request, 429s were always returned by one specific host in our cluster (the owner host for that request). I spent a couple of hours looking into this and I believe I know what's going on.
I wrote a test (in a private repo, sorry) that does this (a rough sketch against the public client follows the list):

1. Send a request to peer 1 with `Behavior: GLOBAL, Amount: 1, Limit: 1, Duration: 1 * time.Minute`. This is a cache miss in peer 1 so it goes to peer 2. Peer 2 gets the hit and should update peer 1 with `remaining = 0, status = under_limit`.
2. Assert `Allowed = true`, `Remaining = 0`.
3. `time.Sleep(1 * time.Second)`. This gives enough time to broadcast the change from peer 2 to peer 1.
4. Send another request to peer 1 with `Behavior: GLOBAL, Amount: 1, Limit: 1, Duration: 1 * time.Minute`. This is a cache hit in peer 1.
5. Assert `Allowed = false`. This assertion fails.
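Roughly, assuming gubernator's public Go client (`DialV1Server`, `GetRateLimitsReq`) and an already running two-peer cluster; `Amount` and `Allowed` in the steps above look like wrapper-level names and map to `Hits` and `Status` in the public proto:

```go
package gubernator_test

import (
	"context"
	"testing"
	"time"

	guber "github.com/mailgun/gubernator/v2"
	"github.com/stretchr/testify/require"
)

// peer1Addr is assumed to point at the non-owning peer of a two-node cluster;
// peer 2 is assumed to own the key below.
const peer1Addr = "127.0.0.1:9990"

func TestGlobalOverLimitVisibleOnNonOwner(t *testing.T) {
	ctx := context.Background()

	client, err := guber.DialV1Server(peer1Addr, nil)
	require.NoError(t, err)

	req := &guber.GetRateLimitsReq{
		Requests: []*guber.RateLimitReq{{
			Name:      "test_global_over_limit",
			UniqueKey: "account:12345",
			Behavior:  guber.Behavior_GLOBAL,
			Hits:      1,
			Limit:     1,
			Duration:  guber.Minute,
		}},
	}

	// Steps 1-2: cache miss on peer 1, forwarded to the owner (peer 2);
	// the hit should be allowed with nothing remaining.
	resp, err := client.GetRateLimits(ctx, req)
	require.NoError(t, err)
	require.Equal(t, guber.Status_UNDER_LIMIT, resp.Responses[0].Status)
	require.Equal(t, int64(0), resp.Responses[0].Remaining)

	// Step 3: give the owner time to broadcast its state to peer 1.
	time.Sleep(time.Second)

	// Steps 4-5: cache hit on peer 1; this should now be OVER_LIMIT, but the
	// broadcast carried UNDER_LIMIT, so this assertion fails.
	resp, err = client.GetRateLimits(ctx, req)
	require.NoError(t, err)
	require.Equal(t, guber.Status_OVER_LIMIT, resp.Responses[0].Status)
}
```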
I think the reason why the assertion fails is that, in `algorithms.go`, both algorithms have:
```go
rl := &RateLimitResp{
	Status: Status_UNDER_LIMIT,
	// ...
}
// ...
if r.Hits == 0 {
	return rl, nil
}
```
When the owner of a key is done aggregating all the hits it got and makes a final peek to `GetRateLimits` (i.e. it sends a copy of the request with `hits=0`), we don't want to blindly return whatever we had. We are not inspecting the value of `r.Remaining`, which could be zero at this point, so we are not setting the status to `OVER_LIMIT` on the owner, which means the peers will not see it either.
Either we have to fix this in the algorithms themselves (risky, this code doesn't have unit tests!), or we need to change the code of `global.broadcastPeers` so that after doing `getLocalRateLimit` we manually inspect the value of `Remaining` and, if it's zero, set the status to `OVER_LIMIT`.
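For what it's worth, a minimal sketch of that second option (the helper and its placement are hypothetical; only `RateLimitResp`, `Remaining`, and `Status_OVER_LIMIT` are real gubernator names, imported here under a `guber` alias):

```go
package sketch

import guber "github.com/mailgun/gubernator/v2"

// markOverLimit captures the idea: after the owner's zero-hit peek (which
// today always reports UNDER_LIMIT), derive the status that gets broadcast
// from Remaining instead of trusting the returned status.
func markOverLimit(resp *guber.RateLimitResp) *guber.RateLimitResp {
	if resp != nil && resp.Remaining == 0 {
		resp.Status = guber.Status_OVER_LIMIT
	}
	return resp
}
```

That would keep the untested algorithm code untouched and only adjust what the owner pushes to its peers.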
Judging by the git blame, this problem has been there for roughly 3 years. But if my judgment is correct, it's odd that it was noticed by two independent reporters only a few days apart. As @philipgough said, the test passes on v2.0.0-rc.32, which means this has been broken since October 2022 😅
I might have a fix: #216
PR https://github.com/mailgun/gubernator/pull/219 was merged recently. Did this fix the issue reported here?
@Baliedge: @philipgough kindly left a PR with a reproduction of the problem. So it would be cool to incorporate the test into the codebase (if it's not already done) and confirm whether the issue is fixed through the test.
@douglascamata I'm not finding any PRs to your name. Could you provide a link?
@Baliedge: as I said, it is @philipgough's PR... mentioned in his first comment in this issue. Here it is: https://github.com/mailgun/gubernator/pull/207
@douglascamata @philipgough: Would you both please take a look at PR https://github.com/mailgun/gubernator/pull/224?
@Baliedge - sorry I was on vacation for a bit so missed this ping.
I've released v2.4.0 which includes fixes in PRs #219 and #225 that specifically address this problem. We've been running it in production for the last week and I'm satisfied with the results.
Let me know if you have any feedback. Thanks for your help!
I am trying to understand whether we have hit a bug/regression or whether this behaviour is somehow expected and I am missing something.
I have a reverse proxy that reaches out to a Gubernator cluster, all running on Kubernetes. The requests are load balanced over gRPC on the client side in round-robin fashion across the replicas in the Gubernator cluster. I have observed that when a request lands on a replica that does not own the rate limit but fetches the result from its cache, I always get an UNDER_LIMIT status from that replica. Prometheus metrics show that broadcasts, peering, etc. are working as I would expect.
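For reference, the client-side round-robin balancing is plain gRPC configuration; a sketch of roughly what the proxy's client setup looks like in Go (the service name and port are illustrative, not my actual manifest):

```go
package proxy

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	guber "github.com/mailgun/gubernator/v2"
)

// newGubernatorClient dials the replicas behind a headless Kubernetes service
// and spreads calls across them with gRPC's round_robin policy.
func newGubernatorClient() (guber.V1Client, error) {
	conn, err := grpc.Dial(
		"dns:///gubernator.default.svc.cluster.local:81", // illustrative target
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		return nil, err
	}
	return guber.NewV1Client(conn), nil
}
```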
An example request might look like the one sketched below.
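(This reuses the `guber` alias and the `V1Client` from the previous sketch; the name, key, and limit values are placeholders, not my real configuration.)

```go
// ctx is a context.Context; client is the guber.V1Client from newGubernatorClient.
resp, err := client.GetRateLimits(ctx, &guber.GetRateLimitsReq{
	Requests: []*guber.RateLimitReq{{
		Name:      "requests_per_minute",
		UniqueKey: "client:10.0.0.7",
		Behavior:  guber.Behavior_GLOBAL,
		Algorithm: guber.Algorithm_TOKEN_BUCKET,
		Hits:      1,
		Limit:     30,
		Duration:  guber.Minute, // durations are in milliseconds
	}},
})
// resp.Responses[0].Status and .Remaining carry the verdict for this hit.
```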
Running, for example, 30 test requests against two different scenarios, the first with a single replica (so we can guarantee the owner) and the second with two replicas that should split the requests 50/50 while staying in sync, produces the following results:
I believe I have narrowed down the change of behaviour to https://github.com/mailgun/gubernator/pull/157 although I can't exactly explain why.
But running the above scenarios once again on the prior release returns the following, which is what I would expect as correct:
I've added a test case (which expectedly fails) to reproduce this exact scenario in https://github.com/mailgun/gubernator/pull/207 and reverting https://github.com/mailgun/gubernator/pull/157 on top of that PR allows the test to pass.
Hits appears to be 0 for non-owning peers in `algorithms.go` when I step through with the debugger.