Closed by isker 1 week ago
Ah, but the cache key itself (for both backends) includes the current time integer-divided by the unit time:
https://github.com/envoyproxy/ratelimit/blob/ca55e1b31d023d8b8aedb8a21ca3b769fb632e95/src/limiter/cache_key.go#L73-L74
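The key derivation being pointed at can be sketched like this (a hypothetical reconstruction; the name `windowKey` and the `"%s_%d"` layout are assumptions, not taken from the repository):

```go
package main

import "fmt"

// windowKey folds the current Unix time, integer-divided by the unit
// duration, into the cache key itself. Each window therefore gets a
// fresh key, and so a fresh counter.
func windowKey(descriptorKey string, nowUnix, unitSeconds int64) string {
	return fmt.Sprintf("%s_%d", descriptorKey, nowUnix/unitSeconds)
}

func main() {
	// Two timestamps inside the same 60-second window share a key...
	fmt.Println(windowKey("GET_/api", 1700000000, 60)) // GET_/api_28333333
	fmt.Println(windowKey("GET_/api", 1700000030, 60)) // GET_/api_28333333
	// ...and the next window produces a different key.
	fmt.Println(windowKey("GET_/api", 1700000040, 60)) // GET_/api_28333334
}
```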
So there's no actual danger of the situation I described above happening. But I do still wonder whether the redis client is wasting memory: by not using `NX`, it effectively doubles the worst-case key expiration time.
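That worst-case doubling can be made concrete with a small model (illustrative only, not repository code): since the window is baked into the key, a key is only incremented during its own unit-long window, but re-issuing `EXPIRE` on every increment lets the final increment extend the key's life a full unit past the end of its window.

```go
package main

import "fmt"

// worstCaseLifetime models how long a fixed-window counter key can occupy
// memory, measured from the start of its window. firstHitOffset and
// lastHitOffset are when the first and last increments land within the
// window [0, unit). resetTTLOnIncrement mirrors calling EXPIRE on every
// increment; false mirrors an NX-style expiry set only once, at creation.
func worstCaseLifetime(unit, firstHitOffset, lastHitOffset int64, resetTTLOnIncrement bool) int64 {
	if resetTTLOnIncrement {
		return lastHitOffset + unit // TTL restarts at the last increment
	}
	return firstHitOffset + unit // TTL fixed at the first increment
}

func main() {
	const unit = 60
	// A busy key: first hit at t=0, last hit just before the window closes.
	fmt.Println(worstCaseLifetime(unit, 0, unit-1, true))  // 119: nearly 2x the unit
	fmt.Println(worstCaseLifetime(unit, 0, unit-1, false)) // 60: exactly one unit
}
```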
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.
Apologies if I'm just misreading the code, but:
https://github.com/envoyproxy/ratelimit/blob/ca55e1b31d023d8b8aedb8a21ca3b769fb632e95/src/redis/fixed_cache_impl.go#L35-L38
It looks like the redis backend is restarting the countdown to key expiration every time the key is incremented. This seems bad: once a key is over the limit, if the key is incremented more frequently than its `expirationSeconds` (which seems to come straight from the configured `rate_limit.unit`), all requests will be rate limited forever. Shouldn't the `NX` option be used to ensure that the limit is properly reset?

From what I can see, the memcached backend does not have this problem, because it only sets an expiration when the key is first initialized (the protocol actually wouldn't permit anything else): https://github.com/envoyproxy/ratelimit/blob/ca55e1b31d023d8b8aedb8a21ca3b769fb632e95/src/memcached/cache_impl.go#L165-L170
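The feared behavior can be demonstrated with a toy counter cache (purely illustrative, not code from the repository): if the cache key did not embed the time window, then re-issuing `EXPIRE` on every increment would keep a hot key, and its count, alive indefinitely, whereas an expiry set only once at creation lets the counter lapse and reset.

```go
package main

import "fmt"

type entry struct {
	count     int64
	expiresAt int64
}

type cache map[string]entry

// incr bumps key at time now with a TTL of unit seconds. resetTTL mirrors
// calling EXPIRE on every increment; !resetTTL mirrors an NX-style expiry
// set only when the key is first created.
func (c cache) incr(key string, now, unit int64, resetTTL bool) int64 {
	e, ok := c[key]
	if !ok || now >= e.expiresAt {
		e = entry{count: 0, expiresAt: now + unit}
	}
	e.count++
	if resetTTL {
		e.expiresAt = now + unit
	}
	c[key] = e
	return e.count
}

func main() {
	const unit = 60
	reset, nx := cache{}, cache{}
	var last, lastNX int64
	// One request every 30 seconds for 10 minutes: more frequent than the TTL.
	for now := int64(0); now < 600; now += 30 {
		last = reset.incr("client", now, unit, true)
		lastNX = nx.incr("client", now, unit, false)
	}
	fmt.Println(last)   // 20: the TTL keeps restarting, so the counter only grows
	fmt.Println(lastNX) // 2: the counter resets each time the original TTL lapses
}
```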