Update: Some extra info
So I ran into this issue in production today, where a large number of requests were failing their rate limit checks. I had switched from 4-core machines to 8-core machines last night (wanted to run fewer machines), and this hadn't happened yesterday despite ~20% extra weekend traffic.
This meant that while the total load on Redis went down, the number of requests going through each server went up by ~60%. I think this tipped us over the limit of what the single Hammer pipeline could handle, though I haven't dug into the Hammer codebase to investigate further.
Extra info: the way my rate limiters work right now, a single rate limit check makes ~5 calls on average: 1 "inspect" call for the application rate limit, 1-2 "inspect" calls for the endpoint rate limit, 1 "check" call on the application, and 1-2 "check" calls on the endpoint rate limit. (The inspects are done first to avoid triggering rate limits on the underlying service.)
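For context, a single check in my code looks roughly like this. This is a minimal sketch, not my real code: the bucket names, windows, and limits are made up, though `Hammer.inspect_bucket/3` and `Hammer.check_rate/3` are the library's actual functions.

```elixir
defmodule MyApp.RateLimiter do
  # Sketch of the ~5-call pattern described above (ids, windows, and
  # limits are illustrative, not my real config).
  def allow?(app_id, endpoint) do
    # inspect_bucket reads the current count without incrementing,
    # so we can bail out before consuming any quota.
    with {:ok, {_, app_left, _, _, _}} <-
           Hammer.inspect_bucket("app:#{app_id}", 60_000, 1_000),
         {:ok, {_, ep_left, _, _, _}} <-
           Hammer.inspect_bucket("ep:#{app_id}:#{endpoint}", 60_000, 100),
         true <- app_left > 0 and ep_left > 0,
         # only now run the incrementing checks
         {:allow, _} <- Hammer.check_rate("app:#{app_id}", 60_000, 1_000),
         {:allow, _} <- Hammer.check_rate("ep:#{app_id}:#{endpoint}", 60_000, 100) do
      true
    else
      _ -> false
    end
  end
end
```

So each allowed request costs four Redis round-trips in this sketch (five or six in my real setup, with the extra endpoint buckets), all going through what I believe is Hammer's single Redix connection.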
Right now I have a Redis pool I use for other stuff. Wondering if there's a way to add configuration to Hammer that would support something like this (without forcing Hammer to support pooling internally):
```elixir
pool_opts = [
  name: {:local, :redix_poolboy},
  worker_module: Redix,
  size: 10,
  max_overflow: 0
]

configs = Application.get_env(:redis_pool, RedisPool) |> Enum.into(%{})

children = [
  :poolboy.child_spec(:redix_poolboy, pool_opts, configs.redis_url)
]
```
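(For reference, the rest of my code uses that pool via the standard poolboy checkout pattern; nothing here is Hammer-specific:)

```elixir
# Check a worker out of the pool, run a command, and check it back in.
# :poolboy.transaction/2 returns the worker to the pool even if the
# function raises.
:poolboy.transaction(:redix_poolboy, fn conn ->
  Redix.command(conn, ["INCR", "some_counter"])
end)
```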
Ignore me I’m dumb
I think we're hitting the limits of a single Redis connection with Redix. We're seeing a lot of dropped check_rate/inspect_bucket calls. Is there any way to add connection pooling to the backend?
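Something along these lines is what I have in mind. This is a sketch only: `expiry_ms` and `redix_config` follow the existing `Hammer.Backend.Redis` config format, while `pool_size` and `pool_max_overflow` are hypothetical options I'm proposing, not anything Hammer documents today.

```elixir
# config/config.exs
# pool_size / pool_max_overflow are HYPOTHETICAL options sketching the
# feature request; the other keys follow the existing backend config.
config :hammer,
  backend:
    {Hammer.Backend.Redis,
     [
       expiry_ms: :timer.hours(2),
       redix_config: [host: "localhost", port: 6379],
       pool_size: 10,
       pool_max_overflow: 0
     ]}
```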