ExHammer / hammer

An Elixir rate-limiter with pluggable backends
https://hexdocs.pm/hammer/
MIT License

Is it necessary to use a pool to operate the backend? #30

Closed redink closed 5 years ago

redink commented 6 years ago

I noticed that the Hammer module uses poolboy to call the backend, and I don't think that is a good idea, because:

1. The ETS update_counter operation does not need worker-pool protection; we can call http://erlang.org/doc/man/ets.html#update_counter-4 directly (see the sketch after this list).

2. Going through the pool adds overhead.

3. Putting the ETS operation inside the GenServer loop increases the risk of operation timeouts, since a single operation passes through two (actually three) message sends:

 user process ---> pool manager process ---> worker process
     ^                                           |
     +-------------------------------------------+

In particular, prune and delete_buckets use select_delete, which scans the whole ETS table, so it is quite possible to trigger a timeout.

4. For the Redis backend, the pool is not necessary either, since the Redis driver already has a pool in front of the Redis server.
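
A minimal sketch of point 1, assuming a bucket row shaped like `{key, count}` (the real Hammer ETS layout may differ); the only API taken from the docs linked above is :ets.update_counter/4:

```elixir
# Hypothetical module; only :ets.update_counter/4 comes from the linked docs.
defmodule CounterSketch do
  @table :rate_limit_buckets

  def setup do
    # A public, write-concurrent table lets callers update counters directly.
    :ets.new(@table, [:set, :public, :named_table, write_concurrency: true])
  end

  def hit(key) do
    # Atomically add 1 to element 2 of the row, inserting {key, 0} if the row
    # does not exist yet. No GenServer or pool round-trip is involved.
    :ets.update_counter(@table, key, {2, 1}, {key, 0})
  end
end
```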

Thanks for your time.

redink commented 6 years ago

ping ...

JuneKelly commented 6 years ago

Hi! Sorry it took so long to get back to you on this issue.

We introduced the worker pool to alleviate some bottleneck issues we were seeing when Hammer was used in highly concurrent scenarios (see https://github.com/ExHammer/hammer/issues/19).

Basically, without the pool we were bottlenecking all requests through a single process, which is not good. After introducing the pool mechanism we saw some improvements in performance in highly concurrent scenarios. I can't seem to find the actual figures though, and I was sure I wrote them down in an issue somewhere. We saw the most gains with the Redis backend, rather than the ETS backend. The Redis driver doesn't do worker-pooling of its own, and adding this worker-pool implementation meant that we could handle lots of connections to Redis in parallel.

(Edit to clarify: we saw some small improvement in performance with pooling of the ETS backend, but the real gains were with the Redis backend)
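
For anyone curious, the pool is built on poolboy, and the pattern is roughly the sketch below. The module and pool names are made up for illustration, not Hammer's actual internals:

```elixir
# Rough sketch of the poolboy pattern; names are illustrative only.
defmodule PoolSketch.Worker do
  use GenServer

  # poolboy calls start_link/1 with the worker args given in child_spec/3.
  def start_link(_args), do: GenServer.start_link(__MODULE__, :ok)
  def init(:ok), do: {:ok, %{}}

  # Each checked-out worker handles one backend request at a time.
  def handle_call({:count_hit, key}, _from, state), do: {:reply, {:ok, key}, state}
end

defmodule PoolSketch do
  @pool :backend_pool

  def child_spec_for_supervisor do
    # size/max_overflow bound how many backend workers can run in parallel.
    :poolboy.child_spec(
      @pool,
      [name: {:local, @pool}, worker_module: PoolSketch.Worker, size: 4, max_overflow: 2],
      []
    )
  end

  def call(request, timeout \\ 60_000) do
    # Check a worker out, run the request against it, check it back in.
    :poolboy.transaction(@pool, fn pid -> GenServer.call(pid, request, timeout) end, timeout)
  end
end
```

With the Redis backend, each worker can hold its own connection, which is why more workers meant more requests in flight against Redis in parallel.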

Additionally, the worker pool allowed us to start arbitrary numbers of different (or the same) backends, with all sorts of different configurations, which had been a feature request for a long time.

So, for all those reasons, I don't think we'll be removing the worker-pool. Has the pool been causing any particular issues for you? Perhaps we could focus on solving those problems?

redink commented 6 years ago

The problem is not the pool.

In fact, the pool is very useful in most cases. BUT, ETS is a special case. So my point is: move the pool into the backend module.

JuneKelly commented 6 years ago

Ah, I get it now! :D

Is this a problem you're having in production, or a hypothetical? If it's happening in production, does it help to set the pool-size to just 1, and the pool-overflow to 0?
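
(Roughly, that would look like the snippet below; I'm writing the option names from memory, so double-check them against the README:)

```elixir
# In config/config.exs. Hypothetical sketch: the pool_size / pool_max_overflow
# option names here are from memory, not verified against the current docs.
config :hammer,
  backend: {Hammer.Backend.ETS,
            [expiry_ms: 60_000 * 60 * 4,
             cleanup_interval_ms: 60_000 * 10,
             pool_size: 1,
             pool_max_overflow: 0]}
```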

Thinking about this in the moment, I'm not sure about moving the pool to the backend, as that would re-introduce the problem of piping all traffic through a single process. I'll think about it some more.

EDIT: I've just looked at the source code to check, and the pool timeout is set to 60 seconds. I could increase that default timeout, and make it configurable too. I'll also think about a way to make prune/delete operations not run the risk of timing out.

Incidentally, I figure the table must be enormous if it's taking sixty seconds to prune?
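
For anyone following along, the prune path boils down to one :ets.select_delete/2 pass over the whole table, something like the sketch below (the row layout here is a simplification, not necessarily exactly what the ETS backend stores):

```elixir
# Sketch of pruning expired buckets in one select_delete pass.
# Assumes rows shaped like {key, count, created_at, updated_at} with
# millisecond timestamps; the real table layout may differ.
defmodule PruneSketch do
  def prune(table, expiry_ms) do
    expire_before = System.system_time(:millisecond) - expiry_ms

    # Match spec: delete every row whose updated_at ($4) is older than the cutoff.
    match_spec = [{{:"$1", :"$2", :"$3", :"$4"}, [{:<, :"$4", expire_before}], [true]}]

    # select_delete walks the entire table, which is why a huge table can be
    # slow enough to hit the pool timeout.
    :ets.select_delete(table, match_spec)
  end
end
```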

JuneKelly commented 6 years ago

Actually, I think I've spotted a problem with how we handle prunes in the ETS backend. I'll hack up a fix and see if that helps with the slowdowns you've been seeing.