Closed: oliveigah closed this 2 years ago
@josevalim Thanks for the feedback!
Just finished the requested changes, let me know if you see any other way I can improve the feature.
Also, I've added a disclaimer section to the docs about the known implementation issues listed in the PR description.
I'll remove them in the follow-up PRs.
@josevalim @ericmj @xinz
Thank you for the feedback! I've just added a commit that tackles the problem:
> On large pools, if many resources go idle in the same cycle, you may end up terminating a large number of workers sequentially, which could leave the pool unable to fulfill requests.
I've added a pool configuration option called `:max_idle_pings`, which defines a limit on the number of workers that can be pinged on each cycle of the optional `handle_ping/2` callback. It defaults to `nil`, which means no limit.
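For illustration, here is a minimal sketch of how the option and the callback could fit together. The worker module, its state, and the option values are made up for the example; only `:worker_idle_timeout`, `:max_idle_pings`, and `handle_ping/2` come from this PR, and the exact callback return shapes should be checked against the final docs.

```elixir
defmodule MyApp.PingWorker do
  @behaviour NimblePool

  @impl NimblePool
  def init_worker(pool_state) do
    # Hypothetical resource; a real worker would open a port, socket, etc.
    {:ok, make_ref(), pool_state}
  end

  @impl NimblePool
  def handle_checkout(:checkout, _from, worker_state, pool_state) do
    {:ok, worker_state, worker_state, pool_state}
  end

  # Called for workers that have been idle for at least :worker_idle_timeout.
  # Returning {:remove, reason} asks the pool to terminate the worker.
  @impl NimblePool
  def handle_ping(_worker_state, _pool_state) do
    {:remove, :idle}
  end
end

NimblePool.start_link(
  worker: {MyApp.PingWorker, :ok},
  pool_size: 50,
  worker_idle_timeout: :timer.minutes(1),
  # Ping at most 10 idle workers per verification cycle (nil means no limit).
  max_idle_pings: 10
)
```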
There are two behaviours I think are important to keep in mind:

1. If you are not terminating workers with `handle_ping/2`, you may end up pinging the same workers over and over again, because each cycle pings only the first `:max_idle_pings` workers.
2. If you are terminating workers with `handle_ping/2`, the last worker may only be terminated after up to `worker_idle_timeout + worker_idle_timeout * ceil(number_of_workers / max_idle_pings)` milliseconds of idle time, instead of `2 * worker_idle_timeout - 1` (see the worked example after this list).
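To make the second point concrete, here is a worked example with made-up numbers: 100 workers, a 10-second idle timeout, and `max_idle_pings: 10`.

```elixir
worker_idle_timeout = 10_000
number_of_workers = 100
max_idle_pings = 10

# Worst-case idle time before the last worker is terminated, with :max_idle_pings set:
worker_idle_timeout + worker_idle_timeout * ceil(number_of_workers / max_idle_pings)
#=> 110_000

# Worst case when all idle workers are pinged on every cycle (no limit):
2 * worker_idle_timeout - 1
#=> 19_999
```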
Let me know what you think! :smiley:
@josevalim Thanks for the feedback. Just finished the suggested improvements.
Let me know if you see any other way to make it better!
:green_heart: :blue_heart: :purple_heart: :yellow_heart: :heart:
https://github.com/dashbitco/nimble_pool/issues/1
Known problems with the implementation:
1. On lazy pools, if no worker is currently in the pool, the callback will never be called, so you cannot rely on it to terminate empty lazy pools. I'll make a follow-up PR addressing this issue.
2. On non-lazy pools, if you return `{:remove, user_reason}`, you may end up terminating and initializing workers at the same time on every idle verification cycle (see the sketch after this list). I don't know exactly how to address this; I'll probably just add it to the docs.
3. On large pools, if many resources go idle in the same cycle, you may end up terminating a large number of workers sequentially, which could leave the pool unable to fulfill requests. I'll make a follow-up PR addressing this, probably by adding a pool configuration option to stop the idle iteration earlier.
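To illustrate problem 2, here is a sketch of the combination that triggers the churn, reusing the hypothetical `MyApp.PingWorker` from the earlier comment with made-up option values (`:lazy` defaulting to `false` is assumed here):

```elixir
# Because the pool is not lazy, NimblePool keeps it at :pool_size, so every
# worker removed by a {:remove, reason} return from handle_ping/2 is replaced
# by a freshly initialized one. If the pool stays unused, this terminate/init
# churn repeats on every idle verification cycle.
NimblePool.start_link(
  worker: {MyApp.PingWorker, :ok},
  pool_size: 10,
  lazy: false,
  worker_idle_timeout: :timer.seconds(30)
)
```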