Closed · pcorey closed this 6 months ago
Ah thank you! I see you have implemented your suggestion in your fork. Can you confirm that fixed this for you?
@sneako I'm waiting on a deploy window to fully test my fix. I should know by the end of the week. If it looks good, I'll submit a PR!
@sneako After watching my fix in prod for a while, I'm noticing that `in_use_connections` is still increasing above the max pool size, and `available_connections` is falling well below zero. So, I think I'm missing other cases. I'll put some more attention towards this, but do you have any ideas of what might be missing?
I don't have any ideas off the top of my head but I can try to take a look when I have some extra time.
Does it seem like there was any improvement at all with this change?
@sneako Turns out I was just misreading the metrics. It looks like `in_use_connections` has stayed stable around zero, and `available_connections` has stayed stable around my pool size. I think we're good! Submitting a PR shortly.
I'm using `finch` 0.17.0 working with HTTP1 connections and a pool configuration like this:
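Something along these lines; the sizes and timeouts are placeholders, but note the `conn_max_idle_time`:

```elixir
# In the application supervision tree. Placeholder values --
# the key detail is that conn_max_idle_time is set on the pool.
children = [
  {Finch,
   name: MyFinch,
   pools: %{
     default: [size: 50, count: 1, conn_max_idle_time: 30_000]
   }}
]

Supervisor.start_link(children, strategy: :one_for_one)
```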
`Finch.get_pool_status` is reporting an `in_use_connections` count that increases over time, eventually surpassing my pool size, which seems impossible.

I'm guessing it's because I'm configuring a `conn_max_idle_time`, and any connections that exceed that limit when being checked out are closed, but that closure isn't being recorded in the pool metrics.
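For reference, this is roughly how I'm reading the metrics (assuming the `MyFinch` name and host from the sketch above):

```elixir
# Fetch per-pool metrics for a host; HTTP1 pools return a list of
# %Finch.HTTP1.PoolMetrics{} structs, one per started pool (:count).
{:ok, metrics} = Finch.get_pool_status(MyFinch, "https://example.com")

for %Finch.HTTP1.PoolMetrics{
      pool_index: index,
      available_connections: available,
      in_use_connections: in_use
    } <- metrics do
  # in_use should stay at or below the configured pool size, and
  # available should never go negative -- the symptom reported here.
  IO.puts("pool #{index}: in_use=#{in_use} available=#{available}")
end
```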