Closed · tanner0101 closed this issue 1 year ago
I am using something similar in a personal project. My pools have min and max sizes: they can grow up to the max size, and connections in excess of the min size that have not been used for a while automatically get released.
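For concreteness, here is a minimal, lock-based sketch of the kind of min/max pool with idle pruning described above. This is not the poster's actual implementation; the IdlePruningPool and Connection types and all parameter names are illustrative assumptions, and a real pool for Vapor would be asynchronous (built on SwiftNIO) rather than synchronous.

```swift
import Foundation

// Hypothetical stand-in for a real database connection.
final class Connection {
    func close() { /* tear down the underlying socket here */ }
}

// Sketch of a min/max pool: it can grow up to maxSize, and idle connections
// above minSize that exceed idleTimeout get closed by a periodic prune() pass.
final class IdlePruningPool {
    struct Entry { let connection: Connection; var lastUsed: Date }

    private let minSize: Int
    private let maxSize: Int
    private let idleTimeout: TimeInterval
    private let lock = NSLock()
    private var idle: [Entry] = []     // sorted oldest-first by lastUsed
    private var activeCount = 0

    init(minSize: Int, maxSize: Int, idleTimeout: TimeInterval) {
        self.minSize = minSize
        self.maxSize = maxSize
        self.idleTimeout = idleTimeout
    }

    /// Hands out an idle connection if one exists, otherwise opens a new one,
    /// but never exceeds maxSize connections in total. Returns nil when exhausted.
    func acquire() -> Connection? {
        lock.lock(); defer { lock.unlock() }
        if let entry = idle.popLast() {          // prefer the freshest connection
            activeCount += 1
            return entry.connection
        }
        guard activeCount < maxSize else { return nil }
        activeCount += 1
        return Connection()
    }

    func release(_ connection: Connection) {
        lock.lock(); defer { lock.unlock() }
        activeCount -= 1
        idle.append(Entry(connection: connection, lastUsed: Date()))
    }

    /// Closes connections above minSize that have been idle longer than idleTimeout.
    func prune(now: Date = Date()) {
        lock.lock(); defer { lock.unlock() }
        while idle.count + activeCount > minSize,
              let oldest = idle.first,
              now.timeIntervalSince(oldest.lastUsed) > idleTimeout {
            oldest.connection.close()
            idle.removeFirst()
        }
    }
}
```

Handing out the freshest idle connection (popLast) is what lets the stale ones age past the idle timeout and get pruned, which matches the reuse strategy discussed further down in this thread.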
Not sure if I misunderstand, but at that point, what's the point of having a max size at all? Making it fully leaky would mean there is only a min, not a max. Correct?
Also, this would not fix the deadlock issue; it'd just take you longer to run into it (given you do still have some kind of max). If you have no max at all, at some point you would simply exhaust your database's connection limit.
Personally, I am not a fan of letting the app create an unlimited number of connections. Postgres does not support that many parallel connections being open at a given time, so one service opening too many connections could impact every service depending on that DB. So you are going to have a maximum number of connections either way, and I prefer to define that manually based on expected load rather than have it implicitly defined by the DB's scalability, with far less visibility.
IMHO a min/max pool offers the best of both worlds.
@MrMage I got into the same situation: I have a Swift application deployed on K8s with many pods running, and I'm hitting the maximum number of connections Postgres allows. Here is the use case: for each pod I define a max number of connections; then a spike in load hits, AsyncKit properly scales the pools up, and the number of connections to the DB increases. Then a new request comes in, but it lands on another pod that doesn't have a free connection, and when that pod tries to open one it finds that the DB's connection limit has already been reached.
To overcome this I have changed max_connections in Postgres to a big number, like 500, which is a dirty hack. Ideally, like you said, connections should be closed after not being used for a while, with a min number of connections kept open.
You mentioned you have a solution; can you please share it? Thanks!
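As a side note on why raising max_connections only papers over the problem: the worst-case sum of every pod's pool maximum has to stay below the server's limit. A rough sizing sketch, with all numbers made up for illustration:

```swift
// Back-of-the-envelope sizing: every pod may open up to its pool maximum,
// so the worst case across the deployment must fit under max_connections.
let postgresMaxConnections = 100   // Postgres' default max_connections
let reservedForOtherClients = 10   // admin tools, migrations, other services
let podCount = 12                  // replicas of this service

// Largest per-pod pool maximum that cannot exhaust the server, even if every
// pod scales its pool to the limit at the same time.
let perPodCap = (postgresMaxConnections - reservedForOtherClients) / podCount
print(perPodCap)  // 7
```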
Here's what I am using with Vapor 3: https://gist.github.com/MrMage/6fe071f405ac2c1b8a4cde25f05061db
Feel free to use this however you please; it might need some (or a lot of) tweaking to work with a vanilla Vapor 3 setup, however. Also, no guarantees of correctness or suitability for a particular purpose, of course.
This works well for me because I don't have that many Pods, and many of them will never receive that many requests anyway. If your requests are scattered across a ton of Pods, you might still run into trouble with this. What could help would be to reduce the min/max connections for Pods that don't need many connections, and in general to increase the prune frequency.
But if even that's not enough, you might need to use dedicated Postgres connection-pooling software such as pgBouncer.
@MrMage thank you. The code looks exactly like the direction needed. I was thinking of a simpler specification: instead of min/max, I could just use the connection's idle time. So if a connection is not used for, let's say, 5 minutes, it gets closed, even if that leaves the pool without any open connections. Then it's up to whoever uses the pool to tweak this number. The idea is that the library client has to balance the memory used by keeping connections open, the DB's max connections, and the time it takes to create a new connection.
@crarau whatever works for you :-) My implementation already closes connections that have been idle for longer than config.idleTimeForPruning, and it automatically tries to re-use the "freshest" connections; if it always used the "most stale" connections, none of the connections would ever exceed the idle time threshold. So if you simply set minResources to 0, this might do what you need. You could also set this to 1, for slightly improved "cold start" latency, while still keeping the "baseline" resource usage low. You could probably also reduce pruneInterval to e.g. 5 seconds; it shouldn't be too resource-intensive.
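To make those knobs concrete, here is a hedged sketch of that kind of configuration. The field names idleTimeForPruning, minResources, and pruneInterval are the ones referenced in the comment above; the surrounding struct, its types, and its units are assumptions for illustration, not the gist's actual API.

```swift
import Foundation

// Illustrative configuration shape; not the actual API from the gist.
struct PoolConfig {
    var minResources: Int                 // 0 = pool may shrink to nothing;
                                          // 1 = keep one warm connection for cold starts
    var maxResources: Int                 // hard ceiling per process/pod
    var idleTimeForPruning: TimeInterval  // e.g. 300 = close connections idle > 5 min
    var pruneInterval: TimeInterval       // how often the prune pass runs, e.g. 5 s
}

// The setup suggested above: no baseline connections, frequent pruning.
let config = PoolConfig(minResources: 0,
                        maxResources: 5,
                        idleTimeForPruning: 300,
                        pruneInterval: 5)
```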
Closing this out as not planned; it will be superseded by other work already in progress elsewhere (this package is probably going away at some point).
Consider supporting "leaky" connection pooling where requests for connections beyond the pool's maximum limit will create temporary connections.
See https://gitlab.com/Mordil/RediStack/-/merge_requests/116/diffs#cba55883b2f54457e6530e8a988688bc63337457_0_80
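For readers landing here later, this is roughly what the requested "leaky" behaviour means. A minimal sketch under assumed types (Connection, LeakyPool), not the RediStack or AsyncKit implementation: once the pool is at its maximum, acquire() still succeeds by opening a temporary connection, and release() closes such overflow connections instead of returning them to the pool.

```swift
import Foundation

// Hypothetical stand-in for a real database connection.
final class Connection {
    func close() { /* tear down the underlying socket */ }
}

// "Leaky" pool sketch: pooled connections are capped at maxPooled and reused;
// requests beyond that cap get short-lived temporary connections.
final class LeakyPool {
    private let maxPooled: Int
    private let lock = NSLock()
    private var idle: [Connection] = []
    private var pooledInUse = 0
    private var overflow = Set<ObjectIdentifier>()   // temporary connections currently out

    init(maxPooled: Int) { self.maxPooled = maxPooled }

    func acquire() -> Connection {
        lock.lock(); defer { lock.unlock() }
        if let connection = idle.popLast() {
            pooledInUse += 1
            return connection                        // reuse a pooled connection
        }
        if pooledInUse < maxPooled {
            pooledInUse += 1
            return Connection()                      // open a new pooled connection
        }
        let temporary = Connection()                 // beyond the limit: "leak" one
        overflow.insert(ObjectIdentifier(temporary))
        return temporary
    }

    func release(_ connection: Connection) {
        lock.lock(); defer { lock.unlock() }
        if overflow.remove(ObjectIdentifier(connection)) != nil {
            connection.close()                       // temporary connections never go back
        } else {
            pooledInUse -= 1
            idle.append(connection)                  // pooled connections are reused
        }
    }
}
```

The trade-off discussed throughout this thread applies: the overflow connections avoid the deadlock/starvation problem inside one process, but they still count against the database server's global connection limit.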