Closed brennanjl closed 1 month ago
Once I get some feedback on this I will open the same PR to main and v0.9. I put it here so that Raffael can test it (they still use preview)
"Crashes the server" refers to the OOM issues with the postgres process, or kwild is crashing?
@jchappelow postgres was initially halting the instance for many minutes due to excessive memory usage; I then lowered the Docker container's memory limit so the process is simply killed and restarted instead
Gotcha, thanks for clarifying.
LGTM, except that we should probably enforce a minimum, and I have a feeling 3 or 4 is more practical. We'd have to go through and count, but I think we make read connections in a number of places, and I would hate for there to be a deadlock if the circumstances are just right.
A minimum of 2 is enforced, but we can raise it
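As an illustrative sketch (not kwild's actual flag-parsing code), enforcing the minimum at startup so the node fails fast on a misconfiguration might look like this; the flag name matches the PR, but the `validateMaxConns` helper and default handling are hypothetical:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// minDBConns is the enforced floor discussed above; raising it to 3 or 4
// would be a one-line change here.
const minDBConns = 2

// validateMaxConns rejects configurations below the minimum.
func validateMaxConns(n int) error {
	if n < minDBConns {
		return fmt.Errorf("db-max-connections must be >= %d, got %d", minDBConns, n)
	}
	return nil
}

func main() {
	maxConns := flag.Int("db-max-connections", 24,
		"maximum number of engine connections to Postgres")
	flag.Parse()
	// Fail fast, before any connections are opened.
	if err := validateMaxConns(*maxConns); err != nil {
		fmt.Fprintln(os.Stderr, "invalid config:", err)
		os.Exit(1)
	}
	fmt.Println("max connections:", *maxConns)
}
```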
From @outerlook via Slack:
This PR adds an `app.db-max-connections` flag, which specifies the maximum number of connections the engine will hold to Postgres. The default is still 24, but it is now configurable. The minimum is 2, and kwild will fail fast if the user configures 1.

I did some local testing to confirm that this applies backpressure properly. As more long-running concurrent reads repeatedly execute against a database, the average time per query increases. I tested it both with reads from the same table (which actually hit backpressure from Postgres's lock contention before max connections become an issue) and with separate tables (which obviously did not have issues with lock contention).