As we have multiple services accessing a single PostgreSQL database, it doesn't make sense for every instance to maintain its own database connection pool. The database server can only handle so many concurrent connections, so we'll likely need a way to serialize access and avoid excessive contention.
The go-to solution for this is PgBouncer, which can be deployed "in front of" PostgreSQL, to handle connection pooling for multiple applications.
I did some initial testing but ran into very poor performance. It's highly likely I did something wrong, so this needs more research.
What I did so far: I added `bitnami/pgbouncer` to `docker-compose.yml`:
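A minimal sketch of what that service definition could look like. The service name, image tag, credentials, and database name are placeholders, and the environment variables follow the `bitnami/pgbouncer` image's configuration scheme:

```yaml
  pgbouncer:
    image: bitnami/pgbouncer:latest
    environment:
      # Upstream PostgreSQL server that PgBouncer pools connections for
      POSTGRESQL_HOST: "postgres"
      POSTGRESQL_PORT: "5432"
      POSTGRESQL_USERNAME: "dtrack"
      POSTGRESQL_PASSWORD: "dtrack"
      # Database exposed to clients through PgBouncer
      PGBOUNCER_DATABASE: "dtrack"
      # Transaction pooling maximizes connection reuse
      PGBOUNCER_POOL_MODE: "transaction"
    ports:
      - "6432:6432"
    depends_on:
      - postgres
```

PgBouncer listens on port 6432 by default, so clients connect to `pgbouncer:6432` instead of `postgres:5432`.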
PgBouncer 1.21.0 was just released, which includes support for prepared statements. The missing prepared statement support could explain the bad performance when I originally tested the setup.
And replacing the JDBC URLs of all services to point to PgBouncer instead of PostgreSQL directly:
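For illustration, the change per service looks roughly like this (hostnames, port, and database name are assumptions based on the compose setup above, not the actual values):

```diff
-JDBC_URL=jdbc:postgresql://postgres:5432/dtrack
+JDBC_URL=jdbc:postgresql://pgbouncer:6432/dtrack
```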
Further, for the API server, I disabled application-side connection pooling with:
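Assuming the API server is Dependency-Track's, which exposes its datasource settings via the Alpine framework, disabling the built-in pool would look like this (property name taken from Dependency-Track's documented configuration; treat it as an assumption here):

```properties
# application.properties — or as env var: ALPINE_DATABASE_POOL_ENABLED=false
alpine.database.pool.enabled=false
```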
And for Quarkus-based services with:
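Quarkus manages its datasource pool via Agroal; a sketch of the relevant setting, assuming the default (unnamed) datasource is used:

```properties
# application.properties
quarkus.datasource.jdbc.pooling-enabled=false
```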
Once running, connecting to PgBouncer can be done like this:
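For example, using `psql` against PgBouncer's virtual `pgbouncer` admin database (host, port, and username here match the placeholder compose setup and are assumptions):

```shell
psql -h localhost -p 6432 -U dtrack pgbouncer
```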
It is then possible to issue PgBouncer commands:
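A few of PgBouncer's admin console commands that are useful for inspecting the pool:

```sql
SHOW POOLS;    -- per-database/user pool state (active, waiting, idle connections)
SHOW STATS;    -- request counts and traffic statistics
SHOW CLIENTS;  -- currently connected client sessions
```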
With this setup, everything "works", but gets really slow under load, e.g. when uploading lots of BOMs.