stevearc / pypicloud

S3-backed pypi server implementation

Connections kept open indefinitely, disconnected #291

Closed · dvarrazzo closed this issue 3 years ago

dvarrazzo commented 3 years ago

Hello,

I have configured pypicloud to use postgres.

db.url = postgresql://pypi:...@localhost/pypi

pypi.auth = sql
auth.db.url = postgresql://pypi:...@localhost/pypi

This cluster uses Patroni for failover, so pypicloud connects to an HAProxy instance that forwards the connection to the right node.

The problem is that haproxy doesn't allow connections to stay open indefinitely and closes them after a while. When this happens, the first few requests to pypicloud fail with errors until new connections are re-established.

Is it possible to avoid keeping the connections open forever and instead open a new connection for each request?
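
For background on why this happens: SQLAlchemy's default QueuePool keeps database connections open between requests, so a proxy that drops idle connections leaves dead sockets in the pool. A minimal sketch of the standard engine options for that situation (pool_pre_ping and pool_recycle); the DSN below is a placeholder, and this is general SQLAlchemy usage rather than pypicloud-specific code:

from sqlalchemy import create_engine, text

# Placeholder DSN; real credentials come from the pypicloud config above.
engine = create_engine(
    "postgresql://pypi:secret@localhost/pypi",
    pool_pre_ping=True,   # verify each pooled connection before handing it out
    pool_recycle=1800,    # replace connections older than 30 minutes
)

with engine.connect() as conn:
    # A connection closed by the proxy would be detected and replaced before this runs.
    print(conn.execute(text("SELECT 1")).scalar())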

stevearc commented 3 years ago

I pushed up the 'pool' branch, which should allow you to try disabling connection pooling in SQLAlchemy; I think that will do what you want. Can you give it a try and set:

db.poolclass = sqlalchemy.pool.NullPool
auth.db.poolclass = sqlalchemy.pool.NullPool
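
For reference, sqlalchemy.pool.NullPool opens a fresh DBAPI connection on every checkout and closes it on release, so nothing stays open between requests. A minimal sketch of what the setting selects at the SQLAlchemy level (assuming the option is passed through to create_engine; this is illustrative, not pypicloud's internal code):

from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

# Placeholder DSN; with NullPool each checkout opens a new connection.
engine = create_engine("postgresql://pypi:secret@localhost/pypi", poolclass=NullPool)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
# Leaving the block closes the underlying connection instead of returning it to a pool.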

dvarrazzo commented 3 years ago

Hello Steven, thank you very much for addressing this.

I can't test it right now; as soon as I can, I'll get back to you on this.

Cheers

-- Daniele

dvarrazzo commented 3 years ago

Hello,

Something related I have just noticed: for an unrelated reason I had to change the pypicloud configuration to connect directly to the database rather than going via haproxy.

As a result, I could see that pypicloud just doesn't close any connections: as of this morning there were almost 50 open, and they were stopping other services from connecting to the database.

Killing the connections on the database side causes a number of pypicloud pages to return 500.

I can't really test by checking out the code from a branch, because I've set this service up in a k8s cluster and I'm pulling the image. Could you provide an image with the new feature to test, or just release it? Thank you!

stevearc commented 3 years ago

I've cut a release (1.3.3) and pushed new Docker images to match. I did some testing of my own, and it does seem like using the NullPool cleans up the connections when requests complete. Let me know if you encounter any difficulties.
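
One quick way to confirm the cleanup on the Postgres side is to count the sessions held by the pypi role in pg_stat_activity; with NullPool the count should fall back toward zero between requests. A small illustrative check (assumes psycopg2 and a monitoring role allowed to read pg_stat_activity; not part of pypicloud):

import psycopg2

# Monitoring DSN is a placeholder; 'pypi' is the database role from the config above.
with psycopg2.connect("dbname=pypi user=postgres host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT count(*) FROM pg_stat_activity WHERE usename = %s", ("pypi",)
        )
        print(cur.fetchone()[0], "open connections for role 'pypi'")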

dvarrazzo commented 3 years ago

Looks like it works as expected, thank you! (tested with 1.3.3-alpine) :slightly_smiling_face: