tlocke / pg8000

A Pure-Python PostgreSQL Driver

No way to detect a closed connection #114

Closed jvhellemond closed 9 months ago

jvhellemond commented 2 years ago

Hi,

In order to reuse a connection, I assign it to a variable and keep it around. Sometimes, when the connection has been open for a long time, the server understandably closes it. I haven't found a reliable way of detecting this; if I could, I could refresh the cached connection. Currently, if I try to use a closed connection, I have to wait for a timeout from the socket and eventually an exceptions.InterfaceError from pg8000:

Traceback (most recent call last):
  File "/Users/jvhellemond/.pyenv/versions/loqaldelivery_api/lib/python3.9/site-packages/pg8000/core.py", line 287, in sock_read
    return self._sock.read(b)
  File "/Users/jvhellemond/.pyenv/versions/3.9.9/lib/python3.9/socket.py", line 704, in readinto
    return self._sock.recv_into(b)
TimeoutError: [Errno 60] Operation timed out

which causes:

  File "/Users/jvhellemond/.pyenv/versions/loqaldelivery_api/lib/python3.9/site-packages/pg8000/native.py", line 203, in run
    self._context = self.execute_unnamed(
  File "/Users/jvhellemond/.pyenv/versions/loqaldelivery_api/lib/python3.9/site-packages/pg8000/core.py", line 669, in execute_unnamed
    self.handle_messages(context)
  File "/Users/jvhellemond/.pyenv/versions/loqaldelivery_api/lib/python3.9/site-packages/pg8000/core.py", line 806, in handle_messages
    code, data_len = ci_unpack(self._read(5))
  File "/Users/jvhellemond/.pyenv/versions/loqaldelivery_api/lib/python3.9/site-packages/pg8000/core.py", line 289, in sock_read
    raise InterfaceError("network error") from e
pg8000.exceptions.InterfaceError: network error

I tried checking <connection>._usock._closed, but that remains False. Any suggestions? I'd be happy to work on this or submit a PR. Please let me know if this issue is unclear. Thanks!
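One workaround, not part of pg8000 itself, is to treat any failure of a cheap query as the signal to reconnect. Below is a minimal sketch of that "ping and reconnect" pattern, assuming pg8000.native; the connect()/get_connection() helpers and the credentials are hypothetical, and timeout=5 is only there to shorten the socket wait described above.

    import pg8000.exceptions
    import pg8000.native


    def connect():
        # Hypothetical connection parameters, for illustration only.
        return pg8000.native.Connection(
            "app_user", host="db.example.com", database="app",
            password="secret", timeout=5)


    _con = connect()


    def get_connection():
        # Return the cached connection, reconnecting if a liveness check fails.
        global _con
        try:
            _con.run("SELECT 1")  # cheap ping; raises InterfaceError if the socket is dead
        except pg8000.exceptions.InterfaceError:
            _con = connect()  # the server dropped the old connection; open a new one
        return _con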

tlocke commented 2 years ago

Hi @jvhellemond, one idea might be to use a connection pool, such as one from SQLAlchemy (https://docs.sqlalchemy.org/en/14/core/pooling.html); they can be used with plain DBAPI connections. I've used SQLAlchemy on long-running servers and haven't had any connection problems that I can remember. I think that's because each pooled connection has a maximum lifetime, and that lifetime is shorter than the time the server takes to close old connections. So maybe you could use that strategy of a maximum lifetime for the connections in your pool?
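For example, roughly following the pooling docs linked above (assuming SQLAlchemy 1.4, with hypothetical connection parameters), a sketch like the following keeps each pg8000 DBAPI connection for at most recycle seconds, so the pool replaces it before the server would close it:

    import pg8000.dbapi
    from sqlalchemy.pool import QueuePool


    def getconn():
        # Hypothetical connection parameters, for illustration only.
        return pg8000.dbapi.connect(
            user="app_user", host="db.example.com", database="app",
            password="secret")


    # Connections older than 300 seconds are discarded and replaced on checkout,
    # so a pooled connection never outlives the server's idle timeout (assuming
    # that timeout is longer than 300 seconds).
    pool = QueuePool(getconn, pool_size=5, max_overflow=10, recycle=300)

    con = pool.connect()   # a proxy around a pg8000 DBAPI connection
    cur = con.cursor()
    cur.execute("SELECT now()")
    print(cur.fetchall())
    con.close()            # returns the connection to the pool instead of closing it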