Closed: willruggiano closed this 5 months ago
Thanks for reporting. I got that once too, but I can't replicate it. At first thought, it seems to be related to resources that aren't being freed.
If you (or anyone) finds a way to replicate this, please let me know.
Hello @kndndrj,
TL;DR: I can replicate and (potentially) fix this issue.
First of all, I just wanted to thank you for this wonderful plugin. I especially love the call log functionality; it's incredibly useful.
But to the topic at hand. I encountered this issue as well and came to the conclusion that Dbee more or less never closes its connections to PostgreSQL. As `max_connections` is 100 by default in Postgres, you have to have a pretty long session to run into it, or work against a database that is already close to the maximum.

It's easy to tell that connections accumulate by, for example, executing the following query multiple times with `BB`:

```sql
SELECT pid, state, wait_event, query FROM pg_stat_activity;
```
The result will include one additional row with that query for each query you run. It will look something like this:

```
  │ pid │ state  │ wait_event          │ query
──┼─────┼────────┼─────────────────────┼─────────────────────────────────────────────────────────────
1 │ 63  │ <nil>  │ AutoVacuumMain      │
2 │ 64  │ <nil>  │ LogicalLauncherMain │
3 │ 92  │ idle   │ ClientRead          │ SELECT pid, state, wait_event, query FROM pg_stat_activity;
4 │ 93  │ idle   │ ClientRead          │ SELECT pid, state, wait_event, query FROM pg_stat_activity;
5 │ 95  │ idle   │ ClientRead          │ SELECT pid, state, wait_event, query FROM pg_stat_activity;
6 │ 96  │ active │ <nil>               │ SELECT pid, state, wait_event, query FROM pg_stat_activity;
7 │ 60  │ <nil>  │ BgWriterHibernate   │
8 │ 59  │ <nil>  │ CheckpointerMain    │
9 │ 62  │ <nil>  │ WalWriterMain       │
```
As we can see, the server is waiting for the client (Dbee), which has no intention of doing anything else with that query.
To reproduce, first start a PostgreSQL server with a low value for `max_connections`. Below is a Docker command that starts one with a value of 4, and a Dbee connection string to connect to it:

```shell
docker run --rm -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:16 -N 4

export DBEE_CONNECTIONS='[
  {
    "name": "postgres",
    "url": "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable",
    "type": "postgres"
  }
]'
```
Note: Don't set `-N` to anything less than 4; the server will not be able to start due to connections reserved for the superuser.
Then start Dbee and execute any 4 queries, filling the connection limit. The 5th query will then produce the following error:

```
Call execution failed after 0.006 seconds
Reason:
c.db.Conn: pq: sorry, too many clients already
```
When a result set returns something, we hit this code path in the core client:
That callback is executed when `ResultStream.Close()` is called, but at least in the PostgreSQL part of Dbee, I can't see that ever actually happening.

I'll open a PR in a moment where I close `ResultStream`s in some places that I found by haphazardly tracing around in the Postgres-relevant parts of the code base.
I eventually get this error after running queries with Dbee for an extended period of time (keeping the Dbee windows open).