periclesroalves opened this issue 6 years ago
hmmm that's strange - can you submit an example piece of code I can use to reproduce locally? That shouldn't happen.
@periclesroalves sounds like you are mistakenly using a connection from the pool. It is not suitable for LISTEN/NOTIFY; you need a permanent connection for that, i.e. a `Client` created directly and used without releasing it.
@brianc here is a small example that still gives me the same problem:
```js
const { Client } = require('pg');

(async () => {
  const client = new Client({ user: '', host: '', database: '', password: '', port: 5432 });
  client.on('notification', console.log);
  await client.connect();
  await client.query(`LISTEN mychannel`);
  setInterval(async () => {
    console.log('query');
    await client.query('SELECT TRUE');
  }, 10000);
})();
```
I run `SELECT pg_notify('mychannel', '' || random());` a couple of times, but the notification callback is only called in bursts, right after the query in the interval, rather than being fired as soon as the notification is available. The behavior is the same if I wrap `pg_notify` within an `INSERT` trigger. As I mentioned, the code works fine against a local PG database, but the behavior changes as soon as I point it to the remote DB.
@vitaly-t I'm using a dedicated connection. In fact, the notifications only arrive when I issue a query against this specific connection. If I remove the interval query or simply query against the pool, the notifications never arrive.
> the behavior changes as soon as I point it to the remote DB.
This library's behavior doesn't change when switching from a local to a remote database. The things that do change are your database configuration, network latency, database load, and database priority; those are what may affect how quickly notifications arrive. It could be that your database provider doesn't support notifications correctly, as notifications have the peculiarity of being initiated by the server and pushed to the client, i.e. without any request from the client. Some servers may disable such functionality on purpose.
Are you using Azure, @periclesroalves? I got the same problem using Azure Database for PostgreSQL.
@robsonpeixoto yes, this behavior is specific to Azure PostgreSQL. I went through the server configuration options, but couldn't find anything that changed this behavior at the time.
@periclesroalves Just a guess but maybe they're running something in between your client and the actual database on Azure that's pretending to be a full connection but in reality is some type of pooler. I recall someone from Microsoft mentioning that they have that type of thing in regards to a different topic (see: https://www.postgresql.org/message-id/flat/DM2PR03MB4168F3C796B2965FDC4CF9991C00%40DM2PR03MB416.namprd03.prod.outlook.com)
As it's specific to Azure, I'd suggest reaching out to Microsoft / Azure support to see if they can recreate this issue
I contacted Azure support and they said it would take at least 4 months to fix the bug. If changing the cloud provider is not an option, keep the hack:
```js
setInterval(async () => {
  console.log('query');
  await client.query('SELECT TRUE');
}, 10000);
```
Does your remote DB use PgBouncer for connection pooling? If so, your pool might be configured with the wrong `pool_mode`: https://www.pgbouncer.org/config.html. Only the `session` mode allows connections that persist beyond a single transaction or statement.
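For illustration only, a minimal `pgbouncer.ini` fragment selecting session pooling might look like this (the database entry, host, and port are placeholders, not values from this thread):

```ini
[databases]
; placeholder entry pointing at the real PostgreSQL server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
; session mode pins one server connection to each client session,
; which LISTEN/NOTIFY requires; transaction/statement modes break it
pool_mode = session
```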
I'm having the same issue on a private postgres installation in a docker swarm. Or rather, it works initially, but after a while the connection goes stale or something and I stop receiving notifications 🤔
The keepalive hack seems to work for me too 🙌 thanks!
In my project it works correctly 24/7 on my local PC, but when I run it with Docker it stops receiving notification events after a while. I have also spotted the idle session disappearing in the database session manager.
I have the same issue. Has anybody found a solution?
I have the same issue. After a while, the Node.js connection to Postgres that issued LISTEN stops receiving notifications.
NAT gateways, firewalls, and other middleboxes can silently drop idle connections. This probably applies if your database is in some public cloud, e.g. AWS or Azure. Similar issues can be observed with slow (> 5 minutes) queries.
If this is what's causing your issue, you can try one or more of the following mitigations:

- Run a periodic heartbeat query (on the `LISTEN` connection) to make sure the connection is / stays alive.
- The heartbeat query should have a short timeout (with the `query_timeout` connection parameter, or with a custom solution).
- As long as the heartbeat query can succeed, you should receive your notifications.
- If the heartbeat query fails or times out, you need to reestablish the `LISTEN` connection.
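The heartbeat-with-timeout idea can be sketched as a small generic helper. This is only a sketch, not part of node-postgres: `queryFn` and `onDead` are hypothetical callbacks that you would wire to something like `client.query('SELECT 1')` and your own reconnect routine.

```javascript
// Runs queryFn every intervalMs; any rejection, or a response slower
// than timeoutMs, is treated as a dead connection and reported once
// via onDead, after which the heartbeat stops itself.
function startHeartbeat(queryFn, onDead, intervalMs = 10000, timeoutMs = 2000) {
  const timer = setInterval(async () => {
    let timeoutId;
    const timeout = new Promise((_, reject) => {
      timeoutId = setTimeout(() => reject(new Error('heartbeat timed out')), timeoutMs);
    });
    try {
      await Promise.race([queryFn(), timeout]);
    } catch (err) {
      clearInterval(timer); // stop probing the dead connection
      onDead(err);          // caller reestablishes the LISTEN connection
    } finally {
      clearTimeout(timeoutId); // don't keep the process alive for a lost race
    }
  }, intervalMs);
  return () => clearInterval(timer); // manual stop
}
```

Separately, node-postgres clients accept a `keepAlive: true` option that enables TCP keepalive probes on the underlying socket, which can also help keep middleboxes from reaping an idle connection.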
LISTEN/NOTIFY seems to work as expected when testing locally: every time NOTIFY is run I get a 'notification' event. However, as soon as I switch to a remote DB, the behavior changes, and I only get 'notification' events after actively doing something on the connection (e.g., running a query).
The issue is similar to the one described in this Stack Overflow thread/comment.
Currently this means that I have to keep polling by running a dummy query at a fixed interval. Is there a known fix for this? If not, is there a better / more elegant way of doing constant polling?