Open HelloKevinTian opened 4 years ago
hmmmm using amazon redshift there's a slight chance they have a minimum cursor size or they don't 100% respect the length read. They implement the postgres wire protocol but I'm not sure they follow it to the letter in all cases, since it's an analytics database. Does pg-cursor work with a normal postgres instance for you? Does cursor based reading work with a non-js driver? I don't have access to a redshift instance to test this myself. If you have 10,000 rows in the redshift instance does it still read all 10,000 at one time?
Hi, thanks for your response. I have run some additional tests on the cursor. For now I have the following results.
For the cursor I use the following function:
import { PoolClient, QueryConfig } from 'pg';
import { v4 as uuidv4 } from 'uuid';

export async function* iterate(client: PoolClient, query: QueryConfig, chunkSize: number) {
  const name = uuidv4(); // or some other unique name
  try {
    // Note: DECLARE requires an open transaction unless WITH HOLD is used.
    await client.query(`DECLARE "${name}" NO SCROLL CURSOR FOR ${query.text}`, query.values);
    while (true) {
      const record = await client.query(`FETCH ${chunkSize} "${name}"`);
      if (record.rows.length === 0) {
        break;
      }
      yield record;
    }
  } finally {
    try {
      await client.query(`CLOSE "${name}"`);
    } catch (err) {
      console.log(err);
    }
  }
}
and use the function like this:
const iterator = iterate(client, { text: 'select * from generate_series(0, 500)' }, 10);
for await (const r of iterator) {
  console.log(r);
}
I'm having a similar problem against Pgpool-II since version 4.1.0 (it works with 4.0.25).
The following code works against plain Postgres, but not against Pgpool-II (this is the Pgpool-II middleware, NOT pg-pool from this library):
import pg from 'pg';
import Cursor from 'pg-cursor';

const pgClient = new pg.Client();
await pgClient.connect();

const cursor = pgClient.query(new Cursor('SELECT * FROM generate_series(1, 500)'));
let rows = [];
do {
  rows = await cursor.read(100);
} while (rows.length > 0);
await cursor.close();
await pgClient.end();
I'm running into a timeout against Pgpool-II.
The workaround of doing it manually (see https://github.com/brianc/node-postgres/issues/2140#issuecomment-616638067) works in this case too.
Issue also opened on the Pgpool-II side: https://github.com/pgpool/pgpool2/issues/59#issue-2416508532
EDIT: after re-reading this issue, I'm not sure it's really related (even if the same workaround applies); tell me if you'd prefer a separate issue.
@Cactusbone That’s probably the “statement-level load balancing” feature. Indeed not related to this issue, and probably not related to node-postgres either, so you don’t need to open another issue.
Hi, here is my test code.
The dim_app table, which comes from AWS Redshift, contains 100 rows. I just want pg-cursor to fetch 3 at a time, but I always get all 100 rows in the response.
Can anyone help? Thanks a lot.
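The reported symptom can be mimicked with an in-memory mock: a well-behaved cursor returns at most `n` rows per `read(n)`, while a server that ignores the row limit on Execute hands back the whole result set at once. The `mockCursor` function below is invented for illustration only; it is not pg-cursor or Redshift itself:

```javascript
// Purely illustrative mock (not pg-cursor): compares a cursor that honours
// the row limit in read(n) with one that ignores it, which is the symptom
// described against Redshift (read(3) returning all 100 rows).
function mockCursor(rows, honorsLimit) {
  let pos = 0;
  return {
    read(n) {
      // A compliant server sends at most n rows per Execute;
      // a non-compliant one sends everything that remains.
      const take = honorsLimit ? Math.min(n, rows.length - pos) : rows.length - pos;
      const out = rows.slice(pos, pos + take);
      pos += take;
      return out;
    },
  };
}

const data = Array.from({ length: 100 }, (_, i) => i);
console.log(mockCursor(data, true).read(3).length);  // → 3   (expected)
console.log(mockCursor(data, false).read(3).length); // → 100 (observed symptom)
```

If a non-js driver shows the same behaviour against the same Redshift instance, that would point at the server ignoring the portal row limit rather than at pg-cursor.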