Closed: aljazerzen closed this issue 9 months ago.
RowsIter does not buffer all of the rows in memory: if you have a large response and only read a little bit of it, the Postgres server will block trying to send the remaining rows.

Depending on your access patterns and the size of the response, you may want to consider creating a portal or using a COPY query so the server can handle the gradual reads more efficiently.
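To make the portal suggestion concrete, here is a minimal sketch using the blocking `postgres` crate: a portal is created inside a transaction with `Transaction::bind`, and `query_portal_raw` asks the server for at most a fixed number of rows, so nothing beyond that batch is sent. The connection string and batch size are placeholders.

```rust
use postgres::fallible_iterator::FallibleIterator;
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Placeholder connection string; adjust for your environment.
    let mut client = Client::connect("host=localhost user=postgres", NoTls)?;

    // Portals only live inside a transaction.
    let mut tx = client.transaction()?;
    let portal = tx.bind(
        "WITH RECURSIVE t(n) AS (
             VALUES (1)
             UNION ALL
             SELECT n + 1 FROM t WHERE n < 100000000
         )
         SELECT n FROM t",
        &[],
    )?;

    // Ask the server for at most 1024 rows; it sends no more than that,
    // so dropping the iterator early does not drain millions of rows.
    let mut rows = tx.query_portal_raw(&portal, 1024)?;
    while let Some(row) = rows.next()? {
        let _n: i32 = row.get(0);
    }
    Ok(())
}
```

Further batches can be fetched by calling `query_portal_raw` on the same portal again.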
Thanks for the quick response!
So you are saying that if RowsIter does not read the next batch of rows, the TCP buffer will not be freed and there will be blocking at the TCP level? So the server will not send more rows?

And you are saying that I should use Transaction::query_portal_raw or Transaction::copy_out? Are there any other downsides to using these two instead of a plain query?
I tried to query this:
WITH RECURSIVE t(n) AS (
VALUES (1)
UNION ALL
SELECT n+1 FROM t WHERE n < 100000000
)
SELECT n FROM t;
... and read only the first 1024 rows, but it still took quite a while to do that.
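For reference, the experiment above could be sketched like this with the blocking `postgres` crate (connection string is a placeholder). Even though only 1024 rows are read, the original query asked the server for the full result, so dropping the iterator still has to drain the remaining rows off the wire, which is presumably where the time went:

```rust
use postgres::fallible_iterator::FallibleIterator;
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Placeholder connection string; adjust for your environment.
    let mut client = Client::connect("host=localhost user=postgres", NoTls)?;

    let sql = "WITH RECURSIVE t(n) AS (
                   VALUES (1)
                   UNION ALL
                   SELECT n + 1 FROM t WHERE n < 100000000
               )
               SELECT n FROM t";

    // query_raw streams rows lazily on the client side, but the server
    // was asked for the complete result set.
    let mut rows = client.query_raw(sql, std::iter::empty::<i32>())?;
    let mut read = 0u32;
    while let Some(_row) = rows.next()? {
        read += 1;
        if read == 1024 {
            break;
        }
    }
    // Dropping the iterator must consume everything the server is still
    // sending before the connection is usable again.
    drop(rows);
    Ok(())
}
```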
Yep, it blocks at the TCP level.
Those are the ways to work with a portal or a copy query, yes.
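For completeness, the COPY route streams the result as raw bytes that the client reads on demand through a `Read` implementation, rather than as decoded rows. A minimal sketch, again with a placeholder connection string:

```rust
use std::io::Read;

use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Placeholder connection string; adjust for your environment.
    let mut client = Client::connect("host=localhost user=postgres", NoTls)?;

    // COPY ... TO STDOUT streams the result as a byte stream; the reader
    // pulls data from the socket as you read, instead of decoding rows.
    let mut reader = client.copy_out("COPY (SELECT generate_series(1, 1000000)) TO STDOUT")?;
    let mut buf = [0u8; 8192];
    let n = reader.read(&mut buf).expect("read from COPY stream");
    println!("read {n} bytes of the first chunk");
    Ok(())
}
```

The trade-off is that you get text (or binary) COPY format rather than typed rows, so the application has to parse the data itself.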
Say that I issue a query via Client::query_raw() and that query is to return gigabytes of data. If my application is slow and takes some time between consumptions of the RowsIter, will this library load the whole result into memory and just iterate over it, or will it load the result "on demand"?

If my application stops consuming the RowsIter and drops it after the first few rows were consumed, will this library load the whole result or only the required first few batches?

I guess this is not really a question about this library but a question about the PostgreSQL wire protocol.