Looks like that exception is coming from deep inside the PostgreSQL JDBC driver itself: https://github.com/pgjdbc/pgjdbc/blob/REL42.2.5/pgjdbc/src/main/java/org/postgresql/core/PGStream.java#L410-L422
You should be able to stream very large result sets lazily (from the JDBC point of view, rather than Clojure's) if you set things up correctly. In general, that means you need auto-commit OFF on your connection and you need to specify the fetch size in your next.jdbc call. See https://cljdoc.org/d/seancorfield/next.jdbc/1.0.13/doc/all-the-options (unfortunately, this is one of those areas where databases vary a lot, so PG may need a positive fetch size, a negative one, or zero, and/or auto-commit turned off, or...).
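For example, a minimal sketch of that setup (assuming a PostgreSQL `db-spec` and a `big_table` table, both hypothetical):

```clojure
(require '[next.jdbc :as jdbc])

;; Auto-commit must be OFF for pgjdbc to use a cursor; a positive
;; :fetch-size then makes the driver pull rows in batches instead of
;; buffering the entire result set in memory.
(with-open [conn (jdbc/get-connection db-spec {:auto-commit false})]
  (reduce (fn [n _row] (inc n)) ; count rows without realizing them all
          0
          (jdbc/plan conn
                     ["SELECT * FROM big_table"]
                     {:fetch-size 1000})))
```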
Closing this out, since it isn't a bug in next.jdbc. Feel free to continue to discuss the issue here, though...
Thanks for the pointers, Sean! You were spot on. For future me, when I google this problem again in a couple of years:
Set `{:auto-commit false}` on the connection, use `plan`, and tune the fetch size to your use case:

```clojure
{:fetch-size  4000
 :concurrency :read-only
 :cursors     :close
 :result-type :forward-only}
```
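In a full call, those options plug in like this (a sketch, reusing the hypothetical `db-spec` and table from the example above; `:id` is a placeholder column):

```clojure
(with-open [conn (jdbc/get-connection db-spec {:auto-commit false})]
  (run! (fn [row] (println (:id row))) ; keyword access avoids realizing full rows
        (jdbc/plan conn
                   ["SELECT * FROM big_table"]
                   {:fetch-size  4000
                    :concurrency :read-only
                    :cursors     :close
                    :result-type :forward-only})))
```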
Glad you got it working -- and thank you for reporting the solution back here.
I've updated the Tips & Tricks section of Friendly SQL Functions to include these options in the PostgreSQL section.
While using `jdbc/plan` on a large read, I'm getting a `java.lang.NegativeArraySizeException`. A few search hits indicate it may be caused by a payload larger than 1 GB. In this case, no single record is that size, but I happen to know the total output in CSV amounts to about 14 GB. The operation works fine with a `LIMIT` on the data. Unsure if this is an issue with JDBC, next.jdbc, or PostgreSQL.
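The failing read is essentially a bare `plan` over the whole table with default options, something like this (a hypothetical reconstruction; the table name is invented):

```clojure
;; Hypothetical reconstruction: with auto-commit on and no :fetch-size,
;; pgjdbc buffers the entire result set client-side before any row is
;; handed to the reducing function.
(reduce (fn [n _row] (inc n))
        0
        (jdbc/plan db-spec ["SELECT * FROM big_table"]))
```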
My dependencies are:
The exception: