Closed jasongoodwin closed 6 years ago
Hey,
Thanks for the report.
I think I already fixed that with commit: https://github.com/WegenenVerkeer/akka-persistence-postgresql/commit/74596afef4fb760efaa9769a19241ef7df5a75c6#diff-e2f0b6b1898906ec9fcbc7afba19fc4cR106
We use db.stream to stream the query results from the PostgreSQL server (with backpressure). However I did not RTFM (http://slick.lightbend.com/doc/3.2.1/dbio.html) and forgot to add .withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = 1000)
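For context, the fix amounts to applying those statement parameters to the streamed action. A minimal sketch (the parameter values are the ones from the commit above; journalTable, persistenceId, db and the Akka Streams Source are illustrative assumptions, not the plugin's actual names):

```scala
import akka.stream.scaladsl.Source
import slick.jdbc.PostgresProfile.api._
import slick.jdbc.{ResultSetConcurrency, ResultSetType}

// Without ForwardOnly/ReadOnly and an explicit fetchSize, the PostgreSQL
// JDBC driver materializes the whole result set in memory before the
// first element is emitted, which is what caused the OOM on large journals.
val action = journalTable
  .filter(_.persistenceId === persistenceId)
  .sortBy(_.sequenceNumber)
  .result
  .withStatementParameters(
    rsType = ResultSetType.ForwardOnly,
    rsConcurrency = ResultSetConcurrency.ReadOnly,
    fetchSize = 1000
  )
  .transactionally // PostgreSQL only uses the fetch size inside a transaction

val eventSource = Source.fromPublisher(db.stream(action))
```

Note that the .transactionally part matters on PostgreSQL: outside a transaction the driver ignores the fetch size and falls back to buffering everything.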
However that fix is apparently not yet released. I'll try to release a new version soon.
@jasongoodwin I released version 0.9.0, which contains the streaming fix mentioned above. The plugin should now be able to restore aggregates with lots of events, so I'll close this ticket.
If you find the time to test this release and you still encounter a problem, please feel free to reopen.
For aggregates with extremely large histories, the SQL query will still select ALL records in the journal. While the results are streamed, issuing a single unbounded query like that is still a problem at scale.
I introduced pagination in an older fork and this seems to work reasonably well. We've restored from millions of events. https://github.com/jasongoodwin/akka-persistence-postgresql/pull/1/files
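The approach in that fork can be sketched as keyset pagination over the sequence-number column: instead of one unbounded SELECT, repeatedly fetch the next page of events after the last sequence number seen. The sketch below uses an in-memory Vector as a stand-in for the journal table, and all names (Event, readPage, replayAll) are illustrative, not the fork's actual API:

```scala
// Keyset pagination sketch: each page is the SQL equivalent of
//   SELECT ... WHERE sequence_nr > ? ORDER BY sequence_nr LIMIT ?
final case class Event(sequenceNr: Long, payload: String)

def readPage(journal: Vector[Event], afterSeqNr: Long, pageSize: Int): Vector[Event] =
  journal.filter(_.sequenceNr > afterSeqNr).sortBy(_.sequenceNr).take(pageSize)

def replayAll(journal: Vector[Event], pageSize: Int): Vector[Event] = {
  @annotation.tailrec
  def loop(after: Long, acc: Vector[Event]): Vector[Event] = {
    val page = readPage(journal, after, pageSize)
    if (page.isEmpty) acc
    else loop(page.last.sequenceNr, acc ++ page) // resume after the last seen seqNr
  }
  loop(0L, Vector.empty)
}

val journal = (1L to 10L).map(n => Event(n, s"event-$n")).toVector
// Replays all 10 events in order, 3 at a time, without one unbounded query.
assert(replayAll(journal, pageSize = 3).map(_.sequenceNr) == (1L to 10L).toVector)
```

Paginating by sequence number (rather than OFFSET) keeps each page query cheap even millions of events deep, since it can use the index on the sequence-number column.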