Closed: warmwaffles closed this issue 2 years ago
We do multi-stepping when fetching. It was simply too slow to fetch individual rows and return them to the VM one at a time.
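For context, here is a minimal sketch of that multi-stepping idea in plain C against the sqlite3 API. This is illustrative only, not exqlite's actual NIF code; `fetch_chunk` and the in-memory schema are made-up. The point is that the statement is stepped up to `chunk_size` times per call and the accumulated rows are handed back in one batch, rather than crossing the NIF boundary once per row:

```c
#include <stdio.h>
#include <sqlite3.h>

/* Illustrative sketch (not exqlite's actual NIF): step a prepared
 * statement up to `chunk_size` times and report how the chunk ended.
 * Returns SQLITE_ROW if more rows likely remain, SQLITE_DONE when the
 * query is finished, or the error code if a step failed partway. */
static int fetch_chunk(sqlite3_stmt *stmt, int chunk_size, int *rows_read) {
    *rows_read = 0;
    for (int i = 0; i < chunk_size; i++) {
        int rc = sqlite3_step(stmt);
        if (rc != SQLITE_ROW)
            return rc; /* SQLITE_DONE, SQLITE_BUSY, or an error */
        /* A real NIF would convert the row's columns to Erlang terms
         * here; for the sketch we only count rows. */
        (*rows_read)++;
    }
    return SQLITE_ROW; /* chunk full; caller should ask for more */
}

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES (1),(2),(3);",
                 NULL, NULL, NULL);
    sqlite3_prepare_v2(db, "SELECT x FROM t", -1, &stmt, NULL);

    int rc, rows;
    do {
        rc = fetch_chunk(stmt, 2, &rows);
        printf("chunk of %d rows, status %d\n", rows, rc);
    } while (rc == SQLITE_ROW);

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return rc == SQLITE_DONE ? 0 : 1;
}
```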
@warmwaffles I'm seeing a segfault while the NIF is running multi_step (https://github.com/elixir-sqlite/exqlite/pull/191) when the outer db connection times out. So something still seems to be off, even with returning things to Elixir in small chunks.
@LostKobrakai were you able to change the chunk size and still get the same error? Just trying to collect as much info as I can.
Also, how big, on average, do you think each of those 66,000 records is?
I tried running the test in the PR I linked with chunk_size values of 1, 10, 1000, and 10000, and it succeeded for the latter two. But increasing the number of records from 10k to 50k made it segfault again for all chunk sizes. A larger chunk size might just make it fast enough, but it would still fail when hitting the timeout.
Okay, when I get some time today I'll dig into this more.
My PR should replicate the issue on the latest version. But yes I was confused why you reacted to the old thread as well :D
Hah, yeah, completely forgot I made this ticket to begin with. I had a case where I had a ton of time-series data stored and retrieving it was difficult.
For a large table of 66,000 records, I encounter timeouts when fetching. This may be a reason why esqlite added bulk fetching of rows instead of simply stepping. Although bulk fetching would be nice, a complication arises when an error happens midway through the steps: how to communicate that error back is a bit tricky.
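One possible shape for that, sketched in the same C terms as above (this is hypothetical, not what exqlite or esqlite actually do; `chunk_result` and `fetch_chunk_checked` are made-up names): bundle the rows read so far with the terminating status code, so a mid-chunk error can be reported without throwing away the partial batch.

```c
#include <sqlite3.h>

/* Hypothetical sketch: report a mid-chunk failure without discarding
 * rows already read, by returning both the partial count and the
 * terminating status. */
typedef struct {
    int status;         /* SQLITE_ROW (chunk full), SQLITE_DONE, or an error */
    int rows_read;      /* rows successfully stepped before `status` */
    const char *errmsg; /* sqlite3_errmsg(db) when status is an error */
} chunk_result;

static chunk_result fetch_chunk_checked(sqlite3 *db, sqlite3_stmt *stmt,
                                        int chunk_size) {
    chunk_result res = {SQLITE_ROW, 0, NULL};
    for (int i = 0; i < chunk_size; i++) {
        int rc = sqlite3_step(stmt);
        if (rc != SQLITE_ROW) {
            res.status = rc;
            if (rc != SQLITE_DONE)
                res.errmsg = sqlite3_errmsg(db); /* e.g. corruption mid-scan */
            break;
        }
        res.rows_read++; /* rows already decoded are still handed back */
    }
    return res;
}
```

On the Elixir side, a result like this would map naturally onto a tagged tuple that carries the partial rows alongside the error, instead of an error atom alone.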