Closed: tkovis closed this issue 3 years ago
Hey @tkovis, thanks for the report :pray: This looks like a duplicate of #1316, which was fixed in #1445 - the fix will be included in the next release. In the meantime, in your case (because you have a fixed set of transactions), `await-tx` at the end is indeed a reasonable workaround.
Unrelated - generally speaking, we tend to find that transactions of around ~1k operations perform best in Crux. It obviously depends hugely on the content of the documents, but 1k seems a good rule of thumb - going above 1k seems to have diminishing returns. You then only need to `await-tx` the final transaction, so that you're not waiting for Crux to catch up after every transaction.
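For anyone landing here, the batching advice above might look roughly like this. A minimal sketch assuming the `crux.api` Clojure client and an already-started `node`; the batch size, `bulk-ingest!` name, and doc shapes are illustrative, not part of the Crux API:

```clojure
(require '[crux.api :as crux])

(defn bulk-ingest!
  "Submit docs in ~1k-op transactions; await only the final tx."
  [node docs]
  (let [tx-ids (mapv (fn [batch]
                       (crux/submit-tx node
                                       (vec (for [doc batch]
                                              [:crux.tx/put doc]))))
                     (partition-all 1000 docs))]
    ;; block until the last submitted transaction is indexed -
    ;; earlier ones are indexed before it, so one await suffices
    (crux/await-tx node (last tx-ids))))
```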
HTH!
James
From experience, I suspect the "problematic frame" Java_org_rocksdb_RocksDB_getSnapshot is indicative of a native out-of-memory event. Adding `await-tx` should certainly reduce the overall memory pressure and avoid this possibility.
Edit: I may well be mistaken :)
@refset it's not an OOM event - the native error occurs because getSnapshot was called on a closed RocksDB node, which causes the segfault.
I've been trying to find a smart way to bulk insert a dataset with 200k entries, where each entry contains information about 4-8 entities. I'm checking unique constraints both before the transaction (`crux/q` + filter out entries that fail the constraint) and inside a transaction function.
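For context, the transaction-function side of that uniqueness check could be sketched like this. This assumes the `crux.api` client and a started `node`; the attribute `:user/email`, the function id `:assert-unique-email`, and the example doc are all hypothetical:

```clojure
(require '[crux.api :as crux])

;; install a transaction function that only puts the doc
;; if no existing entity has the same :user/email value
(crux/submit-tx node
  [[:crux.tx/put
    {:crux.db/id :assert-unique-email
     :crux.db/fn
     '(fn [ctx doc]
        (let [db      (crux.api/db ctx)
              clashes (crux.api/q db
                                  {:find  '[e]
                                   :where [['e :user/email (:user/email doc)]]})]
          (when (empty? clashes)
            [[:crux.tx/put doc]])))}]])

;; invoke it - the put is aborted if the email already exists
(crux/submit-tx node
  [[:crux.tx/fn :assert-unique-email
    {:crux.db/id :user-1 :user/email "alice@example.com"}]])
```

Because the function runs at index time against the transaction-time `db`, it closes the race window that a pre-transaction `crux/q` check alone would leave open.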
Crash logs: hs_err_pid27361.log, hs_err_pid30052.log, hs_err_pid30102.log, hs_err_pid30194.log, hs_err_pid30309.log, hs_err_pid30370.log
Also, sometimes I got:
The same errors occur without partitioning and with an empty db.
Playing around with different batch sizes (10k, 20k, 30k, ...), I could also reproduce this without a fatal exception at 50k, but after the previous error, inserting just 10 resulted in:
With further testing, I found that everything is OK if you use `await-tx`. Not sure if this is a real issue anymore.