Background compaction crashes with `java.lang.OutOfMemoryError`: merging loads all data onto the heap and exhausts it. We need to:
- replace `ByteArrayWrapper` with plain `byte[]` in internal structures; the profiler shows a ~50% memory reduction with 32-byte keys/values
- modify compaction to stream entries, rather than load everything into memory
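The streaming item above could look roughly like the following k-way merge, assuming each source file yields entries in sorted key order. Names (`StreamingMergeSketch`, `mergeStreams`) are illustrative, not the actual IODB API; only one head entry per source is held on the heap, so compaction memory is proportional to the number of files, not the total data size.

```scala
// Sketch only: merge pre-sorted per-file iterators lazily.
object StreamingMergeSketch {
  type Entry = (String, String) // real code would use byte[] keys/values

  def mergeStreams(sources: Seq[Iterator[Entry]]): Iterator[Entry] = {
    val buffered = sources.map(_.buffered)
    new Iterator[Entry] {
      def hasNext: Boolean = buffered.exists(_.hasNext)
      def next(): Entry = {
        // take the smallest head key; ties go to the earlier source,
        // which we assume holds the newer version of the key
        val live = buffered.filter(_.hasNext)
        val winner = live.minBy(_.head._1)
        val result = winner.next()
        // skip shadowed entries for the same key in older sources
        live.foreach { it =>
          while (it.hasNext && it.head._1 == result._1) it.next()
        }
        result
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val newer = Iterator("a" -> "2", "c" -> "9")
    val older = Iterator("a" -> "1", "b" -> "5")
    // newer "a" shadows older "a"; output stays sorted
    println(mergeStreams(Seq(newer, older)).toList)
  }
}
```

The same shape works with `byte[]` keys and an unsigned lexicographic comparator once the wrapper type is removed.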
Exceptions:
io.iohk.iodb.LSMStore$$anon$1 run
SEVERE: Background task failed
java.lang.OutOfMemoryError: GC overhead limit exceeded
at io.iohk.iodb.FileAccess$FILE_CHANNEL$.readData(FileAccess.scala:299)
at io.iohk.iodb.FileAccess$FILE_CHANNEL$.$anonfun$readKeyValues$2(FileAccess.scala:337)
at io.iohk.iodb.FileAccess$FILE_CHANNEL$.$anonfun$readKeyValues$2$adapted(FileAccess.scala:335)
at io.iohk.iodb.FileAccess$FILE_CHANNEL$$$Lambda$102/710072189.apply(Unknown Source)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
at scala.collection.TraversableLike$$Lambda$9/1068934215.apply(Unknown Source)
at scala.collection.immutable.Range.foreach(Range.scala:156)
at scala.collection.TraversableLike.map(TraversableLike.scala:234)
at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at io.iohk.iodb.FileAccess$FILE_CHANNEL$.readKeyValues(FileAccess.scala:335)
at io.iohk.iodb.FileAccess$SAFE$.readKeyValues(FileAccess.scala:394)
at io.iohk.iodb.LSMStore.$anonfun$keyValues$1(LSMStore.scala:945)
at io.iohk.iodb.LSMStore$$Lambda$25/644460953.apply(Unknown Source)
at scala.collection.immutable.List.map(List.scala:276)
at io.iohk.iodb.LSMStore.keyValues(LSMStore.scala:941)
at io.iohk.iodb.LSMStore.taskShardMerge(LSMStore.scala:786)
at io.iohk.iodb.LSMStore.$anonfun$taskSharding$11(LSMStore.scala:751)
at io.iohk.iodb.LSMStore$$Lambda$101/1892423954.apply$mcV$sp(Unknown Source)
at io.iohk.iodb.LSMStore$$anon$1.run(LSMStore.scala:473)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Another possible problem is an exception thrown after compaction fails; we need better recovery for failed background tasks:
[error] Exception in thread "main" java.lang.IllegalArgumentException: Negative position
[error] at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:718)
[error] at io.iohk.iodb.Utils.readFully(Utils.java:189)
[error] at io.iohk.iodb.FileAccess$FILE_CHANNEL$.readInt(FileAccess.scala:313)
[error] at io.iohk.iodb.FileAccess$FILE_CHANNEL$.getValue(FileAccess.scala:260)
[error] at io.iohk.iodb.FileAccess$SAFE$.getValue(FileAccess.scala:376)
[error] at io.iohk.iodb.LSMStore.getUpdates(LSMStore.scala:660)
[error] at io.iohk.iodb.LSMStore.get(LSMStore.scala:691)
[error] at io.iohk.iodb.bench.SimpleKVBench$.$anonfun$bench$7(SimpleKVBench.scala:80)
[error] at io.iohk.iodb.bench.SimpleKVBench$.$anonfun$bench$7$adapted(SimpleKVBench.scala:79)
[error] at scala.collection.Iterator.foreach(Iterator.scala:929)
[error] at scala.collection.Iterator.foreach$(Iterator.scala:929)
[error] at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
[error] at scala.collection.IterableLike.foreach(IterableLike.scala:71)
[error] at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
[error] at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
[error] at io.iohk.iodb.bench.SimpleKVBench$.$anonfun$bench$5(SimpleKVBench.scala:79)
[error] at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:156)
[error] at io.iohk.iodb.bench.SimpleKVBench$.$anonfun$bench$4(SimpleKVBench.scala:72)
[error] at io.iohk.iodb.TestUtils$.runningTimeUnit(TestUtils.scala:65)
[error] at io.iohk.iodb.bench.SimpleKVBench$.bench(SimpleKVBench.scala:70)
[error] at io.iohk.iodb.bench.SimpleKVBench$.main(SimpleKVBench.scala:26)
[error] at io.iohk.iodb.bench.SimpleKVBench.main(SimpleKVBench.scala)
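One possible shape for the recovery, sketched below under assumptions (this is not the IODB implementation, and `TaskGuard`/`checkFailed` are hypothetical names): capture the first background-task failure and surface it on the next foreground call, so `get()`/`update()` fail fast with the root cause instead of reading half-merged files and hitting errors like "Negative position".

```scala
import java.util.concurrent.atomic.AtomicReference

// Sketch only: poison the store on the first background failure.
class TaskGuard {
  private val failure = new AtomicReference[Throwable](null)

  // wrap a background task so its failure is recorded, not swallowed
  def guarded(body: => Unit): Runnable = new Runnable {
    def run(): Unit =
      try body
      catch { case t: Throwable => failure.compareAndSet(null, t) }
  }

  // call at the top of foreground operations: rethrow the stored cause
  def checkFailed(): Unit = {
    val t = failure.get()
    if (t != null)
      throw new IllegalStateException("background task failed", t)
  }
}

object RecoverySketch {
  def main(args: Array[String]): Unit = {
    val guard = new TaskGuard
    guard.guarded(throw new RuntimeException("simulated merge failure")).run()
    try { guard.checkFailed(); println("ok") }
    catch {
      case e: IllegalStateException =>
        println("store poisoned: " + e.getCause.getMessage)
    }
  }
}
```

A further step (not shown) would be making compaction write to temporary files and swap them in atomically, so a failed merge leaves the old files intact.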