Open pigmej opened 9 months ago
It is definitely not as simple as batching writes with some deadline. To start with, a sqlite write transaction blocks all other writes, so it can't be used to accumulate multiple writes; hence there is a need for some auxiliary type to store pending writes. Alternatively, sqlite could be compiled with an extension that allows retriable concurrent writes, but that is likely not worth it.
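The auxiliary-type idea could look roughly like this sketch (all names here are hypothetical, and `flush` stands in for persisting the whole batch in a single sqlite transaction):

```go
package main

import (
	"fmt"
	"sync"
)

// writeBatch is a hypothetical auxiliary type: since a sqlite write
// transaction blocks every other writer, pending objects are accumulated
// in memory and handed to flush, which is expected to persist all of
// them in one transaction.
type writeBatch struct {
	mu      sync.Mutex
	pending [][]byte
	limit   int
	flush   func(objs [][]byte) error
}

// Add queues one object and flushes once the batch is full.
func (b *writeBatch) Add(obj []byte) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.pending = append(b.pending, obj)
	if len(b.pending) >= b.limit {
		objs := b.pending
		b.pending = nil
		return b.flush(objs)
	}
	return nil
}

func main() {
	committed := 0
	b := &writeBatch{limit: 3, flush: func(objs [][]byte) error {
		committed += len(objs) // stand-in for one sqlite transaction
		return nil
	}}
	for i := 0; i < 7; i++ {
		if err := b.Add([]byte{byte(i)}); err != nil {
			panic(err)
		}
	}
	fmt.Println("committed:", committed) // prints "committed: 6"; 1 object still pending
}
```

Note the failure mode this creates: anything still in `pending` at power-off is lost, which is exactly the recovery problem raised below.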
The other big problem is the application's assumption that if a dependent object is saved, then its dependency must already have been written to the store. To demonstrate, consider the ballot -> atx dependency: if the ballot is saved but the atx was discarded due to a power-off, the application will not be able to recover the missing link. So if batching is implemented, it has to be implemented as a general-purpose mechanism, not only for atxs.
Another consideration is that the fetcher/gossip uses the db to check an object for existence, to determine whether it should ask a peer for it or allow gossip about it. This part will have to be adjusted too, otherwise sync/gossip will behave more unpredictably.
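The adjustment would amount to making the existence check consult the not-yet-flushed batch as well as the db. A minimal sketch, with hypothetical names (`persisted` stands in for the sqlite table):

```go
package main

import "fmt"

// bufferedStore is a hypothetical wrapper: once writes are batched, the
// fetcher's existence check must also see objects still sitting in the
// write batch, not only rows already committed to sqlite, or gossip/sync
// will re-request objects the node already holds.
type bufferedStore struct {
	persisted map[string]bool // stand-in for the sqlite table
	pending   map[string]bool // objects queued but not yet flushed
}

// Has reports whether the object is known, persisted or pending.
func (s *bufferedStore) Has(id string) bool {
	return s.persisted[id] || s.pending[id]
}

func main() {
	s := &bufferedStore{
		persisted: map[string]bool{"atx-1": true},
		pending:   map[string]bool{"atx-2": true},
	}
	fmt.Println(s.Has("atx-1"), s.Has("atx-2"), s.Has("atx-3")) // prints "true true false"
}
```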
The last thing that came up is that if this is implemented, then persisting the collection of objects should be done in a background goroutine, not in the goroutine used for validating the object, as otherwise it will introduce more unpredictable latency.
Ok, then it means "too hard". Thanks for the info.
We should have the ability to make bulk ATX writes to the DB. Currently, when there are a lot of ATXs, we're doing far too many iops by storing each of them in a separate transaction. We should use logic similar to what we have on the poet side with bulk challenge writes.
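The shape being asked for is a single transaction around many inserts instead of one transaction per ATX. A sketch of that shape, abstracted behind a tiny interface so it runs without a real sqlite driver (the `tx` interface, `bulkSaveATXs`, and the SQL string are all hypothetical):

```go
package main

import "fmt"

// tx abstracts the handful of calls a real *sql.Tx would provide, so this
// sketch runs without a sqlite driver; the names here are hypothetical.
type tx interface {
	Exec(query string, args ...any) error
	Commit() error
}

// bulkSaveATXs writes every atx inside one transaction instead of one
// transaction per atx, mirroring the bulk challenge writes on the poet side.
func bulkSaveATXs(begin func() (tx, error), atxs [][]byte) error {
	t, err := begin()
	if err != nil {
		return err
	}
	for _, a := range atxs {
		if err := t.Exec("insert into atxs (atx) values (?);", a); err != nil {
			return err
		}
	}
	return t.Commit() // single commit (and fsync) for the whole batch
}

// fakeTx records calls for the demo below.
type fakeTx struct {
	execs   int
	commits int
}

func (f *fakeTx) Exec(string, ...any) error { f.execs++; return nil }
func (f *fakeTx) Commit() error             { f.commits++; return nil }

func main() {
	f := &fakeTx{}
	atxs := [][]byte{{1}, {2}, {3}}
	if err := bulkSaveATXs(func() (tx, error) { return f, nil }, atxs); err != nil {
		panic(err)
	}
	fmt.Printf("execs=%d commits=%d\n", f.execs, f.commits) // prints "execs=3 commits=1"
}
```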
If this is hard, we should instead focus on the ATX merge.