TalDerei opened 2 months ago
Spent a bit of time thinking about this today. To make the remaining saves atomic, we need a few things:
IndexedDB supports atomic updates, and we already make use of them for TCT updates. However, the staging area is the important piece: we need a way to queue up all of the save actions so that, later within that same block, they can be accessed as if they were already committed.
For example, the infamous auction upsert loop. The comment and code placement are there because it depends on the auction NFT metadata already being saved in the database before it can associate a note with the NFT.
Another example is handling epoch transitions, which relies on requesting the appParams from the database. However, the current block could have updated them, so in an atomic world the processor would not know for sure without checking the staging area first.
Queuing up saves (and committing them all at once at the end) is definitely possible. But I believe we need to add a staging area and update the getters to consult it before falling through to database lookups.
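A minimal sketch of what that could look like (the `BlockStaging` name and the `dbGet`/`applyAll` callbacks are hypothetical, not from the codebase): getters check the in-memory staging map first, and `commit` hands every queued write to a single atomic apply step, e.g. one idb readwrite transaction.

```typescript
// Hypothetical sketch: writes queued during a block are readable within
// that block, then committed together at the end. Types are simplified.
type Key = string;
type Value = unknown;

class BlockStaging {
  // store name -> (key -> staged value)
  private staged = new Map<string, Map<Key, Value>>();

  // Queue a write; it becomes visible to get() immediately.
  stage(store: string, key: Key, value: Value): void {
    if (!this.staged.has(store)) this.staged.set(store, new Map());
    this.staged.get(store)!.set(key, value);
  }

  // Getters consult the staging area before falling back to the database.
  async get(
    store: string,
    key: Key,
    dbGet: (store: string, key: Key) => Promise<Value | undefined>,
  ): Promise<Value | undefined> {
    const pending = this.staged.get(store);
    if (pending?.has(key)) return pending.get(key);
    return dbGet(store, key);
  }

  // At end of block, apply all queued writes in one atomic step
  // (e.g. a single idb readwrite transaction), then clear the queue.
  async commit(
    applyAll: (writes: ReadonlyMap<string, Map<Key, Value>>) => Promise<void>,
  ): Promise<void> {
    await applyAll(this.staged);
    this.staged.clear();
  }
}
```

In the auction upsert loop above, the NFT metadata save would be staged, and the note-association step would read it back through `get` before anything has actually touched the database.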
A precursor to this should be addressing https://github.com/penumbra-zone/web/pull/1720, which distinguishes between tasks that should be managed by idb versus those that belong to the block processor.
Staging areas are idiomatic for implementing atomicity. In terms of the granularity of these atomic operations, are we considering buffering updates and applying them atomically every 1000 blocks during flushing, or every block?
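As a sketch of the per-N-blocks option (the `FlushBuffer` name and `commitAtomically` callback are hypothetical, just to illustrate the trade-off): writes accumulate in a buffer, and a flush commits them in one atomic step once per interval rather than once per block. A larger interval means fewer transactions, but a crash loses more un-flushed progress; with an interval of 1, every block commits atomically.

```typescript
// Hypothetical write buffer flushed every N blocks.
const FLUSH_INTERVAL = 1000; // per the "every 1000 blocks" option above

type Write = { store: string; key: string; value: unknown };

class FlushBuffer {
  private pending: Write[] = [];

  push(write: Write): void {
    this.pending.push(write);
  }

  // Called after each processed block; flushes on interval boundaries.
  // Returns true if a flush happened at this height.
  async maybeFlush(
    height: number,
    commitAtomically: (writes: readonly Write[]) => Promise<void>,
  ): Promise<boolean> {
    if (height % FLUSH_INTERVAL !== 0 || this.pending.length === 0) {
      return false;
    }
    await commitAtomically(this.pending);
    this.pending = [];
    return true;
  }
}
```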
Maybe we should prioritize this next sprint, following https://github.com/penumbra-zone/web/pull/1858?
Processing blocks in the block processor isn't actually atomic, which introduces the possibility of local state corruption.