Currently it's possible for stale data from the large, slow batch-update task to arrive after data from the fast per-block task and overwrite committed fresh data. So it's a race of sorts, but not the kind we want.
There are two reasonable fixes:
1. Pin each indexer task to a particular block, and have the database record the block each row was last written from, so that updates are idempotent and a stale write can never replace a fresher one. This is the more complicated, general solution (see the sketch after this list).
2. Don't have the batch-update task re-index the per-block task's data, since doing so is wasteful anyway. This is the simpler solution, but it splits the code into additional paths.
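A minimal sketch of fix 1, assuming a SQLite-backed index; the table name, columns, and apply_update helper are hypothetical, not the project's actual schema. The idea is that every write carries the block height it was derived from, and the update only takes effect when that height is at least the height already stored, so a late, stale batch-task write cannot clobber fresher data, and replaying the same block is a harmless no-op.

```python
import sqlite3


def open_index(path=":memory:"):
    # Hypothetical index table: one row per account, tagged with the block
    # height the row was last written from.
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS index_state (
               account TEXT PRIMARY KEY,
               balance INTEGER NOT NULL,
               block_height INTEGER NOT NULL
           )"""
    )
    return db


def apply_update(db, account, balance, block_height):
    """Write `balance` for `account`, but only if the write comes from a
    block at least as new as the one already recorded. Re-applying the same
    block produces the same result, so the update is idempotent."""
    with db:
        # Insert the row if it is new; otherwise only overwrite it when the
        # incoming block is not older than the stored one.
        db.execute(
            """INSERT INTO index_state (account, balance, block_height)
               VALUES (?, ?, ?)
               ON CONFLICT(account) DO UPDATE SET
                   balance = excluded.balance,
                   block_height = excluded.block_height
               WHERE excluded.block_height >= index_state.block_height""",
            (account, balance, block_height),
        )


db = open_index()
apply_update(db, "alice", 100, block_height=42)  # fast per-block task
apply_update(db, "alice", 90, block_height=40)   # late, stale batch write: ignored
print(db.execute("SELECT balance, block_height FROM index_state").fetchone())
# -> (100, 42)
```

The same guard works with any store that supports conditional writes (a compare-and-set on the stored block height); the per-block and batch tasks can then race freely, since whichever one loses simply has its stale write dropped.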