jcape opened this issue 2 years ago
> However, this assumption is not enforced by the database:

I think Sam actually already did the first part of this, and it is now enforced by the database. We didn't do the "maybe" part; I think that's what remains here.
There are two distinct types of counts used by fog view:
Highest Known Block Count is used for two purposes:
Highest Processed Block Count is the block height which has been accounted for by fog view, nominally loaded into the view service's ORAM. Computing this number accurately is critical, because reporting the wrong number to clients will cause fog clients to show potentially incorrect balances (related: #1399).
In order to figure out what HPBC is, we need to know:
Once we have determined what the Highest Processed Block Count is, we can publish the Highest Known Block Count such that we have fully satisfied our obligations for all blocks up to that point.
Then we look at what data we have successfully loaded into the enclave: which blocks, and for which keys.
In the code (the block tracker in fog view) we increase the highest processed block count as far as we can, until we are missing some data or until a new obligation could appear, since ingest is running in parallel.
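That loop can be sketched roughly as follows. This is a hypothetical, heavily simplified model, not the real `block_tracker.rs` logic: `KeyStatus`, its fields, and the function name are invented for illustration, and real concerns like key retirement and pubkey expiry are omitted.

```rust
use std::collections::HashSet;

/// Hypothetical, simplified view of one ingress key's state
/// (names invented for illustration).
pub struct KeyStatus {
    /// First block this key is obligated to cover.
    pub start_block: u64,
    /// Block indices whose data has been loaded into the enclave.
    pub loaded: HashSet<u64>,
}

/// Advance the highest processed block count as far as possible:
/// stop at the first block for which some obligated key is missing data,
/// and never pass `highest_known`, because a new obligation could still
/// appear at or after that point (ingest runs in parallel).
pub fn highest_processed_block_count(highest_known: u64, keys: &[KeyStatus]) -> u64 {
    let mut count = 0u64;
    while count < highest_known {
        let block = count; // next block index that must be accounted for
        let all_accounted = keys
            .iter()
            .all(|k| block < k.start_block || k.loaded.contains(&block));
        if !all_accounted {
            break;
        }
        count += 1;
    }
    count
}
```

For example, a key covering blocks 0..3 that has only loaded blocks 0, 1, and 2 caps the result at 3, even if the highest known block count is larger.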
Fog view server assumes that a new obligation cannot appear before "highest_known_block_count", and currently it computes this in an ad hoc way based on its interactions so far with the fog database:
https://github.com/mobilecoinfoundation/mobilecoin/blob/75e92da91953a720f2e856f17e2e61eb1d9b0647/fog/view/server/src/block_tracker.rs#L177
However, this assumption is not enforced by the database:
https://github.com/mobilecoinfoundation/mobilecoin/blob/75e92da91953a720f2e856f17e2e61eb1d9b0647/fog/sql_recovery_db/src/lib.rs#L193
When ingest creates a new ingress key, it can specify any start_block value, and it currently does this using ledger_db.num_blocks(). But if its ledger_db is stale (because of a race), then this value may be much lower than the actual current number of blocks.
The goal of the ticket is: modify `fog_sql_recovery_db` so that when ingest proposes a `start_block` for a new key, the SQL checks what the highest known block in the database is, takes the later (max) of the two, and writes that to the database, so that a new ingress key can never suddenly appear before the "current" block height in the database. The call to `new_ingress_key` should return the actual start block value, and fog ingest should respect this. Maybe: we could also change how fog view computes `highest_known_block_count`, so that
it just gets this number from the database. It may be that the current code isn't broken, in which case we don't need to change it. But we should evaluate whether it makes sense to try to make it simpler.
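A minimal sketch of the proposed clamp, assuming the intent is that the accepted start block is never below the database's current highest known block. The function name is hypothetical; the real change would live in `fog_sql_recovery_db`'s `new_ingress_key`.

```rust
/// Hypothetical sketch of the proposed `new_ingress_key` clamping:
/// the accepted start block is never below the database's current
/// highest known block count.
pub fn accepted_start_block(proposed_start_block: u64, highest_known_block_count: u64) -> u64 {
    // If ingest's ledger_db was stale, `proposed_start_block` may be far
    // behind the database; writing it unchanged would create a new
    // obligation before `highest_known_block_count`, which fog view
    // assumes cannot happen. Clamp upward instead.
    core::cmp::max(proposed_start_block, highest_known_block_count)
}
```

The value this returns would be what is actually written to the database and reported back to ingest, so ingest can adopt the adjusted start block.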