0xPolygonMiden / miden-client

Client library that facilitates interaction with the Miden rollup

Revisit tag usage for imported notes #341

Open igamigo opened 1 month ago

igamigo commented 1 month ago

Currently, for notes that are exported without inclusion proofs from a client $A$ and imported on another client $B$ (where the note was created in a block that is ahead of client $B$'s view of the chain), the importing client may never receive inclusion proof data from syncs, depending on whether the note's tag is actually of interest to it. To solve this, we currently add the tags of non-committed notes at the moment of creating a sync request (in an "ephemeral" manner, meaning they are not persisted once sync data for the note has been received). However, as discussed, this puts more load on the node and can likely be avoided in a number of different ways (using other RPC calls, etc.).
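For reference, here is a minimal sketch of the current workaround, using hypothetical types and a made-up build_sync_note_tags helper rather than the actual miden-client API: tags of imported notes that still lack an inclusion proof are appended to the sync request on the fly and are never persisted.

```rust
// Hypothetical stand-ins for the real types, just to keep the sketch self-contained.
type NoteTag = u32;
struct NoteInclusionProof;

struct InputNoteRecord {
    tag: NoteTag,
    inclusion_proof: Option<NoteInclusionProof>,
}

/// Collect the tags to send with the next sync request: the tags the client already
/// tracks plus, "ephemerally", the tags of imported notes that still lack an
/// inclusion proof. The extra tags are never added to the tracked-tag set.
fn build_sync_note_tags(
    tracked_tags: &[NoteTag],
    expected_notes: &[InputNoteRecord],
) -> Vec<NoteTag> {
    let mut tags = tracked_tags.to_vec();
    tags.extend(
        expected_notes
            .iter()
            .filter(|note| note.inclusion_proof.is_none())
            .map(|note| note.tag),
    );
    tags.sort_unstable();
    tags.dedup();
    tags
}
```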

bobbinth commented 1 month ago

One potential way to do this may be to somehow mark the notes which are missing the MMR part of the proof and then, once the client syncs beyond the block in which the note was created, make a GetBlockHeaderByNumber request (with include_mmr_proof = true) to get the MMR inclusion proof for the relevant block (if needed).

So, let's say the client is currently at block 100, and we've imported a note with an inclusion path against block 150. Let's also say that after the next sync the client is at block 200. At this point, the client would check if it has any notes which are missing MMR proofs, and then make a request to GetBlockHeaderByNumber(block_num=150, include_mmr_proof=true). The returned MMR proof (and the block header) would be used to make the note "consumable".
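A minimal sketch of this flow, assuming a hypothetical NodeRpc trait standing in for the GetBlockHeaderByNumber endpoint (names and signatures are illustrative, not the real client API):

```rust
type BlockNumber = u32;

// Placeholder types for the data returned by the node.
struct BlockHeader;
struct MmrProof;

trait NodeRpc {
    /// Mirrors GetBlockHeaderByNumber with include_mmr_proof = true/false.
    fn get_block_header_by_number(
        &self,
        block_num: BlockNumber,
        include_mmr_proof: bool,
    ) -> Result<(BlockHeader, Option<MmrProof>), String>;
}

/// After a sync that moved the chain tip forward, fetch MMR proofs for any note
/// whose inclusion block is now covered by the client's view of the chain.
fn fill_missing_mmr_proofs<R: NodeRpc>(
    rpc: &R,
    chain_tip: BlockNumber,
    blocks_missing_mmr: &[BlockNumber],
) -> Result<Vec<(BlockNumber, BlockHeader, MmrProof)>, String> {
    let mut fetched = Vec::new();
    for &block_num in blocks_missing_mmr {
        // E.g. a note proven against block 150 while the client was at block 100:
        // once the client syncs to block 200, 150 <= 200 and the proof can be fetched.
        if block_num <= chain_tip {
            let (header, proof) = rpc.get_block_header_by_number(block_num, true)?;
            if let Some(proof) = proof {
                fetched.push((block_num, header, proof));
            }
        }
    }
    Ok(fetched)
}
```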

igamigo commented 1 month ago

There are a bunch of variables in scenarios like this. Namely: whether the note record carries an inclusion proof (has_inclusion_proof), whether the client has the MMR data for the note's block (client_has_mmr), and whether the client has already synced past the note's block (synced_past_block).

There is also the option of the user having set verify at import time, to verify the note's existence.

Not all scenarios are interesting, but here are the ones relevant to this issue:

For (!has_inclusion_proof, !client_has_mmr):

If synced_past_block is false: At import time, we have the option to build the inclusion proof if the caller set verify_existence to true (by calling GetNotesById and GetBlockHeaderByNum). We are currently adding the tag to the sync request, which gets us both the MMR data and the note's inclusion proof. We want to avoid this, but that won't be possible without eventually calling both GetNotesById and GetBlockHeaderByNum. I think we originally discussed not always verifying the note's existence because it would incur some privacy loss (the node then knows which note the client is interested in), which is why we started adding the tag to requests.

If synced_past_block is true: Same as above, except that if the user does not verify the note's existence at import time, the client is softlocked with the note being unusable (unless re-imported), because adding the tag to the sync request does not get the client anything it needs.

If the note has an inclusion proof but the client does not have the MMR data, the client behaves the same way, except that it could avoid calling GetNotesById (it does not avoid it currently; that call depends only on whether the user set verify to true at import time).

So one way to make this all work is to follow @bobbinth's flow, but also potentially get the note's inclusion proof at sync time: after syncing, for any note that does not have an inclusion proof, call GetNotesById and save the result (a single call can cover multiple notes). After this, check whether we have all the necessary MMR data and get the nodes for any block we still need (one call per block).

The downside of including this GetNotesById call is that we incur the privacy loss I mentioned before. This is unavoidable if we don't know whether the note was included or not, unless we stop supporting exporting note records that don't have inclusion proofs. The other downside is that if the client has bad notes (e.g., notes that never ended up being included for whatever reason), this could be a constant overhead forever. For this last problem, the client could eventually mark the note as never included and stop considering it? Depending on the different flows the protocol might support, it might be difficult to tell.
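A minimal sketch of this combined post-sync flow (again with a hypothetical NodeRpc trait and placeholder types rather than the real client code): one batched GetNotesById call for all notes missing inclusion proofs, then one GetBlockHeaderByNumber call per block whose MMR data is still missing.

```rust
type NoteId = [u8; 32];
type BlockNumber = u32;

// Placeholder types; only the block number of an inclusion proof matters here.
struct NoteInclusionProof {
    block_num: BlockNumber,
}
struct BlockHeader;
struct MmrProof;

trait NodeRpc {
    /// One batched call covering every note that still lacks an inclusion proof;
    /// only committed notes come back with a proof.
    fn get_notes_by_id(
        &self,
        note_ids: &[NoteId],
    ) -> Result<Vec<(NoteId, NoteInclusionProof)>, String>;

    fn get_block_header_by_number(
        &self,
        block_num: BlockNumber,
        include_mmr_proof: bool,
    ) -> Result<(BlockHeader, MmrProof), String>;
}

fn complete_imported_notes<R: NodeRpc>(
    rpc: &R,
    notes_without_proof: &[NoteId],
    blocks_with_mmr_data: &[BlockNumber],
) -> Result<(), String> {
    // 1. Ask the node which of the pending notes were committed. This is where the
    //    privacy trade-off lives: the node learns which note IDs the client cares about.
    let proofs = rpc.get_notes_by_id(notes_without_proof)?;

    // 2. For every block we now need MMR data for but don't have yet, make one request.
    let mut needed_blocks: Vec<BlockNumber> = proofs
        .iter()
        .map(|(_, proof)| proof.block_num)
        .filter(|block_num| !blocks_with_mmr_data.contains(block_num))
        .collect();
    needed_blocks.sort_unstable();
    needed_blocks.dedup();

    for block_num in needed_blocks {
        let (_header, _mmr_proof) = rpc.get_block_header_by_number(block_num, true)?;
        // ...persist the header, MMR data, and inclusion proof so the note becomes consumable...
    }
    Ok(())
}
```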

bobbinth commented 1 month ago

> The other downside is that if the client has bad notes (e.g., notes that never ended up being included for whatever reason), this could be a constant overhead forever. For this last problem, the client could eventually mark the note as never included and stop considering it? Depending on the different flows the protocol might support, it might be difficult to tell.

It feels like we may need a somewhat more sophisticated strategy here. For example, something like "exponential backoff": if the expected note hasn't appeared on chain, double the time until the next check. We can also set an upper limit: once the interval exceeds some upper bound, stop checking and mark the note as "stale" or something like that.
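A rough sketch of what such a backoff could look like (the cap and names are made up purely for illustration):

```rust
type BlockNumber = u32;

#[derive(Debug, PartialEq)]
enum NoteCheckState {
    /// Re-check the note's inclusion once the chain reaches this block height.
    CheckAfter(BlockNumber),
    /// Give up: the note is assumed to never appear on chain.
    Stale,
}

// Arbitrary upper bound on the check interval (in blocks), for illustration only.
const MAX_CHECK_INTERVAL: u32 = 1 << 16;

/// Each time a check finds the note still missing, double the interval until the
/// next check; once the interval exceeds the cap, mark the note as stale.
fn next_check(current_block: BlockNumber, previous_interval: u32) -> NoteCheckState {
    let next_interval = previous_interval.saturating_mul(2).max(1);
    if next_interval > MAX_CHECK_INTERVAL {
        NoteCheckState::Stale
    } else {
        NoteCheckState::CheckAfter(current_block + next_interval)
    }
}
```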

mFragaBA commented 4 days ago

Should this be closed, considering #375 got merged?