Open itegulov opened 1 year ago
The fact that we can execute 1 email 2+ times makes us a centralized solution :/ Especially if we use a traditional DB. The solution to this problem should lie on-chain IMO. We should keep some kind of metadata that will help us understand which email was processed last. Not sure if an ordinal number can guarantee that.
> The fact that we can execute 1 email 2+ times makes us a centralized solution :/ Especially if we use a traditional DB.
I don't think it makes us a centralized solution, it just makes us an insecure solution :sweat_smile:
> The solution to this problem should lie on-chain IMO.
Yes, I mentioned this in the second paragraph, but just to repeat: relayer is just one of the actors that can submit emails to the controller contract. Hence, the replay protection should go into the controller contract. Something like keeping track of email SHAs that we have already received might suffice.
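To make the idea concrete, here is a minimal sketch of what that contract-side dedup could look like. Everything here is illustrative: `EmailHash`, `Controller` and `submit_email` are hypothetical names, and a real NEAR contract would use persistent storage rather than an in-memory `HashSet`, but the accept/reject logic is the same.

```rust
use std::collections::HashSet;

/// Stand-in for whatever digest the contract stores,
/// e.g. the SHA-256 of the raw email.
type EmailHash = [u8; 32];

/// Hypothetical controller contract state: the set of email
/// hashes that have already been accepted.
struct Controller {
    processed: HashSet<EmailHash>,
}

impl Controller {
    fn new() -> Self {
        Controller { processed: HashSet::new() }
    }

    /// Returns true if the email is new and was accepted,
    /// false if it was seen before (i.e. a replay).
    fn submit_email(&mut self, hash: EmailHash) -> bool {
        // `insert` returns false when the value is already present,
        // which is exactly the replay case.
        self.processed.insert(hash)
    }
}
```

Since the check lives in the contract, it protects against a replay regardless of which actor (relayer or anyone else) submitted the email.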
> We should keep some kind of metadata that will help us understand which email was processed last. Not sure if an ordinal number can guarantee that.
That's an interesting idea. It also makes me think there could be an attack vector where a rogue relay skips certain emails. Let's say the relay receives "transfer 10 NEAR from A to B" and "transfer 20 USDC from B to A". It chooses to ignore the latter and A is screwed. Not sure if this makes sense, but in any case I think we can safely ignore this for now, as a rogue relay would not be able to gain anything by ignoring key recovery emails.
To summarize, I don't think anything we do inside the relayer's architecture would impact our security or centralization, so we should be free to choose whatever suits our needs and schedule best.
Currently, relay treats its startup as the "genesis" and ignores all existing emails. Obviously, that's not how it should work as we will have downtime, server restarts etc. We should be able to tell which existing emails we have already processed and which we haven't.
This is sort of tangential, but we should think about replay attacks sometime in the future when we decide to ship transfers. The replay protection logic should go into the auth contract itself though, as the relay is not the only actor that can submit emails to the auth contract. This means that for now we could afford some imprecision in detecting which emails we have not processed: repeating `init`, `add_key` and `delete_key` does not affect anything.

I can see two major routes we can go here:
### Data Persistence
Let's persist all emails we have processed in a DB. Then, on startup, we identify the first missing email (by its ordinal number) and try to fetch starting from there.
The logic here can get complex pretty fast. Also, it will be impossible to guarantee consistency with the submitted transaction (see the second paragraph), meaning potential double-submitted transactions. On the upside, introducing persistence can benefit us in the long run. Many other features down the line require it: retry mechanisms, SMTP replies (e.g. occasionally poll the status of pending transactions, mark them as complete and send a reply through SMTP) and other things.
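The startup scan for this route boils down to finding the first gap in the recorded ordinals. A minimal sketch, assuming ordinals start at 1 and that the DB can hand us the set of processed ordinals (both assumptions, not decided anywhere yet):

```rust
use std::collections::HashSet;

/// Given the set of ordinal numbers already recorded in the DB,
/// return the first missing one so the relay knows where to
/// resume fetching. Assumes ordinals start at 1.
fn first_missing_ordinal(processed: &HashSet<u64>) -> u64 {
    // Walk up from 1 until we hit an ordinal we haven't seen.
    (1u64..).find(|n| !processed.contains(n)).unwrap()
}
```

Note this deliberately resumes from the first gap rather than from `max + 1`, so emails that slipped through mid-run get retried too; combined with contract-side replay protection, re-fetching already-submitted emails is harmless.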
### Inspecting Chain Data on Startup
Let's use an RPC/indexer to find the last relevant tx for the auth contract, decode its email ordinal number and just start running the relay from that ordinal number.
This will be very simple to implement, but will also be very inflexible. Also, adding any ZK stuff might kill this approach.
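The decoding step for this route might look something like the sketch below. The RPC/indexer query itself is not shown, and the argument format (`ordinal=<n>` inside the tx args) is purely hypothetical, since we haven't fixed how the auth contract receives the ordinal:

```rust
/// Given the arguments of the last relevant transaction to the auth
/// contract, decode the email ordinal and return the ordinal the relay
/// should resume from. The "key=value,key=value" args format here is
/// a placeholder assumption, not the real contract interface.
fn resume_ordinal_from_tx_args(args: &str) -> Option<u64> {
    args.split(',')
        .find_map(|kv| kv.strip_prefix("ordinal="))
        .and_then(|v| v.trim().parse::<u64>().ok())
        // Resume from the email *after* the last processed one.
        .map(|last_processed| last_processed + 1)
}
```

The fragility mentioned above shows up here directly: if ZK proofs replace plaintext args, or the args format changes, this decoding breaks and there is no fallback state to recover from.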
CC @volovyks