
Spin processors out into bespoke services #569

Open · kayabaNerve opened 1 month ago

kayabaNerve commented 1 month ago

A single codebase to maintain and audit, an internal standard for interoperability, and a lack of duplicated code. This was the reasoning for writing the processor, singular.

To quote Douglas Adams,

> This has made many people very angry and has been widely regarded as a bad move.

What really highlighted this is the Ethereum integration. It does not benefit from most of the fee code, as it uses a relayer system (Serai doesn't have to deduct the fee, solely calculate a fair fee rate). It does not benefit from the scheduling, as it operates under the account model. It doesn't have branch/change addresses, as it has a constant external address deterministically derived from the first key generated.
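
To make the contrast concrete, here's a minimal sketch of the two fee models (function names and the fee math are hypothetical, not the processor's actual API):

```rust
/// Deducted-fee model (UTXO chains): the fee comes out of the outputs Serai
/// creates, so the processor has to amortize it across payments. Naive even
/// split, purely illustrative.
fn deducted_fee(payment_amounts: &mut [u64], fee: u64) {
  let per_payment = fee / payment_amounts.len() as u64;
  for amount in payment_amounts.iter_mut() {
    *amount = amount.saturating_sub(per_payment);
  }
}

/// Relayer model (the Ethereum case): Serai never deducts anything itself,
/// it solely has to calculate a fair fee rate for the relayer.
fn relayer_fee_rate(base_fee_per_gas: u64, priority_fee_per_gas: u64) -> u64 {
  base_fee_per_gas.saturating_add(priority_fee_per_gas)
}

fn main() {
  let mut payments = vec![100_000, 50_000, 25_000];
  deducted_fee(&mut payments, 3_000);
  assert_eq!(payments, vec![99_000, 49_000, 24_000]);
  assert_eq!(relayer_fee_rate(30, 2), 32);
}
```

The Ethereum integration only ever needs something shaped like the second function.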

All of this is what effectively justifies a bespoke processor for Ethereum. While we currently just set the fee to 0, causing the processor's fee code to effectively NOP, we still call every route, so we now have to review each one and ensure it's actually NOP'ing (we simply expect a lack of effects).

Then, while working on the Ethereum integration, I realized: https://github.com/serai-dex/serai/issues/470#issuecomment-2066494173

To quote directly here,

> For Bitcoin specifically, we can claim infinite max outputs, take in all outputs in the Bitcoin integration, and then create the tree (signing every item in the tree immediately as one gigantic batch).
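
As a rough illustration of that flow (the types and the output limit are hypothetical, not the actual Bitcoin integration), payments get split into a tree of transactions respecting a per-transaction output limit, and the whole tree is collected up front so every transaction can be signed immediately as one batch:

```rust
const MAX_OUTPUTS: usize = 16; // placeholder limit, not Bitcoin's actual bound

struct PlannedTx {
  // Leaf payments this transaction directly pays out.
  payments: Vec<u64>,
  // Child transactions funded by this transaction's branch outputs.
  children: Vec<PlannedTx>,
}

// Recursively split payments into a tree where no node needs more than
// MAX_OUTPUTS outputs.
fn plan_tree(payments: Vec<u64>) -> PlannedTx {
  if payments.len() <= MAX_OUTPUTS {
    return PlannedTx { payments, children: vec![] };
  }
  // Chunk the payments and give each chunk its own branch transaction.
  let chunk_size = payments.len().div_ceil(MAX_OUTPUTS);
  let children = payments.chunks(chunk_size).map(|chunk| plan_tree(chunk.to_vec())).collect();
  PlannedTx { payments: vec![], children }
}

// Collect every transaction in the tree so they can all be signed at once,
// instead of waiting on parents to confirm before planning children.
fn flatten<'a>(tx: &'a PlannedTx, out: &mut Vec<&'a PlannedTx>) {
  out.push(tx);
  for child in &tx.children {
    flatten(child, out);
  }
}

fn main() {
  // 1000 payments of 1 unit each, purely illustrative.
  let tree = plan_tree(vec![1; 1000]);
  let mut all = Vec::new();
  flatten(&tree, &mut all);
  println!("transactions to sign as one batch: {}", all.len());
}
```

The tree shape and the limit are assumptions for illustration; the point is just that every node can be planned, and therefore signed, up front.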

And it's with this that I finally have to accept the design philosophy of Godot.

Any universal API we try to create will be one we fight. While the current API is fine, and, with the Scheduler's modularity, achieves our necessary goals, the optimal solution for any given integration will be specific to that integration. Accordingly, the sane thing is for the processor to become building blocks. A key gen service. A UTXO log scheduler. An SC-based account scheduler. A relayer fee system. A deducted fee system. Etc.
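
To sketch what those building blocks could look like (trait and struct names are hypothetical, not a concrete proposal for the actual API), each bespoke service would compose only the blocks it needs:

```rust
/// Distributed key generation, shared by every integration.
trait KeyGen {
  type Key;
  fn generate(&mut self) -> Self::Key;
}

/// Scheduling for UTXO chains: tracks a log of outputs and plans transactions
/// (including branch/change handling).
trait UtxoScheduler {
  type Output;
  type PlannedTx;
  fn add_outputs(&mut self, outputs: Vec<Self::Output>);
  fn plan(&mut self, payments: Vec<(Vec<u8>, u64)>) -> Vec<Self::PlannedTx>;
}

/// Scheduling for smart-contract/account chains: no branch or change addresses,
/// just calls against a constant external address.
trait AccountScheduler {
  type Call;
  fn plan(&mut self, payments: Vec<(Vec<u8>, u64)>) -> Vec<Self::Call>;
}

/// Fee handling where Serai solely publishes a fair rate for a relayer.
trait RelayerFee {
  fn fair_fee_rate(&self) -> u64;
}

/// Fee handling where Serai deducts the fee from the outputs it creates.
trait DeductedFee {
  fn amortize(&self, payments: &mut [u64], fee: u64);
}

/// A bespoke Ethereum service composes key gen, an account scheduler, and a
/// relayer fee system; none of the UTXO or fee-deduction paths exist to review.
struct EthereumService<K: KeyGen, S: AccountScheduler, F: RelayerFee> {
  key_gen: K,
  scheduler: S,
  fees: F,
}

/// A bespoke Bitcoin service composes a different subset of the same blocks.
struct BitcoinService<K: KeyGen, S: UtxoScheduler, F: DeductedFee> {
  key_gen: K,
  scheduler: S,
  fees: F,
}
```

The generic pieces stay as single implementations, while each integration composes only the blocks that actually fit it.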

I won't claim these can't be re-composed back into a monolithic service at some point. I will claim I don't believe it's feasible to incrementally modularize the current processor into the solutions we want. We'd at least have to go back to the drawing board, explicitly declare optimal flows, then declare modules, then fit the processor to them. Alternatively, we can move to bespoke solutions (removing the requirement of maintaining a perfectly generic monolith) and, when we have the time, later recompose.

I say that while acknowledging we also can't redo the processor before mainnet. This will probably be kicked to #565, which is slated for after mainnet :/