Simplifying liquidation auctions #53

Open purcell opened 3 years ago

purcell commented 3 years ago

Right now the liquidation auction process is very complex, and results in a lot of generated Michelson, largely because of the AVL implementation. The complexity is considered a risk in terms of lurking bugs, as well as contract size and operational cost on-chain. In this issue we outline a different approach to structuring the liquidation auctions which allows for a simpler implementation. Feedback on unforeseen market effects is requested.

Limited size of liquidated collateral chunks

When the liquidation of a burrow is requested, cap the amount liquidated at a fixed, fairly large amount, e.g. 1000xtz. This implies that multiple liquidation requests might be necessary to bring a large burrow back into an adequately collateralised state, and also that multiple 1xtz liquidation rewards might be paid out in the process. The goal of this change is to remove the need to split liquidated collateral chunks across auctions.
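
As a minimal sketch of the cap (plain OCaml, with made-up names like `max_chunk_mutez` rather than checker's real types):

```ocaml
(* Illustrative only: cap the collateral taken per liquidation request
   at a fixed chunk size (e.g. 1000 tez, expressed here in mutez as a
   plain int).  Callers repeat the request until the burrow is
   adequately collateralised again, earning the per-request reward
   each time. *)
let max_chunk_mutez = 1_000_000_000  (* 1000 tez *)

let chunk_to_liquidate ~needed_mutez =
  min needed_mutez max_chunk_mutez
```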

More relaxed batching into lots

As collateral chunks are queued for auction, they will simply be assigned to the next queued auction lot. If the lot is already larger than a certain threshold, e.g. 10,000xtz, then a fresh lot will be created in the queue. (Due to the limited chunk size above, the lots will generally still remain around 10,000xtz.) The goal here is to avoid splitting/re-splitting chunks into lots at a later time. An assumption is that the primary purpose of the original batching mechanism was to avoid auctioning small amounts individually, potentially leading to the system becoming overwhelmed.
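
A rough sketch of the batching rule, assuming a hypothetical `lot` record and a 10,000xtz threshold (none of this is checker's actual representation):

```ocaml
(* Append a chunk to the lot at the back of the queue; open a fresh lot
   only once the newest lot has already reached the threshold.  With
   chunks capped at 1000 tez, lots generally stay close to the threshold. *)
let lot_threshold_mutez = 10_000_000_000  (* 10,000 tez *)

type lot = { total_mutez : int; chunk_sizes : int list }

let enqueue_chunk queue chunk_mutez =
  match List.rev queue with
  | newest :: older when newest.total_mutez < lot_threshold_mutez ->
    let newest' =
      { total_mutez = newest.total_mutez + chunk_mutez;
        chunk_sizes = chunk_mutez :: newest.chunk_sizes }
    in
    List.rev (newest' :: older)
  | _ ->
    (* Newest lot is full (or the queue is empty): start a fresh lot. *)
    queue @ [ { total_mutez = chunk_mutez; chunk_sizes = [ chunk_mutez ] } ]
```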

Limit auction depth by limiting queued lots

When too many ~10,000xtz lots are queued, refuse to allow further liquidations until lots have been cleared. (There is already a simple limit based on AVL tree height, at which point further liquidation of burrows is blocked.) Setting such a limit to a fairly low number allows us to use a regular Michelson list to store lots and then iterate over them when we need to, which simplifies the implementation.
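
A sketch of the depth limit, with an illustrative cap (not a constant taken from checker):

```ocaml
(* With a small cap on the number of queued ~10,000 tez lots, the queue
   can live in a plain list that is iterated directly, and liquidation
   requests are simply rejected once the cap is reached. *)
let max_queued_lots = 10  (* illustrative value *)

let can_accept_liquidation queue =
  List.length queue < max_queued_lots
```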

Ability to allow cancellation via simple removal from queued lots

Cancellation of chunks will not be allowed when a chunk is in a currently-open auction, but if a chunk is queued in a lot for a future auction, then cancellation could be permitted, via simple removal. This would mean that in the presence of many cancellations, upcoming lots could potentially shrink well below 10,000xtz. An auction could potentially combine multiple queued lots if they have shrunk significantly, but it would be preferable to avoid this complication. (We assume that lots would not become so small that their size would skew the auction price.)
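
A sketch of cancellation-by-removal over an illustrative lot record (again, not checker's real types):

```ocaml
(* A chunk queued in a not-yet-open lot can be cancelled by filtering it
   out by id and shrinking the lot's total accordingly. *)
type chunk = { id : int; mutez : int }
type lot = { total_mutez : int; chunks : chunk list }

let cancel_chunk lot chunk_id =
  match List.partition (fun c -> c.id = chunk_id) lot.chunks with
  | [ c ], remaining ->
    Some { total_mutez = lot.total_mutez - c.mutez; chunks = remaining }
  | _ -> None  (* unknown id, or the chunk is not in this queued lot *)
```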

Reconciling burrow state

Currently burrows track how much collateral is at auction, so that the expected sale can be used to determine whether the burrow can be subjected to further liquidation. This means burrow state needs to be reconciled with auction state after an auction finishes.

In the existing design, "slices"/"chunks" of liquidation are "touched" to reconcile them against their origin burrows and to transfer the won collateral to the winner. This "touching" happens via a checker entrypoint (which is a very low-level operation based on off-chain analysis) or by checker itself routinely touching some of the oldest liquidation slices.

This means a new storage layout must allow efficient iteration of chunks within an auctioned lot and lookup/modification/deletion using IDs stored in burrows. We have some thoughts about (relatively simple) data structures for this, which we will note here in due course.
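
One possible shape, purely illustrative and not necessarily the structure alluded to above:

```ocaml
(* Keep a lot's chunks in a map keyed by the id recorded in the
   originating burrow: the map can be iterated to pay out a finished
   lot, and supports lookup/modification/deletion by id for
   reconciliation and cancellation. *)
module ChunkMap = Map.Make (Int)

type chunk = { burrow_id : int; mutez : int }  (* illustrative fields *)
type lot = { total_mutez : int; chunks : chunk ChunkMap.t }

let remove_chunk lot id =
  match ChunkMap.find_opt id lot.chunks with
  | None -> lot
  | Some c ->
    { total_mutez = lot.total_mutez - c.mutez;
      chunks = ChunkMap.remove id lot.chunks }
```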

murbard commented 3 years ago

Finally, there is an alternative tree design to the AVL that I suggested which should be lighter-weight in terms of implementation; this might be worth considering? In this design, your queue is a forest. When a tree at the back of the forest has the same height as the one next to it, you merge them, and recursively do so, like propagating a carry. It's a lot easier to implement than AVLs and thus potentially a lot lighter, though it does keep all of the logic of following pointers.
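
For reference, a toy sketch of that forest/carry idea, with illustrative types that do not reflect checker's storage:

```ocaml
(* The queue is a forest kept back-to-front.  New entries join as
   height-0 trees; whenever the tree at the back has the same height as
   its neighbour, the two are merged, and the merge cascades like a
   carry in binary addition. *)
type 'a tree =
  | Leaf of 'a
  | Node of int * 'a tree * 'a tree  (* height, left, right *)

let height = function Leaf _ -> 0 | Node (h, _, _) -> h

let rec carry = function
  | a :: b :: rest when height a = height b ->
    carry (Node (height a + 1, b, a) :: rest)
  | forest -> forest

let push forest x = carry (Leaf x :: forest)
```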

purcell commented 3 years ago
* Limited size of liquidated collateral chunks: this seems innocuous. Note, however, that some chunks will be smaller than that (if less than 1000 tez are to be liquidated)

Yes, definitely. Then they'd be batched up to make lots.

* More relaxed batching into lots: this seems relatively innocuous as well; a lot could be between 10,000 and 20,000 tez, for instance. This would let us deal with cancellations in a list. If two consecutive cells are less than 10,000 each, merge them. That way, the worst-case scenario looks like (big cell) - (empty cell) - (big cell) - (empty cell), which slows down liquidation by a factor of 2 at most. The risk of this attack is limited by the fact that you can't force a burrow into liquidation; you can only get it to f+ and hope that it drops to f-.

Yes, that was our thinking.
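
A sketch of that merge pass over a list of lot totals (in mutez, illustrative only):

```ocaml
(* Collapse any two consecutive cells that are both under the threshold,
   so cancellations can leave at most every other cell small/empty. *)
let lot_threshold_mutez = 10_000_000_000  (* 10,000 tez *)

let rec merge_small = function
  | a :: b :: rest when a < lot_threshold_mutez && b < lot_threshold_mutez ->
    merge_small ((a + b) :: rest)
  | a :: rest -> a :: merge_small rest
  | [] -> []
```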

* Limit auction depth by limiting queued lots: a Michelson list has to be deserialized entirely, so its length is going to be pretty limited, but maybe that's OK. However, coupled with the ability to cancel liquidations, this creates a DoS attack where burrows are routinely liquidated and cancelled to fill the queue with empty lots, which prevents real liquidations from happening. The suggestion above, to merge cells with adjacent cells, can mitigate this. It might also be possible to remove the ability to cancel.

There might only be, say, 10 lots, each containing a couple of nats and a tez total, in which case it would probably be fine. You'd have to cancel all the liquidations in a lot to empty the lot out, but yes, that could happen if you could effectively fill a lot with a single burrow.

Fine with disallowing cancellations, tbh; the notes above were about how we might still be able to support them.

* Ability to cancel via simple removal: it's not that simple, as discussed above. You are also forgetting that when the total volume in the auction queue gets very large, we increase the size of the auction batch to a fixed percentage of the queue. We do accelerate to catch up when needed.

Yes, I think slices within a lot would not be stored in an O(n) structure.

* Reconciling burrow state: in general, I'm thinking that moving the liquidation to a separate contract (not checker) could help a great deal. That contract would still take its instructions from checker for things like cancellations, but for the rest it could be pretty autonomous.

It's an interesting possibility.

Finally, there is an alternative tree design to the AVL that I suggested which should be lighter-weight in terms of implementation; this might be worth considering? In this design, your queue is a forest. When a tree at the back of the forest has the same height as the one next to it, you merge them, and recursively do so, like propagating a carry. It's a lot easier to implement than AVLs and thus potentially a lot lighter, though it does keep all of the logic of following pointers.

We'll take a look at this because it's a cool design, but I'm not confident that using it as a straight AVL replacement with all the other functionality intact would be a size win: the ability to simplify a lot of the code around auctions is a significant benefit. Ideally we'd find a design that minimises pointer fiddling.