revault / research


Coin creation (fanout) #5

Open darosior opened 3 years ago

darosior commented 3 years ago

The downside of going with on-the-fly creation of fee-bumping UTXOs is that it introduces even more uncertainty. Is the fee-bumping transaction going to confirm in the next block? Assuming it is, is the value of the created fee-bumping UTXO going to be enough for the new estimation? You basically need to double your CSV value and overpay every time (which is already what you'd do with bitcoind's fee estimation anyways).

The downside of going with a UTXO pool laid out in advance is that 1. we need to actually lay it out, and 2. it's going to deprecate. We need to lay it out in a way that we'll never have a UTXO that is part of two vaults' reserves. Therefore the value of each is somewhat small, and if we don't want to go for a bulk "bump once with all the reserve and forget" we need to split it into multiple UTXOs, thereby creating more fees to pay and accelerating their deprecation. The optimal value of each UTXO is also a moving (likely increasing) and bounded target.

The tradeoff, both in terms of security and complexity, is clearly better with a pool of UTXOs, although it's likely more expensive in most cases. Since all Cancel transactions have the same weight, all the UTXOs from each vault's pool are of the same value. This value can be computed at the beginning of operations out of the current[1] fee market. It can then evolve as UTXOs from the pool are consumed and re-created from a "refiller UTXO" that we peel. We could even consume the lower-value UTXOs first, assuming their value is not completely unreasonable with regard to the required value.

[1] or analyzed, or historic, or??? Cf reserve
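A minimal sketch of that initial computation (all names here are hypothetical, and the single reserve feerate input and the split count are assumptions for illustration):

```python
# Sketch: initial per-vault fee-bumping pool. Names are illustrative,
# not from any actual Revault codebase.

def initial_pool_values(cancel_vsize: int, reserve_feerate: float, n_coins: int) -> list:
    """Split the per-vault reserve (Cancel size * reserve feerate) into n
    equal-value coins: all Cancel txs have the same weight, so every coin
    in a vault's pool can share a single value."""
    total_reserve_sats = cancel_vsize * reserve_feerate
    return [total_reserve_sats / n_coins] * n_coins

# e.g. a 300-vbyte Cancel at a 100 sat/vbyte reserve feerate, split in 8:
# initial_pool_values(300, 100, 8) == [3750.0] * 8
```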

JSwambo commented 3 years ago

The WT will process its UTxO pool (the fee-bump coin pool) from re-fill txs initiated by its stakeholder. The WT will use the fee-reserve estimation and the criteria for a well-structured UTxO pool to split and collect UTxOs appropriately. It will aim to optimise the cost of this processing by transacting during low-fee periods.

Criteria for the fee-bump coin pool

Coins that are used when fee-bumping cannot be split with change outputs, and must be consumed entirely (since the main input is signed with the ANYONECANPAY|ALL flag). Thus, one fee-bump coin will cover the majority of costs, with smaller denominations covering the difference due to changing market conditions.

An inevitable consequence of operating in an uncertain environment is that the accuracy of fee-reserve estimations will decrease from the time of coin creation until the time the fee-reserve is needed during a fee-bumping process. Note that our fee-reserve estimation includes a buffer for fee-market volatility, and the likelihood of that buffer turning out to be insufficient is by design small. The intention is for each per-vault fee-reserve to be sufficient to enable a conservative 1-block-target fee offering during times when the network is highly congested.

The ability to re-calibrate the fee-bump coin pool distribution with current estimations will mitigate the risk of decreasingly accurate fee estimations. This re-calibration can occur at daily or weekly intervals through automatic transactions by the WT. Trade-off: the more re-calibrating is done, the greater the chance of accurate fee-bumping but the higher the cost of operating.
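As a rough sketch of this trade-off, a periodic trigger could look something like the following (the interval and tolerance are placeholder knobs, not decided policy):

```python
# Sketch: periodic re-calibration trigger for the fee-bump coin pool.
# RECALIBRATION_INTERVAL and the tolerance are illustrative placeholders.

RECALIBRATION_INTERVAL = 144  # blocks, roughly daily

def should_recalibrate(block_height: int, coin_value: float,
                       current_target: float, tolerance: float = 0.05) -> bool:
    """On a fixed interval, re-calibrate only if existing coin values have
    drifted too far from the current fee-reserve estimation."""
    on_interval = block_height % RECALIBRATION_INTERVAL == 0
    drifted = abs(coin_value - current_target) > tolerance * current_target
    return on_interval and drifted
```

A longer interval lowers the operating cost; a shorter one keeps the pool closer to current estimations.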

darosior commented 3 years ago

We just had a call about this. Jacob was thinking about a UTXO pool laid out with a large UTXO (that would be used to bring the Cancel transaction from its pre-signed feerate to the average market feerate) and very small [1] UTXOs that would be used to "bump between different values of estimatesmartfee"; in plain terms, a fine-grained approach. I was thinking of a bulkier one, in which all the UTXOs would be of the same value, and this value would be the threshold by which we feebump.

It came out of our discussion that a middle ground would be best. Using one "large" UTXO (whose value is the fees needed at the average feerate over the last month, minus the fees the transaction was pre-signed with) gains a lot in terms of transaction size, and can be a huge proportional win in case we are dealing with large setups, and hence large transaction sizes, in the future [2]. Using medium-size UTXOs, all of the same value, makes:

  1. Coin selection simpler (and the process/implementation easier to reason about)
  2. Deprecation slower (a larger-value UTXO becomes unreasonable to use for feebumping later than a lower-value one does)
  3. The pool able to be "healed" by coin selection. With larger UTXOs, there is a higher probability that we can use, to fee-bump a Cancel transaction, smaller UTXOs created by former fee-bumping transactions at a lower feerate, than there would be if these UTXOs were really small (in which case they would basically sit there forever and always need a consolidation transaction).

[1] Lower-bounded, of course, by the value needed to spend them (something like 60 virtual bytes i guess, as it'd be a P2WPKH) + the incremental relay fee (which at the moment is 1 sat times the virtual size of the new Cancel tx). Note how we don't care about the absolute fee rule, as we are always increasing the size of the transaction anyways.

[2] Note we are currently bottlenecked by Noise's 70k bytes message limit, which is way below Bitcoin's 100k-vbyte standard transaction limit.
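To make footnote [1] concrete, a hedged sketch of that lower bound (using the ~60-vbyte spend size and 1 sat/vbyte incremental relay fee guessed above):

```python
# Sketch of footnote [1]: the smallest rebump coin value still worth
# spending. Figures are the rough guesses from the footnote.

P2WPKH_INPUT_VSIZE = 60    # vbytes, approximate cost of spending the coin
INCREMENTAL_RELAY_FEE = 1  # sat/vbyte

def min_rebump_value(new_cancel_vsize: int, feerate: float) -> float:
    """The coin must at least pay for its own spending at the current
    feerate, plus the incremental relay fee over the new (bumped) tx."""
    spend_cost = P2WPKH_INPUT_VSIZE * feerate
    rbf_cost = INCREMENTAL_RELAY_FEE * new_cancel_vsize
    return spend_cost + rbf_cost
```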

darosior commented 3 years ago

1. Feebump coin creation procedure

So we could go with something along these lines:

* Let `A` be the average "next block feerate" over the last 30 days.
* Let `S` be the size of a fully-signed Cancel transaction.
* Let `P` be the feerate at which the Cancel transaction is pre-signed.
* Let `R` be the reserve feerate for a single vault (note that `R > A`).

The total amount affected to each vault is `R * S`. This value is divided into the "main" fee-bump output and 7 [1] "rebump" UTXOs. Let `Vm` be the value of the main UTXO and `Vb` the value of a rebump UTXO. Let `O` be the overhead for spending a feebump input; assuming we are only using P2WPKH this is always 272 weight units. So:

* `Vm = R*S - A*S + O`
* `Vb = (R*S - Vm) / 7` -- Should `Vb` be `< O * 3` (current dust for the Bitcoin network) or more broadly `< O * R` (dust in the sense that it wouldn't bump the feerate in the worst-case scenario), we should create fewer rebump UTXOs. Similarly, its addition to the Cancel transaction must allow for a direct RBF of it, so it must get above the threshold of paying for its own bandwidth, as per BIP125. As bumping by 1sat/vbyte is not very useful we apply a factor of 10. So we end up with:
  > If `Vb < O * R + S * 10`, create fewer than 7 rebump UTXOs

This creates a pool per vault allowing us to fee-bump once to get the Cancel above the current market feerate in most cases (cross-linking with #3, even more probable with an opportunistic cheaper target in the first place), and then to re-bump by thresholds of at least 10sat/vbyte each time.

[1] Arbitrarily chosen, but we may want to lower it if adding 10 inputs to the Cancel could get it above the 100k-vbyte standardness limit.

2. Deducing the value to send to the watchtower wallet out of this

Soon :tm:. Unfortunately it seems we'll need communication between the watchtower and the stakeholder's wallet.

JSwambo commented 3 years ago

> The total amount affected to each vault is `R * S`

nit: I think *allocated* is better than *affected*

> * `Vm = R*S - A*S + O`

I think it should be `Vm = A*S + O`? This way the main UTxO pays just over the average. Also, then we still have `Vm + 7Vb = R*S`.

> * `Vb = (R*S - Vm) / 7` -- Should `Vb` be `< O * 3` (current dust for the Bitcoin network) or more broadly `< O * R` (dust in the sense that it wouldn't bump the feerate in the worst-case scenario), we should create fewer rebump UTXOs. Similarly, its addition to the Cancel transaction must allow for a direct RBF of it, so it must get above the threshold of paying for its own bandwidth, as per BIP125. As bumping by 1sat/vbyte is not very useful we apply a factor of 10. So we end up with:
> > If `Vb < O * R + S * 10`, create fewer than 7 rebump UTXOs

I'll try to paraphrase you to check I understand: "if the value of the rebump coins would be less than the dust amount, reduce the number of rebump coins such that Vb is larger and not less than the dust amount". What does `O*R` represent?

darosior commented 3 years ago

> I think it should be `Vm = A*S + O`? This way the main UTxO pays just over the average. Also, then we still have `Vm + 7Vb = R*S`

Yes, brainfart, thanks :)

> if the value of the rebump coins would be less than the dust amount, reduce the number of rebump coins such that Vb is larger and not less than the dust amount

Yes, sorry if it wasn't clear it was more of a "thoughts path" than a summary on this one :)

> What does `O*R` represent?

`O` is the size (as described above) of a feebump input. `R` is the worst-case feerate we are covering against. `O*R` is the fee you are going to pay to include this UTXO as an input of a worst-case-feerate Cancel transaction.
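For illustration: O = 272 WU = 68 vbytes, so with a worst-case feerate R of, say, 250 sat/vbyte, O*R = 68 * 250 = 17,000 sats just to pay for including the input itself.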

JSwambo commented 3 years ago

Note that Vm and Vb are dynamic values, since they are based on the average of estimates over, say, the previous 30 days. As the fee-market changes our coin creation will create coins of different absolute amounts. A vault might be allocated a fee-reserve of

`fee_reserve = [Vm(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0)]`

but, as market conditions change, what is considered a 'sufficient fee-reserve' also changes. If fee_reserve becomes insufficient, and a new feebump coin needs to be allocated, we'll end up with different values in the fee_reserve, e.g.

`fee_reserve = [Vm(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=0), Vb(t=10), Vb(t=24)]`

If we operate under the hypothesis that the fee-market will gradually increase over time, then a coin-selection algo #6 can clean up smaller feebump coins by selecting them for use first. Our other mechanism for clean-up is to use our Feebump Tx for both coin creation and some necessary coin clean-up, which we expect to occur approximately with each wallet re-fill.

How do we determine when it is necessary/effective to clean up a bunch of small feebump coins? If operations are running smoothly, no Cancels are broadcast and the fee-market has increased, then at some point the fee_reserve allocated to a vault will fall below the required amount r(v_i,t). We may allocate additional feebump coins to this vault until the Cancel Tx size limit is reached. At that point, it becomes necessary to clean up small feebump coins.
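A minimal sketch of that decision (hypothetical names; `required` stands in for r(v_i,t), and `max_inputs` for the Cancel Tx size limit):

```python
# Sketch: per-vault top-up vs clean-up decision. Names are illustrative.

def maintain_fee_reserve(fee_reserve: list, required: float,
                         max_inputs: int) -> str:
    """Allocate an extra feebump coin when the reserve falls short; once
    the Cancel tx can't take more inputs, clean up small coins instead."""
    if sum(fee_reserve) >= required:
        return "sufficient"
    if len(fee_reserve) < max_inputs:
        return "allocate_additional_coin"
    return "cleanup_small_coins"
```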

darosior commented 3 years ago

Also, we definitely need some breathing room on top of the reserve. Otherwise a small market movement with no vault spent beforehand will put all our vaults below the reserve... As mentioned last time during the call, i think we could have an additional rebump UTXO for each vault.

As to when to clean up too-small feebump coins, we could have a basic interval (every 1000 blocks, clean up), eventually augmented with heuristics from https://github.com/revault/watchtower_paper/issues/9 . There is an obvious tension here between the implementation and the paper: we need to settle on a basic approach for the first implementation, but still discuss some others in the paper.

JSwambo commented 3 years ago

> Also, we definitely need some breathing room on top of the reserve. Otherwise a small market movement with no vault spent beforehand will put all our vaults below the reserve.

I think that the fee-reserve per-vault r(v_i,t) is by definition the breathing room for the moving fee-market. We can define a tolerance range, say, 5% below the fee-reserve, where the stakeholder is warned that the WT needs to be re-filled. When re-filling, the stakeholder pays the updated fee-reserve amount r(v_i,t=later). The actual breathing room is much larger than this tolerance range, assuming our strategy is very conservative (e.g. MAX95Q #10).

We can also have stakeholders pay excess with re-fills, if they expect to be delegating more vaults soon after. But that's a separate matter to our definition of the fee-reserve per-vault and our strategy for a good buffer size.

darosior commented 3 years ago

Yes, i know, but it's not practical. Say they set up their reserve for a 250sat/vbyte MAX95Q; then the next day the MAX95Q is 251sat/vbyte and they get an alert "hey, your watchtower fee-bumping reserve is low, wtf". Actually it's more of a concern if the reserve is based on an average (and i think it should not be), so yes. A tolerance would work too, at the expense of our assumptions :/

> We can also have stakeholders pay excess with re-fills, if they expect to be delegating more vaults soon after.

Yes, i think having a buffer is sensible and is basically what i meant (should the MAX95Q go up, you take from the buffer before annoying the user).

darosior commented 3 years ago

So @JSwambo's experimentations with this algorithm on historical feerate data demonstrated that in some cases Vb > Vm. This is because Vm is rather opportunistic and used to bump the transaction to the fee-market average, whereas Vb aims to cover the difference between the opportunistic bump and the worst-case fee reserve. Therefore, when the average next-block feerate has been low for some time, the difference between the reserve (which is perpetually increasing) and current estimates is so large that Vb can be bigger than Vm. I don't think this is an issue.

JSwambo commented 3 years ago

> Let O be the overhead, the fee needed to pay for spending it. Assuming we are only using P2WPKH this is always 272 Weight Units.

If `O` is in weight units (i.e. a size, not a fee), then `Vm = A*S + O` should be `Vm = A*(S+O)`.

Let the number of Vb coins per Vm coin be N, so that Vb = (R*S - Vm)/N. Then you suggested: should Vb < O*R + S*10, reduce N and recompute Vb. However, even for R = max(cumulative maximum of the 95th quantile over the last 90 days, Vm) (an extremely conservative value), at early times occasionally R == Vm, and thus Vb == 0 regardless of the value of N. Moreover, as R increases (it's a cumulative max, so it inevitably will), the likelihood of Vb < O*R + S*10 is quite high, and through some testing I found that to be the case even for N = 1. So I think we need to set a lower bound for Vb and not bother with the complexity of adjusting N.

I think `Vb >= O*R + S*10` is a reasonable lower bound. So the formula would be

`Vb = max((R*S - Vm)/N, O*R + S*10)`

The issue I see with that is that WTs might require more capital than the fee_reserve_per_vault to create a coin distribution of 1:N Vm to Vb coins. We need to keep in mind that the Stakeholder should be able to compute an appropriate re-fill amount without communicating with their WT. Here are some results from that simulation:

[figure: CoinSizePlot]
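A minimal sketch of the formulas as they stand at this point (Vm with the A*(S+O) correction, Vb with the proposed lower bound; units are assumed normalized here, with O = 272 WU = 68 vbytes):

```python
# Sketch of the coin-creation formulas discussed in this thread, with
# units normalized to vbytes and sat/vbyte. Values returned in sats.

def coin_values(A: float, R: float, S: int, O: int = 68, N: int = 7):
    """Vm bumps the Cancel to the market average; the N Vb coins cover
    the gap up to the reserve feerate R, floored so each one remains a
    useful (>= 10 sat/vbyte) bump."""
    Vm = A * (S + O)
    Vb = max((R * S - Vm) / N, O * R + S * 10)
    return Vm, Vb

# When the lower bound kicks in, Vm + N*Vb > R*S: the extra-capital
# issue raised above.
```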

darosior commented 3 years ago

> should be `Vm = A*(S+O)`.

Right.

> for early times, occasionally R == Vm

As discussed many times already, this cannot happen in the early times as we'd have a hardcoded max. However, it can happen that Vm gets close to R on new feerate spikes, but then we should obviously have a better update of R, with breathing room large enough that we can at least create one Vb coin.

> So I think we need to set a lower bound for Vb and not bother with the complexity of adjusting N.

This was already proposed in my post you are quoting. https://github.com/revault/watchtower_paper/issues/5#issuecomment-839863107

> `Vb = max((R*S-Vm)/N, O*R + S*10)`

Looks reasonable to me, but we need to update R too.


So i think the gist is that we need to have larger increases of R when it's updated (as in it can only increase by at least Vb) instead of just setting it to the current Vm.