halseth opened this issue 1 year ago
Note: I initially tried to commit to the remaining HTLCs in the taproot tree itself, but found it very hard to verify this from the script, since it required (AFAICT) re-deriving and inspecting the whole taproot tree.
Effectively, I think you should be able to leverage today Taproot tree capabilities to gather the HTLC-claim in a single taproot leaf, and all the remaining timeout in segregated leaves with something-like: https://gist.github.com/harding/a30864d0315a0cebd7de3732f5bd88f0 However, I believe you would need some compiler, further advanced than what Miniscript is presenting today.
I think another covenant direction is the primitives of https://coinpool.dev/v0.1.pdf, as what you're describing is an on-chain payment pool with only two parties (without eltoo update transactions at all), where the withdraw transactions are instead unrolling second-stage HTLC-transactions. Ideally, you still would like to aggregate the HTLC-claims witness in a single one and I believe you will end up on the observations of 7.4.
@ariard Thanks for reminding me about coin pools, you're right there's a great deal of overlap between the two use cases!
However, where these two situations differ is that in the LN force-close scenario, the second channel party is (assumed to be) offline. The online participant must be able to claim its HTLCs regardless, preferably with a single on-chain transaction.
I think that a generalized OP_MERKLESUB could work for the HTLC scenario: you encode every HTLC claim in its own tapleaf, then spend multiple leaves at the same time by presenting the preimages. The resulting output must be the same taproot tree with those leaves removed (in contrast to OP_MERKLESUB, where only a single leaf is spent and removed).
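To make the "same tree with leaves removed" check concrete, here is a toy Python sketch. This is not the actual OP_MERKLESUB semantics: tap_branch and merkle_root are illustrative helpers that mimic taproot's sorted-pair branch hashing but omit the tagged-hash prefixes and leaf versioning.

```python
import hashlib

def tap_branch(a: bytes, b: bytes) -> bytes:
    # Taproot-style branch: combine child hashes in lexicographic order.
    lo, hi = sorted([a, b])
    return hashlib.sha256(lo + hi).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes into a single root (odd leaf promoted)."""
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(tap_branch(level[i], level[i + 1]))
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Four hypothetical HTLC-claim leaves.
leaves = [hashlib.sha256(f"htlc-claim-{i}".encode()).digest() for i in range(4)]
root_before = merkle_root(leaves)

# "Spend" leaves 0 and 2: a generalized OP_MERKLESUB would require the
# new output to commit to the root of exactly the remaining leaves.
remaining = [leaves[1], leaves[3]]
root_after = merkle_root(remaining)
assert root_after != root_before
```

The covenant check then reduces to recomputing root_after from the revealed leaves and comparing it against the output's commitment.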
After seeing @bigspider's latest post on the ML, I wanted to try to write up this proposal using the opcodes originally proposed for MATT (with the addition of fetching the input taproot). To me it seems that having the possibility to commit to and inspect arbitrary data in the pubkey is an extremely powerful idea!
High level idea is to use OP_CHECKOUTPUTCOVENANTVERIFY to commit to the HTLC amounts and hashes, such that a spending transaction must commit to the amounts of the HTLCs not yet claimed (we set the amounts of the claimed HTLCs to 0, removing the incentive to claim them again, while we can reuse the same script tree).
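As a toy illustration of that commitment update (placeholder hashes and amounts, not the real on-chain encoding):

```python
# Two in-flight HTLCs committed on the single aggregated output.
htlcs = {"hash1": 1000, "hash2": 2500}
output_value = sum(htlcs.values())      # 3500 sats on the HTLC output

# Claiming htlc1 zeroes its committed amount: the script tree is reused,
# so the slot remains, but re-claiming it would yield nothing.
htlcs["hash1"] = 0
new_output_value = output_value - 1000  # 2500 left for the remaining HTLCs
assert new_output_value == sum(htlcs.values())
```

The spending transaction must commit to the updated amounts {hash1: 0, hash2: 2500} and carry the reduced output value; the claimer keeps the difference.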
A rough script outline is below. The alt stack is used to keep track of three values during the execution:
# Initialization
0 OP_TOALTSTACK # push claimed value to alt stack
0 OP_TOALTSTACK # push remaining hashed-value accumulator to alt stack
0 OP_TOALTSTACK # push previous hashed-value accumulator to alt stack
# Repeated n times for each <amt> and <preimage>
######### Begin Repeated Section #########
OP_DUP # duplicate amount
OP_FROMALTSTACK # get previous hashed-value accumulator from alt stack
OP_SHA256CAT # add the <amt> value to the previous hashed-value accumulator
OP_SWAP OP_ROT # move <amt> to top, then <preimage> to top of stack
OP_HASH160 h_i OP_EQUAL # check if top stack element is preimage to hash_i
OP_IF # if correct preimage was presented
OP_FROMALTSTACK # get remaining hashed-value accumulator from alt stack
OP_SWAP # swap value and accumulator on stack
OP_FROMALTSTACK # get claimed accumulated value from alt stack
OP_ADD # add HTLC amount
OP_TOALTSTACK # put value accumulator back on alt stack
0 # put zero value on stack
OP_SWAP # swap 0 and accumulator on stack
OP_SHA256CAT # add the zero value to the remaining hashed-value accumulator
OP_TOALTSTACK # put remaining hashed-value accumulator back on alt stack
OP_ELSE # not preimage, ignore
OP_FROMALTSTACK # get remaining hashed-value accumulator from alt stack
OP_SHA256CAT # add <amt> to remaining hashed-value accumulator
OP_TOALTSTACK # put remaining hashed-value accumulator back on alt stack
OP_ENDIF
OP_TOALTSTACK # put previous hashed-value accumulator back on alt stack
######### End Repeated Section ##########
# we verify the input
OP_FROMALTSTACK # get previous hashed-value accumulator from alt stack
32 <X> # push pub key to stack
OP_CHECKINPUTCOVENANTVERIFY # verify that the amounts [a_1, ..., a_n] were provided correctly in the witness
# On the alt stack we now have the new committed amounts, and claimed amount.
# verify the output key.
OP_PUSH_TAPROOT # push taproot of current input to stack. This enforces same script on new output as input.
OP_FROMALTSTACK # get remaining hashed-value accumulator from alt stack
32 <X> # push pub key to stack
0 # push output index 0
OP_CHECKOUTPUTCOVENANTVERIFY # remaining hash-value accumulator must be tweak of output key
# verify output amount
OP_FROMALTSTACK # get claimed value
OP_INSPECTINPUTVALUE # get input amount
OP_SWAP # reorder operands (assuming OP_SUB64 computes second-from-top minus top)
OP_SUB64 # input amt - claimed amount
OP_DUP # duplicate output amount
<dust limit> OP_GREATERTHAN # only create output if greater than predetermined dust limit
OP_IF # if output amount is greater than dustlimit
0 OP_INSPECTOUTPUTVALUE # push value of output at index to stack
OP_EQUALVERIFY # check that outputamt == input-claimed
OP_ENDIF
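The accumulator walk in the repeated section can be sanity-checked with a small Python model. This is a toy: SHA256 stands in for the script's HASH160 payment-hash check, the byte encodings and initial values are arbitrary choices, and run_htlc_claim is an invented helper, not an opcode.

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def sha256cat(acc: bytes, item: bytes) -> bytes:
    # Mirrors the hypothetical OP_SHA256CAT: hash the concatenation.
    return sha256(acc + item)

def run_htlc_claim(htlcs, witnesses):
    """htlcs: list of (payment_hash, amount); witnesses: one item per HTLC
    (a valid preimage to claim it, anything else to skip it).
    Returns (prev_acc, rem_acc, claimed) as the script's alt stack would."""
    claimed = 0
    prev_acc = b"\x00"  # stand-in for the script's initial 0
    rem_acc = b"\x00"
    for (pay_hash, amt), wit in zip(htlcs, witnesses):
        amt_bytes = amt.to_bytes(8, "little")
        prev_acc = sha256cat(prev_acc, amt_bytes)   # commit the provided amount
        if sha256(wit) == pay_hash:                 # preimage check
            claimed += amt
            # claimed slot is zeroed in the new commitment
            rem_acc = sha256cat(rem_acc, (0).to_bytes(8, "little"))
        else:
            rem_acc = sha256cat(rem_acc, amt_bytes) # unclaimed amount carried over
    return prev_acc, rem_acc, claimed

pre0, pre1 = b"p0", b"p1"
htlcs = [(sha256(pre0), 1000), (sha256(pre1), 2000)]
# Claim only the first HTLC; present junk for the second.
prev_acc, rem_acc, claimed = run_htlc_claim(htlcs, [pre0, b"junk"])
assert claimed == 1000
```

prev_acc plays the role of the value checked by OP_CHECKINPUTCOVENANTVERIFY, rem_acc the value enforced on the output by OP_CHECKOUTPUTCOVENANTVERIFY.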
Obviously there are a few things missing here for it to be complete: timelocks for received HTLCs, a revocation key path, a spender signature check, and a timeout script path (and probably more), but I think the idea is clear.
The big advantage of this over the current HTLC output format is commitment size: regardless of how many HTLCs are in flight, there is only a single HTLC output on the commitment transaction. This could potentially eliminate the HTLC slot limit, making jamming much more costly.
Any thoughts or suggestions are welcome!
Yes, so I’ve been browsing @bigspider's latest post on the ML, though I still have to inspect the implementation on bitcoin-inquisition.
If the claim is correct that CICV/COCV generalize both to arbitrary scripts (taptrees) and state machines, and that dynamic, witness-dependent embedded data can be inspected on the stack, I can see how your HTLC output set could be formalized as a hash accumulator where HTLC outputs are added to and removed from the set.
For the timelocks, I think you can decompose the embedded data between the hash accumulator and a sum of the nLocktime field? Note there is already OP_CHECKLOCKTIMEVERIFY to inspect the nLocktime field. And for the revocation key path, it doesn't seem an issue anymore after ANYPREVOUT, as this should be settled on the previous update transaction rather than the settlement one (at least if you follow the coinpool paper's dissociation between the update transaction settling the correct state and the withdrawal transaction settling the correct taproot tree contract state).
Additionally, I wonder if the lack of checks that the removed HTLC outputs are sent to the correct HTLC scriptPubkeys/amounts could suffer from adversarial malleability?
For the advantages, I definitely agree we could have some order-of-magnitude improvements in terms of HTLC throughput, where a channel could have far more than 483 outgoing HTLCs, though I think you would have a trade-off in terms of worst-case fee-bumping reserves if you have to unroll the whole tree?
> For the advantages, I definitely agree we could have some order of magnitude improvements in terms of HTLC throughput where a channel could have far more than 483 outgoing HTLCs though I think you would have a trade-off in term of fee-bumping reserves worst-case scenario if you have to unroll the whole tree ?
Yes, with the current script outline that's correct: I would pay O(n) in fees even though I only wanted to claim m of the n HTLCs. I have some ideas on how to get around this, so you would pay only O(m) in fees. That way you could choose to go on-chain and claim only the HTLCs that make sense economically.
An update on this: I've added OP_CHECK[INPUT/OUTPUT]CONTRACT to btcd and added a demo of a script using these in Tapsim: https://github.com/halseth/tapsim/tree/matt-demo/examples/matt (it's only a POC and is still missing the things mentioned in a previous comment).
This is an actual working script: it enforces the reveal of valid preimages and an output that pays to the same taptree with the remaining payment hashes committed.
The big difference here is that the HTLC set is committed as a merkle root, and it is easy to see that claiming m-of-n HTLCs will lead to an on-chain footprint of O(m * log n).
This is pretty interesting I think, since it can let a node "choose" during a force close what the on-chain footprint should be. If fees are high, the node can just not claim the uneconomical HTLCs (with the assumption that it would also be uneconomical to spend the timeout path). This is in contrast to the current commitment format, where the force-close footprint will always be O(n) (one output for each HTLC).
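Under the simplifying assumption that each claim reveals its own merkle path (shared internal nodes make the real cost lower), the bound can be sketched as:

```python
import math

def worst_case_footprint(m: int, n: int) -> int:
    """Upper bound on revealed branch hashes when claiming m of n HTLCs
    committed in a balanced merkle tree: one path of ceil(log2(n))
    hashes per claim. Illustrative helper, not part of the proposal."""
    return m * math.ceil(math.log2(n))

# Claiming 3 of 1024 committed HTLCs reveals at most 30 branch hashes,
# versus 1024 separate outputs under the current one-output-per-HTLC format.
assert worst_case_footprint(3, 1024) == 30
```
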
One interesting area of research from here IMO is how this could let us increase (or remove) the 483 HTLC slot limit, potentially making it much harder for an attacker to jam a channel.
I’ve been working on a draft proposal for aggregating Lightning HTLCs in a single output on the commitment transaction. The idea is to avoid the commitment getting large when forwarding a lot of HTLCs, and making it cheaper to claim them after a force close.
To make this work I need some sort of covenant, and I’m wondering if someone has any opinion on which proposal would be best to pursue.
Let’s say the output script and amount encoding the active HTLCs is of the form [hash1 amt1 … hashn amtn] <script>:sum, where the <script> part can be static as long as you know the number n.

Simple example
Now we want to claim htlc1 on chain, and present the preimage for hash1. We must enforce that the new output script is of the form [hash2 amt2 … hashn amtn] <script>:sum-amt1. (Note that htlc1 and its amount have been removed; everything else stays the same. Since we presented a valid preimage, we can do whatever we like with the subtracted amount.)

Requirements
First, we need a check for the output amount. This has been discussed in other settings, and we most likely also need a way to do 64 bit math in the script.
Next, we need a way to verify that the new taproot output is based on the remaining hashes. One way to do this is to tweak the taproot internal key with all the hashes and amounts, and that way commit to them. This way we could keep this “dynamic commitment data” in the internal key, while the actual taproot tree can be mostly static (it just commits to the static script). This would need some way of inspecting the key (or verifying arbitrary data against it) when spending.
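A hash-based stand-in for that "commit dynamic data in the key" idea: this models the EC tweak as a plain hash (so it shows only the verify direction, not real point arithmetic), and the helper names are invented for illustration.

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340-style tagged hash: sha256(sha256(tag) || sha256(tag) || msg).
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def commit_to_key(naked_key: bytes, data: bytes) -> bytes:
    # Stand-in for tweaking the internal key with a commitment to `data`
    # (the real construction would be X + H(X || sha256(data)) * G).
    return tagged_hash("sketch/covenant", naked_key + hashlib.sha256(data).digest())

def covenant_verify(tweaked: bytes, naked_key: bytes, data: bytes) -> bool:
    # What an OP_CHECK[INPUT/OUTPUT]COVENANTVERIFY-style check would do:
    # recompute the commitment from the witness data and compare.
    return tweaked == commit_to_key(naked_key, data)

X = b"\x02" * 33                    # placeholder naked internal key
data = b"hash1|1000|hash2|2500"     # placeholder HTLC hashes and amounts
P = commit_to_key(X, data)

assert covenant_verify(P, X, data)                       # honest witness passes
assert not covenant_verify(P, X, b"hash1|0|hash2|2500")  # tampered data fails
```

The taproot tree stays static while this "dynamic commitment data" lives in the key, exactly as the paragraph above describes.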
OP_CHECK[INPUT/OUTPUT]COVENANTVERIFY from https://github.com/ariard/bitcoin-contracting-primitives-wg/issues/25 could be a good candidate for this.
Spending
Having these capabilities, a spender would claim the HTLCs by presenting the full list of HTLCs and amounts, as well as the preimages for the HTLCs to be claimed.
Help wanted
The main question is whether this way of committing to the HTLCs can be done elegantly with any of the covenant proposals.
Also, any ideas or pointers would be greatly appreciated, as well as any feedback or questions!