tradecraftio / tradecraft

Tradecraft integration/staging tree https://tradecraft.io/download

[CheckSig] Add SCRIPT_VERIFY_REQUIRE_VALID_SIGS flag and make standard #25

Closed maaku closed 5 years ago

maaku commented 5 years ago

The SCRIPT_VERIFY_REQUIRE_VALID_SIGS flag enforces that every signature check passes whenever the signature field is non-empty (softfork safe, replaces BIP62 rule 7). Fixes #6.

It is still possible for the non-VERIFY forms of CHECKSIG or CHECKMULTISIG to be executed and return false, but to do so you MUST pass in an empty (OP_0) signature. The way this is implemented differs between the two opcodes:

CHECKSIG and CHECKSIGVERIFY will abort script validation with error code SCRIPT_ERR_FAILED_SIGNATURE_CHECK if the signature validation code returns false for any reason other than the passed-in signature being empty.
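For illustration, here is a minimal, self-contained sketch of that rule. The flag bit, error enum, and function name here are hypothetical stand-ins, not the actual Tradecraft interpreter code:

```cpp
#include <cstdint>
#include <vector>

// Illustrative stand-ins; the real interpreter has its own flag bits and error enum.
static const uint32_t SCRIPT_VERIFY_REQUIRE_VALID_SIGS = (1U << 20); // hypothetical bit position
enum ScriptError { SCRIPT_ERR_OK, SCRIPT_ERR_FAILED_SIGNATURE_CHECK };

// Returns false to abort script evaluation with *err set; otherwise *result
// holds the value a non-VERIFY CHECKSIG would push onto the stack.
bool CheckSigResult(bool sig_is_valid, const std::vector<uint8_t>& sig,
                    uint32_t flags, ScriptError* err, bool* result)
{
    *result = sig_is_valid;
    if (!sig_is_valid && (flags & SCRIPT_VERIFY_REQUIRE_VALID_SIGS) && !sig.empty()) {
        // A non-empty signature that fails verification aborts the script
        // rather than merely pushing false.
        *err = SCRIPT_ERR_FAILED_SIGNATURE_CHECK;
        return false;
    }
    *err = SCRIPT_ERR_OK;
    return true;
}
```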

CHECKMULTISIG and CHECKMULTISIGVERIFY present a significant challenge to enforcing this requirement in that the original data format did not indicate which public keys were matched with which signatures, other than the ordering. For a k-of-n multisig, there are n-choose-(n-k) possibilities. For example, a 2-of-3 multisig would have three public keys matched with two signatures, resulting in three possible assignments of pubkeys to signatures. In the original implementation this is done by attempting to validate a signature, starting with the first public key and the first signature, and then moving to the next pubkey if validation fails. It is not known in advance to the validator which attempts will fail.

Thankfully, however, a bug in the original implementation causes an extra, unused item to be removed from the stack after validation. Since this value has no prior consensus meaning, we use it as a bitfield indicating which pubkeys to skip.
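As a rough sketch of how such a skip bitfield could drive the signature-to-pubkey pairing (the function name, bit ordering, and calling convention are illustrative assumptions, not the PR's actual code):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Pair each provided signature with the pubkey the signer intends, skipping
// pubkeys whose bit is set in the (little-endian) skip bitfield taken from
// the dummy element.
bool PairSigsWithPubkeys(std::size_t num_pubkeys, std::size_t num_sigs,
                         const std::vector<uint8_t>& skip_bitfield,
                         std::vector<std::pair<std::size_t, std::size_t>>& pairs) // (sig, pubkey)
{
    std::size_t sig = 0;
    for (std::size_t key = 0; key < num_pubkeys; ++key) {
        bool skip = (key / 8) < skip_bitfield.size() &&
                    ((skip_bitfield[key / 8] >> (key % 8)) & 1);
        if (skip) continue;                 // signer declared no signature for this key
        if (sig == num_sigs) return false;  // more unskipped keys than signatures
        pairs.emplace_back(sig++, key);
    }
    return sig == num_sigs;                 // every signature must be consumed
}
```

With the pairing fixed up front, the validator never has to attempt a signature check that is expected to fail.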

Enforcing this requirement is a necessary precursor step to performing batch validation, since in a batch validation regime individual pubkey-signature combinations would not be checked for validity.

Like bitcoin's SCRIPT_VERIFY_NULLDUMMY, this also serves as a malleability fix since the bitmask value is provided by the witness.

Draft: I have found a more compact way to store the multisig bit mask. This PR will not be marked ready to merge until that is implemented, but the rest of the logic can be reviewed now. Also, the unit tests do not pass because they have not been updated yet.

kallewoof commented 5 years ago

4cf1a28 looks good to me.

maaku commented 5 years ago

Thanks. d6b79b8 (which I force-pushed immediately after) is basically the same. I had forgotten to guard a conditional with an if (require_valid_sigs && ...) check, which the unit tests caught.

kallewoof commented 5 years ago

OK. Yeah I couldn't find the change when I looked. Will recheck post-squash.

maaku commented 5 years ago

Making the SCRIPT_VERIFY_REQUIRE_VALID_SIGS flag standard now causes the multisig signing tests to fail, since they hard-code a zero dummy value. Those tests and the signing code will have to be updated once the optimized combinatorial skipped_pubkeys encoder is written.

maaku commented 5 years ago

@kallewoof : Here is the delta from what you reviewed: https://github.com/tradecraftio/tradecraft/compare/4cf1a28..d6b79b8

kallewoof commented 5 years ago

Neat! LGTM.

maaku commented 5 years ago

Rebased to upstream. Confirmed same code diff with:

$ git diff a33f17416..d6b79b856 >a
$ git diff 2d6203ebd..3530147f6 >b
$ diff a b
$

maaku commented 5 years ago

This is now the only technical issue remaining for 0.10, and it is pretty large. It's not necessary that REQUIRE_VALID_SIGS is included in 0.10, only that NULLDUMMY is not added to the STANDARD_SCRIPT_VERIFY_FLAGS. I'm going to open a separate PR to make this change, and then push this PR and associated issue to the rebase/0.11 project.

kallewoof commented 5 years ago

I think you screwed up the rebase? This now has 767 files changed and 21k lines.

Edit: or is this rebasing the 0.10 branch into this PR? I'm not sure what's going on.

maaku commented 5 years ago

The 0.10 branch changed once I started squashing commits for the release process. Once the v10.4 release is made I can rebase this on top.

That seems to really screw up GitHub's ability to figure out what's going on. It looks like there's still some work to be done in figuring out how to maintain this project as patches on top of Bitcoin without confusing GitHub unnecessarily.

kallewoof commented 5 years ago

It may be worth it to try

git checkout 'require-valid-sigs@{2019-03-03T23:31:48Z}'

and experiment with the rebase to see if you're doing something weird. I have gotten results like this in the past, but it's usually because I did something wrong. (For some reason I can't recall an example offhand, but..)

Edit: I wrote the wrong branch. Actually I think the issue here is that the rebase-0.10 branch does not have many of these commits, and this PR is now trying to add them on there..?

maaku commented 5 years ago

v11 released without this; bumping to v12

maaku commented 5 years ago

Rebased against the current 0.11-based development branch, and squashed into a single commit.

maaku commented 5 years ago

It's been hard to find time to sit down uninterrupted for a few hours to implement the final optimization that this pull request is blocked on, and reflecting on this fact is making me realize that this is a micro-optimization that is probably best not to include. So I'm marking this PR as merge-ready and in need of a final review now that it's been rebased and squashed, although there are still some unit tests to be written.

I'd also appreciate some feedback on dropping the optimization, and whether that is the right choice. The idea, in short, is that the bitmap is constrained and therefore an inefficient encoding in an information-theoretic sense. A CHECKMULTISIG with n public keys and k signatures requires that either the bitmap is all-ones (expected failure, only possible with non-VERIFY semantics), or that exactly n-k bits are set for the missing signatures. There are n-choose-k possible ways of setting those bits. So to compress the bitmap we define a standard enumeration of these possibilities, preferably with an exploitable bias (so that sorting the list of pubkeys by the prior probability of their availability at signing time results in smaller encodings most of the time), and write a method that does a 1:1 mapping between numbers and bitmaps.

For example, Liquid has an 11-of-15 functionary network. That requires a 15-bit bitfield, which is a two-byte push. However, there are only choose(15, 11) = 1365 possible ways of dropping 4 signatures. Encoding an ordinal number representing the Nth bitmap fits in a single-byte push in many more cases, and if all signatures are available then it's possible to choose which to drop in such a way as to make it a single opcode, e.g. OP_0.
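For concreteness, here is a rough sketch of the kind of ordinal-to-bitmap unranking such an enumeration would need, using the combinatorial number system. This is purely illustrative of the dropped optimization, not the encoding proposed in the PR:

```cpp
#include <cstdint>

// Binomial coefficient for small arguments (n <= 20 here).
uint64_t Choose(unsigned n, unsigned k)
{
    if (k > n) return 0;
    uint64_t r = 1;
    for (unsigned i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// Unrank: map an ordinal in [0, Choose(n, bits_set)) to the Nth n-bit mask
// with exactly bits_set bits set (combinatorial number system ordering).
// e.g. OrdinalToBitmap(0..1364, 15, 4) enumerates all skip masks for 11-of-15.
uint32_t OrdinalToBitmap(uint64_t ordinal, unsigned n, unsigned bits_set)
{
    uint32_t mask = 0;
    for (unsigned pos = n; pos-- > 0 && bits_set > 0; ) {
        uint64_t c = Choose(pos, bits_set);
        if (ordinal >= c) {        // the highest remaining set bit lands at `pos`
            ordinal -= c;
            mask |= 1U << pos;
            --bits_set;
        }
    }
    return mask;
}
```

The inverse (bitmap to ordinal) is the corresponding ranking function; both would have to be reimplemented consistently in every signing stack.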

However let's be clear: we're talking about a 20- to 30-line (guesstimate) complex function to convert a max 20-bit bitfield into a compressed form that saves at most a byte or two of witness space for scripts already measuring in the kilo-weight size. And it takes something straightforward and trivial for anyone to implement (a signature-absent bitfield) and turns it into a complex exercise in coding theory, which in reality means some standard method for doing the conversion will have to be written in a bunch of languages and take up space in the firmware of embedded signing devices.

My inner perfectionist bristles at the idea of choosing a space-wasteful consensus-critical encoding. But when I think about this critically, I don't think I can in good conscience push this complexity onto the ecosystem for such a meagre gain in encoding efficiency. So I no longer plan on implementing this optimization, which means this PR is feature-complete. It still needs unit tests, which are no longer blocked on the encoding change, but that's it.

kallewoof commented 5 years ago

I assume that if this optimization turned out to be desirable enough that we wanted it later, it could be introduced in a new witness version, right? If so, I think it's safe to leave optimizations out of it and keep things ~~stupid~~ simple for now.

maaku commented 5 years ago

@kallewoof The question I think is whether this optimization is worthwhile at all. In this circumstance it saves only a byte or two at best, and typically nothing since we're encoding whole bytes at a time. So I'm now feeling swayed by the argument that protocol simplicity should be preferred when the gain otherwise is so meagre.

kallewoof commented 5 years ago

@maaku Yeah, I agree with this statement. I was just pointing out that it could perhaps be tacked on at a later time.

maaku commented 5 years ago

Done, and ready for merge. Unit tests have been updated (and a few added), and the signing code and signature combining code modified to produce the multisig hint value.

In the process of fixing and writing unit tests, the logic involving the multisig hint value was encapsulated into its own class, MultiSigHint. This allows for better code reuse and clearer code overall.
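The thread doesn't spell out the class interface, but a hypothetical sketch of such an encapsulation might look like the following; the actual MultiSigHint in the code may differ in both shape and serialization:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of a hint object wrapping the skip bitfield.
class MultiSigHint
{
public:
    MultiSigHint(unsigned num_pubkeys, uint32_t skipped)
        : m_num_pubkeys(num_pubkeys), m_skipped(skipped) {}

    // True if the signer provided no signature for pubkey index i.
    bool IsSkipped(unsigned i) const { return (m_skipped >> i) & 1; }

    // Number of signatures the hint expects (unskipped pubkeys).
    unsigned ExpectedSigs() const
    {
        unsigned count = 0;
        for (unsigned i = 0; i < m_num_pubkeys; ++i)
            if (!IsSkipped(i)) ++count;
        return count;
    }

    // Serialize to the dummy stack element (little-endian, minimal length).
    std::vector<uint8_t> ToStackElement() const
    {
        std::vector<uint8_t> out;
        for (uint32_t v = m_skipped; v != 0; v >>= 8) out.push_back(v & 0xff);
        return out;
    }

private:
    unsigned m_num_pubkeys;
    uint32_t m_skipped;
};
```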