cosmos / cosmos-sdk

:chains: A Framework for Building High Value Public Blockchains :sparkles:
https://cosmos.network/
Apache License 2.0

Protobuf Transaction Signing #6078

Closed aaronc closed 4 years ago

aaronc commented 4 years ago

Problem Definition

The Cosmos SDK has historically used Amino JSON for signing transactions, whereas Amino binary is used for encoding. During the SDK's migration to protobuf, we made the preliminary decision to use a canonical protobuf JSON encoding for signing as described in https://github.com/regen-network/canonical-proto3.

As a consequence of #6030, the Cosmos SDK is moving in the direction of using protobuf's Any type for the transaction encoding and signing format. In this discussion, a number of participants have asked that we revisit the transaction signing discussion. The options that have been discussed/are available for consideration are outlined below.

It should be noted that it is theoretically possible to support more than one of the following options via an enum flag on the signature. Whether that should or should not be done is a related question.

Proposals

Feel free to suggest updates to these alternatives in the comments

(1) Protobuf canonical JSON

The official proto3 spec defines a canonical mapping to JSON. This is not really deterministic, however, so we define a canonical encoding on top of that using https://gibson042.github.io/canonicaljson-spec/.
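As a rough, hypothetical sketch of what that canonicalization step amounts to (illustration only, not the SDK's implementation): Go's encoding/json already emits object keys in sorted order with no insignificant whitespace, so a decode/re-encode round trip covers the two most visible requirements. The full canonicaljson spec additionally pins down number and string escaping, and sign docs keep large integers as strings, which sidesteps the float64 round-trip below.

package main

import (
	"encoding/json"
	"fmt"
)

// canonicalize re-encodes arbitrary JSON with sorted object keys and no
// insignificant whitespace. Numbers round-trip through float64 here, which
// is one reason sign docs encode large integers as strings.
func canonicalize(in []byte) ([]byte, error) {
	var v interface{}
	if err := json.Unmarshal(in, &v); err != nil {
		return nil, err
	}
	return json.Marshal(v) // map keys are emitted in sorted order
}

func main() {
	out, _ := canonicalize([]byte(`{"memo": "hi", "account_number": "4"}`))
	fmt.Println(string(out)) // {"account_number":"4","memo":"hi"}
}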

Pros:

Cons:

(2) Protobuf canonical binary

This involves re-encoding the protobuf used for transaction encoding canonically for signing - meaning that fields must be ordered and defaults omitted. This is how Weave does signing.

Pros:

Cons:

(3) Protobuf binary as encoded in transaction

This simply uses the protobuf encoding as broadcast in the transaction. This becomes a little easier for both signature creation and verification because of Any (although it could be done without Any too). Because Any wraps the raw bytes of the sdk.Msg, it is pretty easy to use these same exact bytes for signing and verification and only require that the SignDoc itself is encoded canonically, rather than every Msg.
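As a minimal, hypothetical illustration of why this helps (field numbers and layout here are assumptions, not the SDK's actual SignDoc): the signer can splice the already-encoded Any bytes into the sign doc as a length-delimited field, so only the thin outer envelope ever needs to be encoded deterministically.

package main

import (
	"encoding/binary"
	"fmt"
)

// appendBytesField appends a length-delimited protobuf field (wire type 2).
func appendBytesField(dst []byte, fieldNum int, payload []byte) []byte {
	dst = binary.AppendUvarint(dst, uint64(fieldNum)<<3|2)
	dst = binary.AppendUvarint(dst, uint64(len(payload)))
	return append(dst, payload...)
}

func main() {
	// The Any bytes exactly as they appear in the broadcast transaction;
	// they are embedded verbatim, never re-encoded.
	rawMsg := []byte{ /* pre-encoded Any */ }

	var signDoc []byte
	signDoc = appendBytesField(signDoc, 1, rawMsg)              // msgs (illustrative field number)
	signDoc = appendBytesField(signDoc, 3, []byte("a memo"))    // memo
	signDoc = appendBytesField(signDoc, 4, []byte("testchain")) // chain_id
	fmt.Printf("sign bytes: %x\n", signDoc)
}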

Pros:

Cons:

(4) Amino JSON

This is how txs are currently signed. The reason this is under consideration is that breaking Amino JSON signing would break many clients, especially the ledger app. Transactions could still be encoded with protobuf, and the /tx/encode endpoint could accept Amino JSON and return protobuf rather than amino binary for tx broadcasting. Some upfront work would be required to enable this, but it is possible.

Pros:

Cons:

(5) Custom Proto JSON

Extend (1) to support custom encoding of certain types, like bech32 addresses.

Pros:

Cons:


Related Question: Should we allow multiple signing algorithms?

Theoretically we can allow clients to use multiple signing algorithms and indicate which one they used as an enum flag on the Signature struct.

Pros:

Cons:


aaronc commented 4 years ago

I do want to say that after considering the above, my personal preference is to go with approach (3) (signing the raw binary) as default and to allow (4) Amino JSON for a limited period of time while clients transition.

(3) seems to offer the simplest UX for clients as well as the least number of malleability issues. Given that Any should minimize user errors from manually manipulating .proto files, I don't see that as being as big an issue as with Amino or the proto oneof approach.

Supporting (4) temporarily through a flag doesn't seem to introduce any new issues that aren't already here, and has the big benefit of not disrupting wallets, exchanges, etc. that don't have time to transition overnight.

zmanian commented 4 years ago

Thank you for writing this up @aaronc !

Here are the problems I see with 3. Protobuf deserializing in embedded/wasm environments is extremely difficult. This is why we are actually avoiding protobuf in Armistice.

This basically means that you can't decode the transactions inside your Trusted Computing Base and you need to extend your trust boundaries around signing to include another system that will deserialize your protobufs for you. This is going to be a disaster for practical security.

Strongly prefer 4 and 1, because Amino JSON serialization in good-enough form is pretty easy to implement and widely available. Proto3 canonical JSON actually looks okay, but I'm not really sure how widespread support for this is. I think prost doesn't support this?

aaronc commented 4 years ago

@zmanian why is protobuf deserialization in embedded environments so hard? Have you looked at stuff like https://github.com/nanopb/nanopb? Even if that didn't work, I don't think a hand-written decoder should be too hard. The JSON stuff in the ledger app appears to be mostly hand-coded.

I'm fine with enabling Amino JSON for compatibility, but we should have an alternative going forward that just requires .proto files.

tarcieri commented 4 years ago

Regarding this:

Theoretically we can allow clients to use multiple signing algorithms and indicate which one they used as an enum flag on the Signature struct.

...it's reminiscent of the alg field in JOSE JWT/JWS, which has been implicated in numerous security vulnerabilities (e.g. this recent one), namely via an ongoing history of implementation bugs which leverage an attacker-controlled alg into tricking a verifier into using the wrong algorithm.

An alternative is to make alg a property of public keys instead of signatures, ala X.509 Subject Public Key Info (SPKI).

This gives you a strong binding between an algorithm and a public key, known a priori to the verifier, and with that, a signature can name the "SPKI" (or SPKI hash)

zmanian commented 4 years ago

Some discussion here https://www.reddit.com/r/rust/comments/aequik/is_there_a_no_std_compatible_protobuf_library_out/

My guess would be that nanopb would be too big for the ledger.

Adding a C deserialization library seems very much against our goals of minimizing C in the TCB.

zmanian commented 4 years ago

Also pretty strongly support the idea of binding the serialization system to the public key to support forward migration!

tarcieri commented 4 years ago

@aaronc

why is protobuf deserialization in embedded environments so hard?

The Protobuf Schema language lacks descriptions for the (maximum) size of variable-length fields.

This means it isn't possible to codegen a struct with a fixed-sized APDU-like structure which is typically used in heapless (e.g. microcontroller) environments.

Libraries which do support Protobufs in embedded environments therefore tend to work a level of abstraction below a typical Protobuf library which generates message types directly from the schema. This is also bad from a code size perspective, because it punts all of the work of size-checking the underlying fields to the end user.

aaronc commented 4 years ago

...it's reminiscent of the alg field in JOSE JWT/JWS, which has been implicated in numerous security vulnerabilities (e.g. this recent one), namely via an ongoing history of implementation bugs which leverage an attacker-controlled alg into tricking a verifier into using the wrong algorithm.

Okay, so that's definitely a consideration for supporting multiple algorithms. I do want to note that we're talking about supporting maybe 2-3 algorithms, and "none" wouldn't be one of them. JWT seems to support ~20.

Also pretty strongly support the idea of binding the serialization system to the public key to support forward migration!

Would there be an option to change the binding to a newer system?

https://www.reddit.com/r/rust/comments/aequik/is_there_a_no_std_compatible_protobuf_library_out/

In that thread it seems there are at least 2 users who hacked together a way to do it. So it's definitely possible, just not standardized in a library yet.

The Protobuf Schema language lacks descriptions for the (maximum) size of variable-length fields.

This means it isn't possible to codegen a struct with a fixed-sized APDU-like structure which is typically used in heapless (e.g. microcontroller) environments.

How is this any different from JSON? Strings/byte arrays have no max length in JSON either. If you had fixed-size arrays, you would need to truncate strings with either JSON or protobuf. But I'm not even sure why you'd need to do that sort of copying into a struct. From my quick glance at the ledger app source code, it seems like one of the key things it's trying to do is display info in fields to users. You could do the same thing in protobuf with zero allocation. Just iteratively navigate through the message like you would with JSON and keep track of what level of nesting you're at (which should be possible with a fixed depth). Strings should be easier to extract from protobuf because you don't need to copy to remove escape chars. And addresses need a small array to convert from bytes to bech32.

Anyway, I definitely can understand the concern of not wanting to rewrite a bunch of firmware, thus my support for keeping an Amino JSON option. But these problems with protobuf to me seem solvable...
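To make the "iterate through the message without decoding into a struct" idea concrete, here is a rough sketch of a schema-less walk over the protobuf wire format; a ledger-style implementation would additionally recurse into length-delimited fields it knows are messages and render known field numbers with friendly labels.

package main

import (
	"encoding/binary"
	"fmt"
)

// walk iterates over the top-level fields of a protobuf-encoded message
// without allocating intermediate structs: read a tag, dispatch on the wire
// type, print or skip the value.
func walk(buf []byte, depth int) error {
	for len(buf) > 0 {
		key, n := binary.Uvarint(buf)
		if n <= 0 {
			return fmt.Errorf("bad tag")
		}
		buf = buf[n:]
		fieldNum, wireType := key>>3, key&0x7

		switch wireType {
		case 0: // varint
			v, m := binary.Uvarint(buf)
			if m <= 0 {
				return fmt.Errorf("bad varint")
			}
			fmt.Printf("%*sfield %d = %d\n", depth*2, "", fieldNum, v)
			buf = buf[m:]
		case 2: // length-delimited: string, bytes or embedded message
			l, m := binary.Uvarint(buf)
			if m <= 0 || uint64(len(buf)-m) < l {
				return fmt.Errorf("bad length")
			}
			fmt.Printf("%*sfield %d: %d bytes\n", depth*2, "", fieldNum, l)
			// With schema knowledge, recurse here for embedded messages:
			// walk(buf[m:uint64(m)+l], depth+1)
			buf = buf[uint64(m)+l:]
		case 1: // 64-bit
			if len(buf) < 8 {
				return fmt.Errorf("truncated fixed64")
			}
			buf = buf[8:]
		case 5: // 32-bit
			if len(buf) < 4 {
				return fmt.Errorf("truncated fixed32")
			}
			buf = buf[4:]
		default:
			return fmt.Errorf("unsupported wire type %d", wireType)
		}
	}
	return nil
}

func main() {
	// Hand-encoded wire bytes: field 1 = varint 150, field 2 = "hi".
	msg := []byte{0x08, 0x96, 0x01, 0x12, 0x02, 'h', 'i'}
	if err := walk(msg, 0); err != nil {
		panic(err)
	}
}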

tarcieri commented 4 years ago

How is this any different from JSON?

JSON and Protobufs are both problematic in this regard. There are some optimizations you can do for JSON as a proper context-free grammar, but ultimately on platforms with low memory the main one is ensuring either JSON or Protobufs have a well-known fixed-sized structure.

Just iteratively navigate through the message like you would with JSON and keep track of what level of nesting you're at

I've been working on this sort of pushdown automaton in Veriform, as it were.

For some extreme low-end embedded environments, even that sort of thing is too much (our target is a 500+MHz Cortex-A environment, whereas the problematic environments are much lower clocked Cortex-M ones with much smaller stacks)

ethanfrey commented 4 years ago

I want to jump in with an argument against Amino JSON. Basically, if we keep using that, we will need to support that tooling on all platforms. It's much less work than binary Amino, but over 2 years there has been amazingly little work by the core team to port any of Amino, and assuming all this magically happens "from the community" is wishful thinking. If we need to stay with Amino JSON, then I would say that porting client-side libraries in major languages (not just JS and Rust, but say Java/Kotlin and ObjC/Swift at the minimum, ideally Python and some more) comes into scope as part of this migration. If we can use tooling that already works out of the box in all these languages, then that is much easier.

The strong valid criticism I see above is that it is hard to parse Protobuf in an HSM.

My guess would be that nanopb would be too big for the ledger. Adding a C deserialization library seems very much against our goals of minimizing C in the TCB.

I encourage you to look at the Ledger app that Juan (same dev who wrote Cosmos Ledger app) wrote for IOV: https://github.com/iov-one/ledger-iov There isn't too much code there for the parsing. Actually grabbing a few fields from a predefined protobuf format is quite easy, and doesn't require parsing arbitrary structs into memory.

It parses the Protobuf signing format we use for transactions and displays it to the users. It didn't take him that long to do it (I believe less than the original Cosmos app). I just want to say that using protobuf is not some new and untested idea; it has been used for over 2 years now, on a mainnet. That IOV has no clue about business and marketing doesn't mean the code there has no technical merit. I advise you to borrow liberally.

aaronc commented 4 years ago

JSON and Protobufs are both problematic in this regard. There are some optimizations you can do for JSON as a proper context-free grammar, but ultimately on platforms with low memory the main one is ensuring either JSON or Protobufs have a well-known fixed-sized structure.

Okay, well it sounds like this problem would only be solved by something like ASN.1, which isn't really an option. But either way, it sounds like we can actually deal with the memory problem with an event-driven parser as opposed to decoding into a struct. So then the issue becomes code size, right?

Here I can see one potential argument in favor of JSON. JSON is self-describing so if you don't have the schema, you could still iteratively display each raw field on the JSON on a device like the ledger. To do that in protobuf, you would need to include the schema for every type or limit support to just a few types to limit code size. Is that one of the tradeoffs you're seeing @tarcieri ?

One point I will grant to Amino JSON is that it bech32-encodes addresses, pub keys, etc. Say you wanted to iteratively display every element of a JSON object on the ledger without the schema; with protobuf you would get base64 whenever you don't know the schema. Maybe that's not a hard problem to solve, but I do see it as a genuinely good thing about Amino JSON.

To do this with protobuf we would need a custom JSON serialization format which maybe indicated the bech32 type of bytes fields with an extension (i.e. bytes key = 1 [(cosmos_proto.bech32_type) = "valpub"]). That's maybe not a bad idea to include in the .proto files anyway, but as @ethanfrey noted the more custom work clients need to do the bigger the burden.

Anyway, I will include this as a new option (5) and update the pros and cons of the other options to reflect this discussion.

I do still think there is an elegance to approach (3) and if it did work for embedded devices, maybe following the example in https://github.com/iov-one/ledger-iov, that might make everyone's lives easier.

zmanian commented 4 years ago

This self-describing nature of JSON allows the Cosmos app to work with any number of chains out of the box.

ethanfrey commented 4 years ago

This self-describing nature of JSON allows the Cosmos app to work with any number of chains out of the box.

That and the fact that they all use a superset of the schema the ledger app understands.

It checked for e.g. .msgs[0].type == "cosmos-sdk/send" and then .msgs[0].data.amount[0].amount. If we encoded it in JSON but with different keys, this wouldn't work.

Note that with option (3), protobuf is also self-describing to a degree. At least you can check the type they claim and will not mix up cosmos-sdk/send and cosmos-sdk/burn. It will not know how to display any type that it was not compiled for. But then again the JSON parser wouldn't either - it can just display the raw JSON to the user. Is that mode actually used (displaying the raw JSON of the sign bytes to the end user)? If this is a common use case (pass raw bytes to the end user to interpret), then there is a big bonus for JSON. Otherwise, I don't see how self-describing helps an app that checks hardcoded fields - it just helps avoid mixups (like the Any type field does).

iramiller commented 4 years ago

If this is a common use-case (pass raw bytes to the end user to interpret), then there is a big bonus for JSON.

I want to caution against weighting this case too highly in the context of a generic user interface. A user interface that is made up of simply printing out the JSON key/value pairs without understanding the underlying message format will yield a strictly poor user experience for most "end" users. I feel like there is an important distinction between the audience of developers using the system and the users the developers are intending to support. The developers should be familiar with Proto and the tooling required to deal with it ... so the JSON step is, strictly speaking, extra effort that may or may not support the needs of the users the developers are working for.

tarcieri commented 4 years ago

@iramiller

I want to caution against weighting this case too highly in the context of a generic user interface. A user interface that is made up of simply printing out the json key/value pairs without understanding the underlying message format will yield a strictly poor user experience for most "end" users.

To further emphasize this, I believe Ledger has required moving from displaying raw JSON to a UI which extracts, displays, and confirms values in the message for exactly these reasons.

@aaronc

Okay, well sounds like this problem would only be solved by something like ASN.1 which isn't really an option.

For what it's worth, ASN.1 DER solves this problem poorly as well. Most embedded implementations of DER are actually BER parsers that don't even verify the BER is canonical (and therefore DER).

The most embedded friendly formats follow an "APDU"-like structure (i.e. fixed-sized fields everywhere). You can get similar properties out of either Protobufs or JSON if you ensure all of the fields in either a message are constant-length (by using e.g. fixed integer types and fixed-length bytes fields with Protobufs).

aaronc commented 4 years ago

The most embedded friendly formats follow an "APDU"-like structure (i.e. fixed-sized fields everywhere). You can get similar properties out of either Protobufs or JSON if you ensure all of the fields in either a message are constant-length (by using e.g. fixed integer types and fixed-length bytes fields with Protobufs).

@tarcieri Would you consider Cap'n Proto embedded friendly? (Not that it's an option anytime soon...)

tarcieri commented 4 years ago

Not particularly. Cap'n Proto is significantly more complicated than Protobufs (see for example message segments and inter-segment pointers)

webmaster128 commented 4 years ago

Great stuff here, everyone. A few additional 👍 / 👎 for the main list that you are free to merge somehow:


(1) Protobuf canonical JSON

Pros:

Cons:

(2)

Pros:

Cons:

~(3)~

~Cons:~ ~- Impossible due to circular dependency: Protobuf binary as encoded in transaction contains signatures but we don't have the signature(s) yet.~

(4)

Cons:

(5)

Pros:

(multiple signing algorithms?)

Cons:


@zmanian if there was a JSON document that is signed, would you expect a JSON->proto conversion to be possible, i.e. you only operate on JSON and assume this can be translated back to the transaction format understood by Tendermint (proto binary)? Or would it be sufficient to create a JSON document from a proto document (one-way function proto->JSON), which is then sent to chain?

Amino allows two-way mappings, but this is significantly harder to get right than one way mappings.


While IOV's Ledger app is great for IOV's use cases, I think it is important to note that as of now it only supports a single message type (with ongoing work to add a handful more). Chain ID and address prefix are compile-time constants with just a boolean testnet/mainnet flag.

aaronc commented 4 years ago

(3)

Cons:

  • Impossible due to circular dependency: Protobuf binary as encoded in transaction contains signatures but we don't have the signature(s) yet.

Not true. Maybe re-read how I framed it and look at how Any is encoded. I never suggested signing the transaction, just the SignDoc, which can contain the exact same pre-encoded Any msgs as the transaction.

webmaster128 commented 4 years ago

Sorry @aaronc, you're completely right. What would probably help (at least for me) is to repeat at the top what we are talking about: the encoding of the SignDoc structure from ADR 020 on master, plus the oneof-to-Any change similar to https://github.com/cosmos/cosmos-sdk/pull/6081.

zmanian commented 4 years ago

Just

To further emphasize this, I believe Ledger has required moving from displaying raw JSON to a UI which extracts, displays, and confirms values in the message for exactly these reasons.

My take here is that having schema aware signers should be an enhancement of the baseline signing experience.

Prior art in Ethereum, Bitcoin etc is that if your signer isn't aware of the schema you are signing then you are signing pretty much opaque bytes. By using json as the signing target, you get an enhanced experience if the signer is aware of the schema and fall back to something somewhat human readable.

I can imagine a format that is easier to implement in Rust and other languages than Amino JSON. Like this is sort of awkward https://github.com/iqlusioninc/deep_space/blob/develop/src/canonical_json.rs#L9-L34

But it's also approx. 10 LOC to implement.

I'm generally in favor of 1 or 4.

@zmanian if there was a JSON document that is signed, would you expect a JSON->proto conversion to be possible, i.e. you only operate on JSON and assume this can be translated back to the transaction format understood by Tendermint (proto binary)? Or would it be sufficient to create a JSON document from a proto document (one-way function proto->JSON), which is then sent to chain?

Amino allows two-way mappings, but this is significantly harder to get right than one way mappings.

It's totally fine if you need a protobuf schema to turn the json into bytes on the wire.

The general pattern is that signers are resource-constrained and difficult to update, while serializers are much more flexible.

webmaster128 commented 4 years ago

It's totally fine if you need a protobuf schema to turn the json into bytes on the wire.

@zmanian okay, that's level 1 independence. Level 2: what if it was not possible at all to map back JSON->proto? Instead you must use the proto doc from the composing environment plus the signature:

    p2p network            Composing environment      Signing environment
------------------        -----------------------    ---------------------

                            unsigned tx proto  --------->  SignDoc (JSON)
                                     |                          |
                                     |                          | sign
                                     |                          |
                                     v                          v
 signed tx bytes  <----------  signed tx proto  <-----------  signature
                    serialize 
aaronc commented 4 years ago

I'm generally in favor of 1 or 4.

So I'd like to take 4 (Amino JSON) off the table as a long-term solution. Short-term, sure. But if you want something like Amino JSON, let's consider 1 or 5, where all of the information is in the .proto files.

aaronc commented 4 years ago

@webmaster128 added your pros/cons to the main list (except the Amino JSON con which was already there worded differently)

zmanian commented 4 years ago

I love this diagram

The composing environment is expected to know how to take an unsigned proto tx and a signature and turn it into signed bytes.

The verifier is expected to know how to take signed bytes, generate a SignDoc, and verify a signature.

  chain/verifier       p2p network            Composing environment      Signing environment
-----------------  ------------------        -----------------------    ---------------------

                                               unsigned tx proto  --------->  SignDoc (JSON)
       verifier                                         |                          |
          ^                                             |                          | sign
          |                                             |                          |
          |                                             v                          v
 SignDoc (JSON) <-- signed tx bytes  <----------  signed tx proto  <-----------  signature
                                       serialize
webmaster128 commented 4 years ago

I love this diagram

Glad to hear that.

Where I am heading is: proto->JSON serialization is going to be non-trivial, but I am sure it can be done as described in (1), (4) or (5). The reverse operation (deserializing JSON->proto), however, is specified very openly, allowing all kinds of JSON variants that lead to the same proto document. This starts with allowing both numbers and strings to deserialize to int32, fixed32, uint32, int64, fixed64, uint64 ("Either numbers or strings are accepted.") and just gets more complicated when allowing different RFC3339 timezones as an input for proto's Timestamp ("Offsets other than "Z" are also accepted."; how to handle perfectly valid RFC3339 leap seconds?). I'm not saying multiple JSON representations that decode to the same SignDoc are necessarily insecure. But if this mapping needs to be supported, we'll have to do much more work specifying all the edge cases. (I don't buy getting feature X from library Y for free.)

When the JSON representation is only used for signing, we lose the current flow of broadcasting signed JSON to the REST server, which is a good thing in my opinion. I believe a client (tx composing and broadcasting environment; no privkey here) should be able to operate on proto, given a Cosmos-specific wrapper around a general-purpose proto lib. But I want to make sure there is consensus on this.

aaronc commented 4 years ago

I do want to re-iterate that there is something pretty elegant about (3) - just signing the raw binary.

All of the JSON solutions including the standards-based approach (1) require both a) a fair amount of additional client library support and b) substantial auditing to check for edge cases and malleability issues.

It seems that the biggest benefit of the JSON solutions is that we could just show raw JSON to users of the ledger if the ledger app doesn't have the full proto schema. But this convenience does come at a cost elsewhere.

With approach (3), you have both the least surface area for transaction malleability issues and the easiest implementation for composing and verification environments.

For hardware signing environments, there is going to be complexity whichever approach is used. Is the benefit of being able to show raw JSON as a fallback worth all the additional complexity elsewhere?

zmanian commented 4 years ago

Yes, the whole strategy of signing the raw binary produces a system that is far too cumbersome to extend.

An isolated signing environment brings little benefit if you don't have access to a secure display to confirm what you are signing.

The weak link in blockchain protocols is the humans that interact with them.

I can't emphasize enough that I am overwhelmingly opposed to signing non-self describing dataformats.

webmaster128 commented 4 years ago

I'm completely with @aaronc here (and @iramiller and others). Whatever I write about JSON signing is to be interpreted as "If JSON signing was required, how would it look?". From the beginning I favoured choosing one: either commit 100% to the protobuf document (i.e. sign the native byte representation of the protobuf doc) or commit 100% to the JSON document (i.e. store JSON in the transaction history). Any translation between the two will cause friction for all implementations as well as a serious specification overhead.

For historic reasons there is a general-purpose Ledger app "Cosmos (ATOM)" that seems to sign arbitrary message types and seems to work for any chain. But is this set in stone? Does this make sense at all? We already spoke about the poor user experience when displaying JSON on a Ledger. How confident can you be signing a medium-complexity message like

{"account_number":"4","chain_id":"testing","fee":{"amount":[{"amount":"12500","denom":"ucosm"}],"gas":"500000"},"memo":"Create an ERC20 instance for JADE","msgs":[{"type":"wasm/instantiate","value":{"code_id":"1","init_funds":[],"init_msg":{"decimals":18,"initial_balances":[{"address":"cosmos1pkptre7fdkl6gfrzlesjjvhxhlc3r4gmmk8rs6","amount":"189189189000000000000000000"},{"address":"cosmos17d0jcz59jf68g52vq38tuuncmwwjk42u6mcxej","amount":"189500000000000000000"}],"name":"Jade Token","symbol":"JADE"},"label":"JADE","sender":"cosmos1pkptre7fdkl6gfrzlesjjvhxhlc3r4gmmk8rs6"}}],"sequence":"3"}

without tooling to interpret and display this message? So I wonder, is a hardware signer really supposed to work for arbitrary messages?

The other thing I'd like to question is the use of a single Ledger app for multiple chains. E.g. Enigma uses the Cosmos Ledger app, but is that a good idea? The app uses the HD derivation path m/44'/118'/0'/0/a where 118 is the coin index for ATOM, so this is an app designed for signing on the Cosmos Hub. Re-using the same app on a different chain is at the very least a privacy issue: different Bech32 address prefixes hide the fact that a user uses the same public key on multiple chains, which can easily be linked across chains.

Just my 2cents. No idea how to weight those arguments.

webmaster128 commented 4 years ago

I was wondering, if all we need is a simple human-readable text dump of the proto document, why would that be JSON? If you are curious, you can look into my Note on alternative text dumps of a proto document for Cosmos SDK signing. Since I'm not convinced by the result, I'm not inlining it here. See it as brainstorming, or a way to challenge the status quo.

aaronc commented 4 years ago

Yeah I was starting to think the same thing.

If we want to optimize for human-readable, why not go all the way?

We could output YAML with Title Case fields:

From: cosmossdn248hsdgsdg
To: cosmossdgheg8hsdgoet3
Amount: 10atom

That would be pretty easy to parse and display on a ledger.

Or why not just embed ICU MessageFormat strings as extensions in the .proto files?

Something like:

Send {amount, coins} from {from, address} to {to, address}

would generate:

Send 10 atoms from cosmossdn248hsdgsdg to cosmossdgheg8hsdgoet3
zmanian commented 4 years ago

So I like this idea of having an embedded markup language to replace JSON as a tx encoding language; this is generally something I've been interested in for several years. It might even be something we could standardize with other protocols.

I'm also saying that if we have to get every custodial solution to do a major upgrade of their software and push a major change to the Ledger app through their process before the Cosmos Hub upgrades, the protobuf changes will have to be torn out and put in post-IBC upgrade in 2021.

There will either be a JSON signing mechanism for protobuf or there won't be an upgrade of the Cosmos Hub to protobuf anytime soon.

zmanian commented 4 years ago

The hard requirement is that the next protobuf upgrade must have the smallest possible breaking changes in signing.

We can plan future improvements that make a large scale change but expect something like 1 year of planning.

aaronc commented 4 years ago

@zmanian I don't think anyone is arguing that we disable amino signing before the ecosystem is ready. At least I haven't been. There should be a legacy signing mode, and /tx/encode should convert an amino tx to a protobuf tx. We deprecate it later, maybe after a year.

I do think we should enable a pure protobuf solution side-by-side as soon as protobuf goes live.

aaronc commented 4 years ago

Here's what I propose as a solution. We create a signature structure like this:

message Signature {
    PubKey pub_key = 1;
    bytes signature = 2;
    SignMode mode = 3;
}

enum SignMode {
    BASIC = 0;
    LEGACY = 1;
    EXTENDED = 2;
}

BASIC is approach (3): just signing the raw proto bytes that get passed in the Anys. It's the easiest to implement and the only solution which has an almost zero transaction malleability surface. This unblocks developers who want to use pure proto.

LEGACY is amino JSON for compatibility as described above.

EXTENDED is a human-readable encoding that may actually get fleshed out later. One thing I want to note about this human-readable encoding is that the more involved it gets (say MessageFormat), the more transaction malleability could be an issue (I can think of several issues right away) and the more auditing/static linting that will be needed. But there is a pretty easy solution, just always concat the raw bytes from the BASIC approach (3) with the human readable text in the SignDoc. Then I don't think malleability is an issue no matter how complex the human readable format is.

So SignDoc could look like:

message SignDoc {
    Any msgs = 1;
    Fee fee = 2;
    string memo = 3;
    string chain_id = 4;
    uint64 account_number = 5;
    uint64 sequence = 6;
    string human_readable_text = 7;
}

So the human_readable_text just gets optionally appended to the SignDoc when using EXTENDED mode. Then the human readable text could be anything. We could support multiple formats. Maybe we use MessageFormat and support translations to 6 languages! I don't know...

But this human-readable solution can come later. We start with the easy, straightforward approach that satisfies compatibility (Amino) and pure proto.
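On the verification side, a hypothetical sketch of the dispatch this enum implies; Tx and the per-mode sign-bytes helpers are placeholders for the encodings discussed in this thread, not existing SDK functions:

package signing

import "fmt"

// SignMode mirrors the proposed enum above.
type SignMode int32

const (
	SignModeBasic    SignMode = 0 // raw proto SignDoc bytes, reusing the tx's Any bytes
	SignModeLegacy   SignMode = 1 // Amino JSON, kept for compatibility
	SignModeExtended SignMode = 2 // human-readable text concatenated with the BASIC bytes
)

// Tx stands in for the decoded transaction (msgs, fee, memo, ...).
type Tx struct{}

// Placeholder sign-bytes builders for the three modes.
func basicSignBytes(tx Tx) ([]byte, error)    { return nil, nil }
func legacySignBytes(tx Tx) ([]byte, error)   { return nil, nil }
func extendedSignBytes(tx Tx) ([]byte, error) { return nil, nil }

// signBytes reconstructs the bytes a signature claims to have been made over,
// based on the mode carried in the Signature.
func signBytes(mode SignMode, tx Tx) ([]byte, error) {
	switch mode {
	case SignModeBasic:
		return basicSignBytes(tx)
	case SignModeLegacy:
		return legacySignBytes(tx)
	case SignModeExtended:
		return extendedSignBytes(tx)
	default:
		return nil, fmt.Errorf("unknown sign mode %d", mode)
	}
}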

How does that sound?

zmanian commented 4 years ago

This sounds promising.

What I am hoping is that we can build a language where you can take a byte stream + a markup stream and then securely display the contents of the byte stream without full deserialization.

You probably need to commit to the markup stream during signing and verifying, but this should be great.

iramiller commented 4 years ago

I have been an advocate of the raw-bytes Any message signing approach for many reasons (concise, secure, etc.), which I have tried to express in comments here and in other places. For the uses I have and foresee, it seems to be the most elegant and straightforward approach.

The signature enum mentioned above seems perfect for meeting the migration need and preserving the ability to explore other signature methods that may be available in the future. Discipline will be required to ensure this flexibility does not lead to retention of options that are poor choices for security in the interest of backwards compatibility. That is a sad crypto story that has been played out many times with tragic consequences.

aaronc commented 4 years ago

That is a sad crypto story that has been played out many times with tragic consequences.

@iramiller could you say more about what you mean here?

tarcieri commented 4 years ago

@aaronc https://en.wikipedia.org/wiki/Downgrade_attack

aaronc commented 4 years ago

Glad to see there's some alignment around moving forward with this proposal. I think it unblocks the process at the present moment and leaves the door open for a more ideal alternative.

I am planning to start writing this up as an update to ADR 020 and will share that for comments when it's ready.


If we are going to move forward with a human-readable format, I do want that to be a project that comes to fruition sometime in the next few months (not years) after the initial .proto release. In the interest of being concrete, is either of the options I presented - YAML and MessageFormat - in the rough ballpark of the human-readable format we want, @zmanian? Do we want something human-readable but structured, like YAML? A human-readable sentence/statement, like MessageFormat? Both?

Also one small tweak to the SignDoc above - we don't want human_readable_text as a field on it. That would force embedded devices to still parse protobuf! Instead we can just concatenate the human-readable text and the protobuf binary and use length-prefixing or null-terminated strings so that parsing out the text part is trivial.
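A small, hypothetical sketch of that layout (the uvarint length prefix is an assumption for illustration, not a settled spec): the signer hashes the length-prefixed text followed by the raw SignDoc bytes, so an embedded device can show the text by reading one varint and a string, without touching any protobuf.

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// extendedSignBytes length-prefixes the human-readable text and appends the
// raw protobuf SignDoc bytes unchanged.
func extendedSignBytes(humanText string, protoSignDoc []byte) []byte {
	out := binary.AppendUvarint(nil, uint64(len(humanText)))
	out = append(out, humanText...)
	return append(out, protoSignDoc...)
}

func main() {
	text := "Send 10 atoms from cosmossdn248hsdgsdg to cosmossdgheg8hsdgoet3"
	doc := []byte{ /* raw SignDoc bytes as in the BASIC mode */ }
	digest := sha256.Sum256(extendedSignBytes(text, doc))
	fmt.Printf("digest to sign: %x\n", digest)
}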

tarcieri commented 4 years ago

Another option I am somewhat loath to suggest, due to their long history of never gaining traction, is canonical s-expressions (a.k.a. csexps).

They do fit the bill of being a "human readable" (for certain definitions of that phrase) format with a relatively straightforward encoding which is both truly canonical and "binary safe" in ways JSON is not.

For these sorts of applications they're perhaps most notable for their usage in SPKI/SDSI.

webmaster128 commented 4 years ago

The SignMode probably makes sense in general.

I assume the only reason to use Amino JSON would be to keep a 100% compatible signing solution for existing signers. They'd need to be unaware of the existence of the signing mode, which works with the Signature layout from above. This would require a few things to be specified:

Would this also require composing new transactions in Amino JSON and as a consequence a conversion back to protobuf?

If Amino JSON signing existed, is there still a need for a second kind of signable text dump format (called EXTENDED above)?

aaronc commented 4 years ago

This would require a few things to be specified:

  • A map from Any type URLs to Amino types
  • A list of custom proto types with explicit JSON representations (like e.g. addresses that result in bech32; what about timestamps?)
  • All the nullability rules
  • A model to byte spec like https://gibson042.github.io/canonicaljson-spec/ for Amino
  • ... (probably more – I never really worked with Amino)

All of this is basically specified in the existing Amino Go implementation, and clients have, as far as I know, manually implemented each type they support in JS.

Would this also require composing new transactions in Amino JSON and as a consequence a conversion back to protobuf?

Yes, we would need to implement it at the /tx/encode endpoint to support existing clients. Because all proto types would also support amino encoding, it's mainly a matter of copying the data from the existing tx to the new tx. It shouldn't be super involved.

If Amino JSON signing existed, is there still a need for a second kind of signable text dump format (called EXTENDED above)?

I really don't think Amino JSON is a good long-term solution. As you have outlined above, there are a lot of things needed for Amino JSON that aren't defined in the .proto files or that differ significantly (like type URLs). If we are supporting a signable human-readable text format, I want it to be derived directly from information in the .proto files - we can use extensions if needed, but it shouldn't be a different format. Also, it seems like we've already established that JSON isn't the best if we actually want something human-readable. But if it is JSON, please let it be some variation of proto JSON.

webmaster128 commented 4 years ago

All of this is basically specified in the existing Amino go implementation and clients have as far as I know manually implemented each type they support in JS.

hmm, [censored thoughts about Amino]. So in this scenario you cannot convert a proto document to an Amino JSON representation without converting it back to Go structs as an intermediate step?

If that's the case, then we have a clear differentiation for the third sign mode type: a textual representation of a proto document, not a Go struct.

zmanian commented 4 years ago

the property we want to enforce is that the verifier should be checking that any textual representation of the protobuf matches the data and semantics of the protobuf.

aaronc commented 4 years ago

hmm, [censored thoughts about Amino]. So in this scenario you cannot convert a proto document to a Amino JSON representation without converting it back to Go structs as an intermediate step?

Yes, not without a lot of infrastructure that doesn't exist.

webmaster128 commented 4 years ago

Wouldn't it be a missed opportunity to not drop Amino signing now, at a point where a massively breaking update is coming anyway? Keeping Amino signing makes the protobuf migration fragile, as the standard tooling (proto blob + .proto schema definition + maybe custom serializers for certain well-known types) is not sufficient to verify a signature. We're leaving the land of specifications and cross-language support again and depending on gogoproto's casttype annotation, the Go types like e.g. AccAddress, and a custom JSON serialization for each type. There is just no in-spec way to encode a bytes field as bech32 without having a specific type. The resulting signature does not sign a proto document, but a Go structure. Maintaining this infrastructure is a heavy burden, and it excludes signature verifiers other than the reference implementation. My gut feeling is it also shifts the development focus in the wrong direction: what's a "message"? A Go struct or a proto document?

This seems to be turning into a political challenge rather than a technical one. On the one side, there are the ATOM holders and their existing signing solutions (side note: not a single private or public key would change when the signing solution changes). On the other side there are application-specific blockchain creators waiting to unleash the potential of protobuf message types. And somewhere in between there is IBC. That's at least what I see. I cannot contribute much to those challenges as I'm neither in charge of nor aware of any kind of development or migration roadmap. But it would probably make sense to take a step back and review the requirements.

aaronc commented 4 years ago

As I understand it, not supporting amino JSON as an option (not the default) would delay protobuf by maybe a year. Verification other than the reference implementation is already not possible. We're trying to move there as soon as we can. Zones also do not need to choose to support amino signing. I think we should move forward, however, with a conversation about what to replace amino JSON with that's human-readable, so that the ledger, etc. can migrate sooner rather than later and other signature verification solutions can work.

ethanfrey commented 4 years ago

@webmaster128

On the one side, there is the ATOM holders and their existing signing solutions (side note: not a single private or public key would change when the signing solution changes).

@aaronc

As l understand it, not supporting amino json as an option (not the default) would delay protobuf by maybe a year.

The only arguments that it is impossible to leave Amino JSON have come from @zmanian. It would be nice to get more evidence/voices on this one. I agree that this is breaking for the ledger app, but it is also possible to write a ledger app that handles the other proposed signing formats. I doubt, however, that changing the ledger app will take a year if the work is funded and high priority (e.g. the ICF funds it, or Cosmos Hub governance).

The fundamental missing info is: besides the ledger app and some existing Go and JS code, what else signs cosmos-sdk transactions? And what are their requirements? This gives us an objective point to analyze the "delay by 1 year" part. I do agree it will cause more work (ledger rewrite), but if that starts in the next few weeks, I believe this would be ready before the protobuf/ibc/etc work is audited and ready to go on the cosmos hub.

However if there are any other custom solutions, that does change the equation.

From what I have inferred from various chats, a number of validators have custom (or shared?) signing solutions based on YubiKey. But AFAIK, those are signing the tendermint block headers, which is a different discussion - we don't have to change that signing behavior when changing the Cosmos-SDK transaction signing.

I think the enum solution is a workable solution if such a large scale migration really is impossible in a realistic timeframe, but I would love to understand why.

zmanian commented 4 years ago

Here is the size of the ecosystem that will be broken by changing the signing.

  1. Changes to the ledger app have typically taken months. We would also have to create a new contract through the ICF for Juan to do this work. We have been iterating on changes to get out of developer mode for more than 6 months. Changes to the signing system will reset all of that work.

  2. There are more than 50 exchanges and custodians that support the Cosmos Hub. Almost every one has a unique implementation and integration with their custodial systems. Some of these implementations run within the secure environments of their HSMs. I've seen signing implementations in Java, Rust, Go, C++ and C.

  3. There are at least a dozen non-custodial wallets that support the Cosmos Hub.

A coordinated change across 100 different software systems will take months to a year to execute.

The people who maintain these pieces of software aren't going to show up in this issue and explain how breaking the signature system will affect them, but they are by far some of the biggest stakeholders in these changes.