onflow / cadence

Cadence, the resource-oriented smart contract programming language πŸƒβ€β™‚οΈ
https://cadence-lang.org
Apache License 2.0

Cadence Compact Format (CCF) Specification RC and CCF Codec (fully self-describing mode) #2157

Closed fxamacker closed 1 year ago

fxamacker commented 1 year ago

Problem

Currently, Cadence external values (such as events) are encoded using JSON-Cadence Data Interchange Format. The JSON-based format is a human-readable text encoding that is verbose, inefficient, and doesn't define deterministic encoding (canonical format).
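For a rough sense of the size difference, here is a toy sketch (not the real codecs): `jsonCDCStyle` mimics the general shape of a JSON-CDC value (type name plus stringified number), while `cborStyle` is a minimal CBOR unsigned-integer encoder.

```go
package main

import (
	"encoding/binary"
	"encoding/json"
	"fmt"
)

// jsonCDCStyle mimics the shape of a JSON-Cadence Data Interchange value:
// a type name and a stringified number (illustrative, not the full spec).
func jsonCDCStyle(v uint64) []byte {
	b, _ := json.Marshal(map[string]string{
		"type":  "UInt64",
		"value": fmt.Sprintf("%d", v),
	})
	return b
}

// cborStyle encodes the same integer as a CBOR unsigned integer
// (major type 0) in its smallest form, per RFC 8949.
func cborStyle(v uint64) []byte {
	switch {
	case v < 24:
		return []byte{byte(v)} // value fits in the head byte itself
	case v <= 0xff:
		return []byte{0x18, byte(v)}
	case v <= 0xffff:
		b := make([]byte, 3)
		b[0] = 0x19
		binary.BigEndian.PutUint16(b[1:], uint16(v))
		return b
	case v <= 0xffffffff:
		b := make([]byte, 5)
		b[0] = 0x1a
		binary.BigEndian.PutUint32(b[1:], uint32(v))
		return b
	default:
		b := make([]byte, 9)
		b[0] = 0x1b
		binary.BigEndian.PutUint64(b[1:], v)
		return b
	}
}

func main() {
	v := uint64(42)
	fmt.Printf("JSON text:   %s (%d bytes)\n", jsonCDCStyle(v), len(jsonCDCStyle(v)))
	fmt.Printf("CBOR binary: %x (%d bytes)\n", cborStyle(v), len(cborStyle(v)))
}
```

A single `UInt64` costs dozens of bytes as quoted JSON text but only a byte or two as a binary integer; the gap compounds across every field of every event.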

Proposed Solution

Design and specify Cadence Compact Format (CCF) as a more compact, efficient, and deterministic alternative to the JSON-based format.

We can leverage an existing data format's data model to reduce complexity, cost, and risks to CCF specs and codecs.

Additionally, we can design and implement a CCF codec by using an existing codec (e.g. the same way COSE codec uses CBOR codec under the hood).

For more info about CCF requirements and design considerations, see the Objectives section of the Draft CCF Specifications.

Scope

The CCF Specifications specify only the Cadence Compact Format.

It is outside the scope of CCF Specifications to specify individual CCF-based formats or protocols (e.g. events).

Introduction

CCF is a data format that allows compact, efficient, and deterministic encoding of Cadence external values.

Cadence external values (e.g. events, transaction arguments, etc.) have been encoded using JSON-CDC, which is inefficient, verbose, and doesn't define deterministic encoding.

The same FeesDeducted event on the Flow blockchain can encode to 298 bytes in JSON-CDC, 118 bytes in CCF (fully self-describing mode), or 20 bytes in CCF (partially self-describing mode).

CCF defines all requirements for deterministic encoding (sort orders, smallest encoded forms, and Cadence-specific requirements) to allow CCF codecs implemented in different programming languages to produce the same deterministic encodings.
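As one illustration of a "smallest encoded form" rule, here is a hedged sketch using CBOR's preferred serialization (RFC 8949) as the analogue of CCF's requirement; `uintHeadIsSmallest` is a hypothetical helper, not part of the CCF codec, and only handles unsigned-integer heads.

```go
package main

import "fmt"

// uintHeadIsSmallest reports whether a CBOR major-type-0 (unsigned int)
// head uses the shortest possible encoding for its value, as required by
// RFC 8949 preferred serialization. Returns the value and the verdict.
// (Toy sketch: only handles major type 0 with definite heads.)
func uintHeadIsSmallest(b []byte) (uint64, bool, error) {
	if len(b) == 0 {
		return 0, false, fmt.Errorf("empty input")
	}
	if b[0]>>5 != 0 {
		return 0, false, fmt.Errorf("not an unsigned integer")
	}
	ai := b[0] & 0x1f // additional info: 0..23 immediate, 24..27 = 1/2/4/8 bytes
	if ai < 24 {
		return uint64(ai), true, nil // immediate values are always smallest
	}
	n := 1 << (ai - 24) // argument length in bytes
	if ai > 27 || len(b) < 1+n {
		return 0, false, fmt.Errorf("malformed head")
	}
	var v uint64
	for _, x := range b[1 : 1+n] {
		v = v<<8 | uint64(x)
	}
	// Smallest form: the value must not fit in the next-shorter encoding.
	switch n {
	case 1:
		return v, v >= 24, nil
	case 2:
		return v, v > 0xff, nil
	case 4:
		return v, v > 0xffff, nil
	default: // 8
		return v, v > 0xffffffff, nil
	}
}

func main() {
	v, ok, _ := uintHeadIsSmallest([]byte{0x18, 0x2a}) // 42 in smallest form
	fmt.Println(v, ok)
	v, ok, _ = uintHeadIsSmallest([]byte{0x19, 0x00, 0x2a}) // 42, oversized head
	fmt.Println(v, ok)
}
```

A deterministic decoder can reject the second encoding even though it carries the same value, which is what keeps "one value, one encoding" true across independent codec implementations.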

Some requirements (such as "Deterministic CCF Encoding Requirements") are defined as optional. Each CCF-based format or protocol can have its specification state how CCF options are used. This allows each protocol to balance tradeoffs such as compatibility, determinism, speed, encoded data size, etc.

CCF uses CBOR and is designed to allow efficient detection and rejection of malformed messages without creating Cadence objects. This allows more costly checks for validity, etc. to be performed only on well-formed messages.
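The idea of rejecting malformed messages before building any objects can be sketched for a tiny CBOR subset (hypothetical `wellFormed`/`skipItem` helpers; a real checker covers all major types, indefinite lengths, nesting limits, etc.):

```go
package main

import (
	"errors"
	"fmt"
)

// wellFormed reports whether data is a single well-formed item of a small
// CBOR subset (unsigned ints, byte/text strings, definite-length arrays
// and maps). It only scans heads and lengths -- no values or objects are
// built -- mirroring how a decoder can cheaply reject malformed input
// before running more costly validity checks.
func wellFormed(data []byte) bool {
	end, err := skipItem(data, 0)
	return err == nil && end == len(data)
}

func skipItem(data []byte, pos int) (int, error) {
	if pos >= len(data) {
		return 0, errors.New("truncated")
	}
	major := data[pos] >> 5
	ai := data[pos] & 0x1f
	pos++
	var arg uint64
	if ai < 24 {
		arg = uint64(ai)
	} else if ai <= 27 {
		n := 1 << (ai - 24)
		if pos+n > len(data) {
			return 0, errors.New("truncated head")
		}
		for _, b := range data[pos : pos+n] {
			arg = arg<<8 | uint64(b)
		}
		pos += n
	} else {
		return 0, errors.New("unsupported head")
	}
	switch major {
	case 0: // unsigned int: head only
		return pos, nil
	case 2, 3: // byte/text string: arg payload bytes follow
		if uint64(pos)+arg > uint64(len(data)) {
			return 0, errors.New("truncated string")
		}
		return pos + int(arg), nil
	case 4, 5: // array/map: arg (or 2*arg) nested items follow
		count := arg
		if major == 5 {
			count *= 2
		}
		var err error
		for i := uint64(0); i < count; i++ {
			if pos, err = skipItem(data, pos); err != nil {
				return 0, err
			}
		}
		return pos, nil
	default:
		return 0, errors.New("unsupported major type")
	}
}

func main() {
	fmt.Println(wellFormed([]byte{0x82, 0x01, 0x02})) // array [1, 2]
	fmt.Println(wellFormed([]byte{0x82, 0x01}))       // truncated array
}
```

The scan touches each byte once and allocates nothing, so garbage input is rejected at near-I/O cost.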

CBOR is an Internet Standard defined by IETF STD 94. CBOR is designed to be relevant for decades and is used by data formats and protocols such as W3C WebAuthn, C-DNS (IETF RFC 8618), COSE (IETF STD 96), CWT (IETF RFC 8392), etc.

Preliminary Benchmark Comparisons (obsolete)

We are not comparing apples to apples. Prior formats (CBF and JSON-Cadence Data Interchange) didn't specify requirements for validity, sorting, etc.

At this time, the CCF decoder doesn't include an option to check for "Preferred Serialization" (encoding to the smallest size).

These informal and preliminary benchmarks used CCF in fully self-describing mode (see size comparisons above).

```
$ benchstat bench_json_events_48k.log bench_ccf_events_48k.log 
goos: linux
goarch: amd64
pkg: github.com/onflow/cadence/encoding/ccf
cpu: 13th Gen Intel(R) Core(TM) i5-13600K
                     β”‚ bench_json_events_48k.log β”‚      bench_ccf_events_48k.log       β”‚
                     β”‚          sec/op           β”‚   sec/op     vs base                β”‚
EncodeBatchEvents-20                 96.61m Β± 4%   70.73m Β± 3%  -26.79% (p=0.000 n=10)
DecodeBatchEvents-20                 647.7m Β± 3%   157.5m Β± 3%  -75.68% (p=0.000 n=10)
geomean                              250.1m        105.5m       -57.81%

                     β”‚ bench_json_events_48k.log β”‚       bench_ccf_events_48k.log       β”‚
                     β”‚           B/op            β”‚     B/op      vs base                β”‚
EncodeBatchEvents-20                32.45Mi Β± 0%   25.82Mi Β± 0%  -20.45% (p=0.000 n=10)
DecodeBatchEvents-20               234.97Mi Β± 0%   56.16Mi Β± 0%  -76.10% (p=0.000 n=10)
geomean                             87.32Mi        38.08Mi       -56.39%

                     β”‚ bench_json_events_48k.log β”‚      bench_ccf_events_48k.log       β”‚
                     β”‚         allocs/op         β”‚  allocs/op   vs base                β”‚
EncodeBatchEvents-20                 756.6k Β± 0%   370.4k Β± 0%  -51.05% (p=0.000 n=10)
DecodeBatchEvents-20                 4.746M Β± 0%   1.288M Β± 0%  -72.86% (p=0.000 n=10)
geomean                              1.895M        690.7k       -63.55%
```

NOTE: These benchmarks used JSON-CDC (298-byte messages) and CCF in fully self-describing mode (118-byte messages), rather than CCF in partially self-describing mode (20-byte messages).

TODO

Next steps, such as integration and use of CCF for events, transaction arguments, etc. are outside the scope of this epic.


bluesign commented 1 year ago

is this related to previous work done in this area? or is it started from scratch?

PS: notion doc is not accessible ( initial requirements )

fxamacker commented 1 year ago

EDIT: clarified the answer to remove ambiguity, renamed the CCF "design" section to "objectives", and replaced a wall of text with the CCF Abstract.

Hi @bluesign

is this related to previous work done in this area? or is it started from scratch?

The requirements/objectives are related to previous work but design & implementation is from scratch:

Some reasons for CCF replacing previous work (the proposed Cadence Binary Format) are mentioned in https://github.com/onflow/flow-go/issues/3448. E.g., it didn't have all required features (type and data were not separate, so redundant Cadence type info couldn't be eliminated from messages) and it didn't leverage existing data formats.

CCF will close all 15 open issues in onflow/cadence related to Cadence Binary Format (CBF).

BTW, I just updated text of this epic to add Preliminary Size and Benchmark Comparisons section. CCF's preliminary encoding size and benchmark comparisons are at https://github.com/onflow/flow-go/issues/3593.

notion doc is not accessible ( initial requirements )

The initial requirements in Notion are captured by the Objectives section of the CCF Specifications, which is a superset of the initial requirements doc. For example, it adds early detection of malformed data (so CCF codecs don't need to create Cadence objects when attempting to decode malformed data).

See Draft CCF Specification for more details (superset of initial requirements from notion is listed in the Objectives section). For convenience, here's the abstract:

Abstract

Cadence Compact Format (CCF) is a data format designed for compact, efficient, and deterministic encoding of Cadence external values.

Cadence is a resource-oriented programming language that introduces new features to smart contract programming. It's used by Flow blockchain and has a syntax inspired by Swift, Kotlin, and Rust. Its use of resource types maps well to the Move language.

CCF can be used as a hybrid data format. CCF-based messages can be fully self-describing or partially self-describing. Both are more compact than JSON-based messages. CCF-based protocols can send Cadence metadata just once for all messages of that type. Malformed data can be detected without Cadence metadata and without creating Cadence objects.

bluesign commented 1 year ago

thanks @fxamacker, I read in detail, it is really well designed and solid.

fxamacker commented 1 year ago

Thanks @bluesign for taking a look at the draft specs. I updated my reply (it's still long but less so). :smile:

More detailed requirements were added to specs and PR #2364 has the CCF codec implementing CCF Specification (RC1). PR #2364 has more up-to-date info, including size & benchmark comparisons using 48,000+ events from a mainnet transaction.

bluesign commented 1 year ago

thanks @fxamacker I just checked the benchmarks, but I feel CCF should be a lot faster. Plugging something like go-json etc, would easily make them similar encoding performances. ( which in those events probably 30-40% time is spent on encoding UFix64 to String for json )

fxamacker commented 1 year ago

I just checked the benchmarks, but I feel CCF should be a lot faster. Plugging something like go-json etc, would easily make them similar encoding performances.

Hi @bluesign,

Yes, I also think CCF codec can be faster (it will be :smile:). There are tradeoffs to get deterministic encoding, compact size, etc. For the first implementation, I'm focused on correctness because CCF is already faster, uses less memory, and produces smaller encoded size than JSON.

There are possible optimizations to increase encoding performance. I want to check use cases with Cadence and FVM team before implementing them. Some optimizations may be at CCF level, some at CCF-based protocol level. We'll see.

Apples to Apples Speed Comparisons

Speed comparisons need to account for extra requirements such as deterministic encoding, extra data validation, and smaller encoded data size. JSON-Cadence Data Interchange (JSON-CDC) doesn't sort and isn't deterministic.

As noted earlier, complying with extra requirements such as determinism, sorting, compact size, etc. requires extra processing and memory but CCF is still faster, uses less memory, and produces smaller encoded size.
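The sorting part of that extra work can be sketched as follows (an assumption-laden sketch modeled on RFC 8949 core deterministic encoding, which orders map entries bytewise by encoded key; CCF's actual sort orders are defined in its spec, and the `entry` type here is hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// entry is an already-encoded key/value pair (the encodings are
// placeholders here; in a real codec they would be CBOR-encoded values).
type entry struct{ key, val []byte }

// sortEntries orders map entries by bytewise-lexicographic comparison of
// their encoded keys, as in RFC 8949 core deterministic encoding. Sorting
// in the encoder makes logically equal maps encode to identical bytes
// regardless of the order entries were inserted.
func sortEntries(entries []entry) {
	sort.Slice(entries, func(i, j int) bool {
		return bytes.Compare(entries[i].key, entries[j].key) < 0
	})
}

func main() {
	a := []entry{{[]byte{0x02}, []byte("b")}, {[]byte{0x01}, []byte("a")}}
	b := []entry{{[]byte{0x01}, []byte("a")}, {[]byte{0x02}, []byte("b")}}
	sortEntries(a)
	sortEntries(b)
	fmt.Println(bytes.Equal(a[0].key, b[0].key)) // same order after sorting
}
```

The sort costs O(n log n) comparisons per map at encode time, which is part of why a deterministic encoder does more work than one that simply emits entries in whatever order it receives them.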

After fuzzing is in place, CCF codec and CCF-based protocols can be optimized for even faster speed if needed. :smile:

bluesign commented 1 year ago

yeah I meant in a good way that there is a big buffer for optimisations. :)

JSON-Cadence Data Interchange (JSON-CDC) doesn't sort and isn't deterministic.

how this can be possible? I don't believe JSON-CDC (encoder) can be non-deterministic, can you explain a bit on that?

CCF decoder verifies event data is sorted, etc. CCF encoder avoids encoding redundant Cadence type information (e.g. not repeatedly encoding same Cadence type for every element of array).

this shouldn't be a performance bottleneck, as it is not sorting. second one is benefit to performance.

CCF encoder sorts data to encode event data deterministically.

sort here also confused me too ( but probably related to first question )

Btw: I really love CCF. My comments usually sound a bit grumpy or criticising as English is my second language; I am just trying to understand it deeper.

fxamacker commented 1 year ago

Hi @bluesign

my comments usually sound a bit grumpy or criticising as English is my second language

Your English is excellent and didn't sound negative at all! πŸ‘

this shouldn't be a performance bottleneck, as it is not sorting. second one is benefit to performance.

Yes, although this isn't a bottleneck for decoder, the encoder does extra work to avoid encoding redundant Cadence type information and it is a tradeoff for encoding less data.

how this can be possible? I don't believe JSON-CDC (encoder) can be non-deterministic, can you explain a bit on that?

I agree with @turbolent's comment last year in issue #2165, that "The JSON encoding is not deterministic".

The JSON-CDC specification doesn't mention "sort". That's an indication that it doesn't fully specify how to encode deterministically (e.g. how encoders must sort dictionaries).

The JSON-CDC encoder doesn't sort dictionary elements or composite fields. For their ordering, it relies on an implementation detail of Cadence (outside the encoder).

For these reasons, we cannot claim JSON-CDC encoding is deterministic.
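A minimal sketch of the problem (hypothetical encoder, not the actual JSON-CDC codec): an encoder that writes dictionary entries in whatever order the caller supplies produces different bytes for logically equal dictionaries.

```go
package main

import "fmt"

// pair is a dictionary entry in the order the encoder received it.
type pair struct{ k, v string }

// encodeInInsertionOrder writes entries as a JSON object in received
// order, like an encoder that relies on its caller's ordering instead
// of sorting entries itself.
func encodeInInsertionOrder(entries []pair) string {
	s := "{"
	for i, e := range entries {
		if i > 0 {
			s += ","
		}
		s += fmt.Sprintf("%q:%q", e.k, e.v)
	}
	return s + "}"
}

func main() {
	// The same logical dictionary, built in two different orders.
	x := encodeInInsertionOrder([]pair{{"a", "1"}, {"b", "2"}})
	y := encodeInInsertionOrder([]pair{{"b", "2"}, {"a", "1"}})
	fmt.Println(x == y) // equal dictionaries, different encodings
}
```

Determinism here depends entirely on the caller handing over entries in a fixed order, which is exactly the kind of outside-the-encoder implementation detail described above.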

fxamacker commented 1 year ago

Closing this.

Future changes to codec should be fuzzed before being merged.