onflow/atree

Atree provides scalable arrays and scalable ordered maps.
https://onflow.org
Apache License 2.0

Upgrade or replace BLAKE3 package which is returning nonstandard BlockSize() #242

Closed · fxamacker closed this issue 2 years ago

fxamacker commented 2 years ago

Issue To Be Solved

IMPORTANT: Atree doesn't call blake3.BlockSize() and doesn't use BLAKE3 with HMAC, PBKDF2, etc., but we should evaluate replacing zeebo/blake3 with lukechampine/blake3.

zeebo/blake3 v0.2.1 (and all earlier versions) returns 8192 from BlockSize() (in api.go). However, the BLAKE3 specification defines the block size as 64 bytes, not 8192.

From api.go in zeebo/blake3 (v0.2.1 and earlier):

// BlockSize implements part of the hash.Hash interface. It returns the most
// natural size to write to the Hasher.
func (h *Hasher) BlockSize() int {
    // TODO: is there a downside to picking this large size?
    return 8192
}
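
For illustration, here is a minimal Go sketch (not part of atree; it assumes github.com/zeebo/blake3 is on the module path) that checks whether the hash.Hash returned by blake3.New reports the 64-byte block size required by the BLAKE3 specification. With v0.2.1 and earlier it reports 8192 instead.

package main

import (
    "fmt"

    "github.com/zeebo/blake3"
)

func main() {
    // blake3.New returns a *Hasher that implements hash.Hash.
    h := blake3.New()

    // The BLAKE3 specification defines a 64-byte block size; zeebo/blake3
    // v0.2.1 and earlier return 8192 here instead.
    if bs := h.BlockSize(); bs != 64 {
        fmt.Printf("nonstandard BlockSize: got %d, want 64\n", bs)
        return
    }
    fmt.Println("BlockSize matches the BLAKE3 specification (64 bytes)")
}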

In zeebo/blake3 v0.2.2 (api.go, Jan 27, 2022), BlockSize() correctly returns 64. Note that BlockSize() isn't called by zeebo/blake3 itself, and none of the other hardcoded occurrences of 8192 in the code were changed to 64.

// BlockSize implements part of the hash.Hash interface. It returns the most
// natural size to write to the Hasher.
func (h *Hasher) BlockSize() int {
    return 64
}

BLAKE3 Package Selection

Suggested Solution

Two possible solutions:

1. Upgrade zeebo/blake3 to v0.2.2 or later, which returns the standard 64-byte block size.
2. Replace zeebo/blake3 with lukechampine/blake3.

Some reasons make me lean toward replacing zeebo/blake3 with lukechampine/blake3. In particular, the TODO in the code above asks whether there is a downside to picking the large block size; the default answer should be "Yes, there are probably unanticipated downsides" to deviating from a cryptographic standard, as the sketch below illustrates.
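
As a hypothetical illustration (atree does not use BLAKE3 with HMAC; this is only a sketch assuming github.com/zeebo/blake3): Go's crypto/hmac derives its inner and outer pads from BlockSize(), so two BLAKE3 implementations that disagree on BlockSize() would compute different MACs for the same key and message.

package main

import (
    "crypto/hmac"
    "fmt"
    "hash"

    "github.com/zeebo/blake3"
)

func main() {
    key := []byte("example key")
    msg := []byte("example message")

    // crypto/hmac pads the key to BlockSize() bytes and builds
    // BlockSize()-sized inner/outer pads, so a nonstandard BlockSize()
    // changes the MAC value and breaks interoperability.
    mac := hmac.New(func() hash.Hash { return blake3.New() }, key)
    mac.Write(msg)
    fmt.Printf("HMAC-BLAKE3: %x\n", mac.Sum(nil))
}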

fxamacker commented 2 years ago

Unfortunately, the alternative to zeebo/blake3 is much slower for short inputs and is hosted on a personal domain, so we'll just upgrade to the latest zeebo/blake3 instead of switching.

Also, PR #248, which tests 131107 digests for compatibility with BLAKE3, reduces risk.
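
For context, a cross-implementation compatibility check along these lines might look like the following minimal sketch (hypothetical, not the actual PR #248 test code; it assumes both github.com/zeebo/blake3 and lukechampine.com/blake3 are available):

package main

import (
    "bytes"
    "fmt"

    zeebo "github.com/zeebo/blake3"
    lukechampine "lukechampine.com/blake3"
)

func main() {
    // Compare 256-bit digests from both implementations over inputs whose
    // lengths straddle the 64-byte block size and the 8192-byte value.
    for _, n := range []int{0, 1, 63, 64, 65, 1024, 8191, 8192, 8193} {
        msg := bytes.Repeat([]byte{0xAB}, n)
        a := zeebo.Sum256(msg)
        b := lukechampine.Sum256(msg)
        if a != b {
            fmt.Printf("digest mismatch at input length %d\n", n)
            return
        }
    }
    fmt.Println("digests match for all tested input lengths")
}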