Hmm, in principle yes, quantization table entries can be as large as 2^16 − 1, but Lepton does not support non-8-bit-depth JPEGs, and citing the JPEG standard, sec. B.2.4.1 "Quantization table-specification syntax": "An 8-bit DCT-based process shall not use a 16-bit precision quantization table." So in principle any standard-conforming JPEG that Lepton can code should have an 8-bit quantization table. In Dropbox Lepton, however, dequantization is always done with 32-bit accuracy, and support for 16-bit tables was present there.
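For illustration, a minimal sketch of that 32-bit dequantization (names are made up, not the actual Lepton code): widening both operands before the multiply means even a 16-bit table entry cannot overflow.

```rust
// Minimal sketch, not the actual Lepton code: dequantize in 32-bit
// arithmetic so that even a 16-bit quantization entry times a large
// coefficient cannot overflow.
fn dequantize(coefficients: &[i16; 64], quant_table: &[u16; 64]) -> [i32; 64] {
    let mut out = [0i32; 64];
    for i in 0..64 {
        // widen both operands to i32 first; |i16| * u16 always fits in i32
        out[i] = i32::from(coefficients[i]) * i32::from(quant_table[i]);
    }
    out
}
```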
@mcroomp, you need to update the Cargo files; it's failing in CI.
Yeah... the problem is that the corpus we've already compressed with the C++ version might include weird, non-conforming JPEGs. I'm thinking of adding an option to reject new JPEGs that overflow 16 bits on dequantization, since they create a headache for future compatibility.
Unfortunately, Lepton doesn't check that coefficient × quantization-table products fit in 16 bits. This can cause all kinds of weird behavior, so we need to make sure that this behavior doesn't change.
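For context, a hedged sketch of the kind of rejection check being proposed; the function name and error handling here are hypothetical, not Lepton's actual API:

```rust
// Hypothetical guard, not Lepton's actual API: reject any coefficient whose
// dequantized value would not fit in 16 bits, since behavior past that
// point is effectively undefined.
fn check_dequantize_fits_16_bits(coef: i16, q: u16) -> Result<i16, String> {
    // do the multiply in 32 bits, where it cannot overflow
    let product = i32::from(coef) * i32::from(q);
    // i16 holds -32768..=32767; anything outside has overflowed 16 bits
    i16::try_from(product)
        .map_err(|_| format!("dequantized value {product} overflows 16 bits"))
}
```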
Tests found a regression in #67: count7x7 is too small by one, which would cause an assertion failure when decoding some images.
Explicitly import rand_chacha instead of using the standard library for generating random numbers. Since the standard library can change implementations, we want the RNG pinned so that our hashes don't change.
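Roughly what that looks like (a sketch assuming rand 0.8 and rand_chacha 0.3, not the actual Lepton code): a fixed algorithm seeded with a fixed value yields the same stream on every build, so any hashes derived from it stay stable.

```rust
// Sketch of pinning the RNG (assumes rand = "0.8" and rand_chacha = "0.3"
// in Cargo.toml; not the actual Lepton code).
use rand::{Rng, SeedableRng};
use rand_chacha::ChaCha20Rng;

fn main() {
    // ChaCha20 with a fixed seed produces a deterministic stream
    let mut rng = ChaCha20Rng::seed_from_u64(42);
    let x: u32 = rng.gen();
    // always prints the same value, regardless of how default RNGs evolve
    println!("{x}");
}
```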