alecazam opened 4 weeks ago
This isn't always a generic decompress where the decompressed size is totally unknown. This is decoding known rows of known size, so the memory is allocated ahead of time. The decompress call should hand down that buffer to fill in.
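For PNG image data the decompressed size really is known up front: each of the `height` scanlines is one filter byte plus the packed pixel bytes for the row. A minimal sketch of that computation (the helper name is mine, not lodepng's):

```cpp
#include <cstddef>

// Hypothetical helper: expected inflated size of PNG image data.
// Each scanline is 1 filter byte + ceil(width * bitsPerPixel / 8) bytes.
static size_t expectedIdatSize(size_t width, size_t height, size_t bitsPerPixel) {
    size_t bytesPerRow = (width * bitsPerPixel + 7) / 8; // round up to whole bytes
    return height * (1 + bytesPerRow);                   // +1 filter byte per row
}
```

For a 16x16 RGBA8 image that's 16 * (1 + 64) = 1040 bytes, which is exactly the size a caller could preallocate and hand to the callback.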
I couldn't get miniz to work with this api either.
This does everything I'd expect for custom_zlib, but is compiled out by LODEPNG_NO_COMPILE_ZLIB. So I guess I'll leave zlib in by not setting that, and still supply the custom_zlib impl.
```cpp
/*expected_size is expected output size, to avoid intermediate allocations. Set to 0 if not known. */
static unsigned zlib_decompress(unsigned char** out, size_t* outsize, size_t expected_size,
                                const unsigned char* in, size_t insize,
                                const LodePNGDecompressSettings* settings) {
  unsigned error;
  if(settings->custom_zlib) {
    ucvector v = ucvector_init(*out, *outsize);
    if(expected_size) {
      /*reserve the memory to avoid intermediate reallocations*/
      ucvector_resize(&v, expected_size);
      v.size = expected_size;
    }
    // this only happens on the iCCP block
    if(*outsize == 0 && expected_size == 0) {
      expected_size = 16 * 1024;
      ucvector_resize(&v, expected_size);
    }
    error = settings->custom_zlib(&v.data, &v.size, in, insize, settings);
    if(error) {
      /*the custom zlib is allowed to have its own error codes, however, we translate it to code 110*/
      error = 110;
      /*if there's a max output size, and the custom zlib returned error, then indicate that error instead*/
      if(settings->max_output_size && *outsize > settings->max_output_size) error = 109;
    }
    *out = v.data;
    *outsize = v.size;
  }
```
It fails on the iCCP block (lodepng passes 16K for the size, even though only 4506 bytes are decompressed), so my decompressor returns an error. Not sure why this doesn't fail with the default zlib_decompress.
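One likely reason the iCCP path errors out: the LodepngDecompressUsingMiniz wrapper below treats any decode that doesn't fill the whole buffer as MZ_DATA_ERROR, but for iCCP the 16K buffer is only a capacity, not an exact size. A hedged sketch of that distinction (the helper name is mine):

```cpp
#include <cstddef>

// Hypothetical check: IDAT rows must fill the buffer exactly, but an
// iCCP profile may legitimately decode to fewer bytes than the capacity.
static bool decodedSizeOk(size_t bytesDecoded, size_t capacity, bool exactRequired) {
    if (exactRequired) return bytesDecoded == capacity;
    return bytesDecoded > 0 && bytesDecoded <= capacity;
}
```

With a check like this, 4506 bytes decoded into a 16K iCCP buffer would pass, while a short IDAT decode would still be rejected.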
I did get libCompression working by advancing the src pointer by 2 and shrinking the size by 2. But then the iCCP block broke all of this, so I just went back to the default decompress in lodepng.
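The +2 skip works because libCompression's COMPRESSION_ZLIB codec consumes a raw deflate stream, while lodepng hands the callback a full zlib stream: a 2-byte CMF/FLG header up front and an Adler-32 trailer at the end. Before skipping, the header can be sanity-checked per RFC 1950; a sketch (the function name is my own):

```cpp
#include <cstddef>

// A zlib stream starts with CMF/FLG: the low nibble of CMF is 8 (deflate),
// and (CMF*256 + FLG) must be divisible by 31 (RFC 1950 check bits).
static bool isZlibHeader(const unsigned char* src, size_t size) {
    if (size < 2) return false;
    unsigned cmf = src[0], flg = src[1];
    return (cmf & 0x0f) == 8 && ((cmf * 256u + flg) % 31u) == 0;
}
```

Note that skipping the header also leaves the 4-byte Adler-32 trailer unverified, which may be why the raw-deflate path is faster but less strict.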
https://github.com/alecazam/kram/blob/main/libkram/kram/Kram.cpp
```cpp
// wrap miniz decompress, since it ignores crc checksum and is faster than default png
unsigned LodepngDecompressUsingMiniz(
    unsigned char** dstData, size_t* dstDataSize,
    const unsigned char* srcData, size_t srcDataSize,
    const LodePNGDecompressSettings* settings)
{
    // mz_ulong doesn't line up with size_t on Windows, but does on macOS
    KASSERT(*dstDataSize != 0);

#if USE_LIBCOMPRESSION
    // this returns 121 dstSize instead of 16448 on 126 srcSize.
    // Open src dir to see this. Have to advance by 2 to fix this.
    if (srcDataSize <= 2) {
        return MZ_DATA_ERROR;
    }
    char scratchBuffer[compression_decode_scratch_buffer_size(COMPRESSION_ZLIB)];
    size_t bytesDecoded = compression_decode_buffer(
        (uint8_t*)*dstData, *dstDataSize,
        (const uint8_t*)srcData + 2, srcDataSize - 2,
        scratchBuffer, // scratch buffer to speed up the decode
        COMPRESSION_ZLIB);

    int result = MZ_OK;
    if (bytesDecoded != *dstDataSize) {
        result = MZ_DATA_ERROR;
        *dstDataSize = 0;
    }
#else
    // This works.
    mz_ulong bytesDecoded = *dstDataSize;
    int result = mz_uncompress(*dstData, &bytesDecoded,
                               srcData, srcDataSize);
    if (result != MZ_OK || bytesDecoded != *dstDataSize) {
        *dstDataSize = 0;
    }
    else {
        *dstDataSize = bytesDecoded;
    }
#endif
    return result;
}
```
I'm trying to override the zlib decompression in lodepng by defining custom_zlib. I have this working using libCompression on macOS. But it needs the buffer allocated, and the size, to pass to the decompression. lodepng has expected_size, but doesn't pass this down to custom_zlib.
Even the default impls allocate a vector, and then call inflatev on it.
The callback receives a nullptr for dstData, and 0 for dstDataSize, so it can't call the decompression: I can't pass nullptr, and can't pass 0, to the libCompression code.
Here's where expected_size exists.
And here is where expected_size is dropped and outData isn't allocated. lodepng already knows what the size is supposed to be, so it could allocate the buffer and have this callback fill in each row of the png.
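What I'm asking for, roughly: pass the caller-allocated buffer and its expected size down, so the callback only fills it. A hypothetical callback shape (this is not lodepng's current API), with a stub decoder standing in for libCompression:

```cpp
#include <cstring>
#include <cstddef>

// Hypothetical callback shape: out is preallocated by the caller, and
// *outsize holds expected_size on entry. The callback fills the buffer
// and reports how many bytes it wrote. No allocation inside the callback.
typedef unsigned (*CustomZlibFn)(unsigned char* out, size_t* outsize,
                                 const unsigned char* in, size_t insize);

// Stub "decompressor" for illustration only: copies the input into the
// caller's buffer. A real impl would call compression_decode_buffer here.
static unsigned stubDecode(unsigned char* out, size_t* outsize,
                           const unsigned char* in, size_t insize) {
    if (insize > *outsize) return 110; // output doesn't fit the expected size
    memcpy(out, in, insize);
    *outsize = insize;
    return 0;
}
```

With a shape like this, zlib_decompress could resize its ucvector to expected_size once, invoke the callback, and never reallocate mid-decode.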