abhibaruah opened this issue 1 year ago
Hello all,

We have noticed that while writing PNG images using libpng with the default compression settings, the performance is really slow. For comparison, using libtiff to write TIFF files with the default Deflate compression takes almost half the time to write the same image data, while generating a file of almost the same size.
Try turning off the filtering.
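A minimal sketch of what that looks like with the libpng write API (assuming an already-created `png_ptr` from an ordinary write setup; `png_set_filter` and `PNG_FILTER_NONE` are the stock libpng names):

```c
#include <png.h>

/* Restrict the encoder to the NONE filter so each row is deflated
 * as-is, skipping the per-row filter heuristics entirely. */
static void disable_filtering(png_structp png_ptr)
{
    png_set_filter(png_ptr, 0 /* base filter method */, PNG_FILTER_NONE);
}
```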
@ctruta: no response in a month. I'm guessing they realized that there is a lot of rope to play with here; it's not a bug. It might be worth having a "Discussions" tab (e.g. see https://github.com/NightscoutFoundation/xDrip); it's then possible to pretty much bounce questions like this, feature requests, etc. to the Discussions tab.
Dead bug.
I did some experiments, and reducing the compression level gives the most predictable performance / file-size trade-off: https://libspng.org/docs/encode/#performance (these are the docs for my PNG library, but the encoder defaults are all the same and it applies to most PNG libraries).
Using zlib-ng instead of zlib also helps.
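For example, a sketch of dropping the level via libpng (the call is standard; level 3 is just an illustrative point on the trade-off curve linked above, not a universal recommendation):

```c
#include <png.h>

/* Trade a small amount of file size for a large encode-speed win
 * by lowering the deflate effort from the default (6). */
static void use_faster_compression(png_structp png_ptr)
{
    png_set_compression_level(png_ptr, 3); /* 1 = fastest, 9 = smallest */
}
```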
@randy408 I found much the same thing for compression level. Mark specifically mentions Z_RLE as useful for PNG, and at least one other non-PNG format claims compression better than PNG with just RLE (no filtering, IIRC). Results of filtering are highly dependent on the test set. CGI graphics arts images do really well with SUB...
My results (reading this from my own version of libpng 1.7 :-)

High speed: Z_RLE (as fast as Z_HUFFMAN_ONLY, but it can reduce the size a lot in a few cases).

IDAT and iCCP: Z_DEFAULT_STRATEGY; otherwise (iTXt, zTXt): Z_FILTERED.

Then set the level based on the strategy choice (a code sketch of the mapping follows this list):

Z_RLE and Z_HUFFMAN_ONLY: level := 1

Z_FIXED, Z_FILTERED and Z_DEFAULT_STRATEGY as follows:

high speed: level := 1
'low' compression: level := 3
'medium' compression: level := 6 (the zlib default)
'high', plus 'low memory' (on read) or 'high read speed': level := 9
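Expressed against the stock libpng/zlib API, that mapping looks roughly like this for IDAT (the `aim` enum and helper are hypothetical framing of my scheme, not library API; `png_set_compression_strategy`, `png_set_compression_level` and the `Z_*` strategies are real names):

```c
#include <png.h>
#include <zlib.h>

enum aim { AIM_HIGH_SPEED, AIM_LOW, AIM_MEDIUM, AIM_HIGH };

/* Map a caller-specified aim onto a zlib strategy + level for IDAT. */
static void set_idat_compression(png_structp png_ptr, enum aim a)
{
    if (a == AIM_HIGH_SPEED) {
        /* Z_RLE: as fast as Z_HUFFMAN_ONLY, sometimes much smaller. */
        png_set_compression_strategy(png_ptr, Z_RLE);
        png_set_compression_level(png_ptr, 1);
    } else {
        png_set_compression_strategy(png_ptr, Z_DEFAULT_STRATEGY);
        png_set_compression_level(png_ptr,
            a == AIM_LOW ? 3 : a == AIM_MEDIUM ? 6 : 9);
    }
}
```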
The 'windowBits' is set based on strategy/level/user request, plus a consideration of the size of the uncompressed data; it's rarely worth setting it to cover the whole data, except with Z_FILTERED at a level >= 4 (determined by experiment) and Z_FIXED (by logical argument).
Sometimes a lower windowBits increases compression :-)
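A sketch of the size-based part of that rule, assuming the writer knows the uncompressed IDAT byte count up front (`png_set_compression_window_bits` is the real libpng call; the loop is just the obvious smallest-power-of-two fit):

```c
#include <stddef.h>
#include <png.h>

/* Use the smallest zlib window (2^bits bytes) that still covers the
 * uncompressed data, so small images don't carry a full 32 KB window.
 * zlib treats windowBits 8 as 9, so start from 9. */
static void set_window_for_size(png_structp png_ptr, size_t data_size)
{
    int bits = 9;
    while (bits < 15 && ((size_t)1 << bits) < data_size)
        ++bits;
    png_set_compression_window_bits(png_ptr, bits);
}
```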
Choice of filter should be based on format × bit depth × width. In general I found NONE (i.e. no filtering) to be recommended if the actual number of bytes in the row (excluding the filter byte) is less than 256. Palette (all cases) and grayscale below 8 bits per pixel, as well as 16-bit gray+alpha, always ended up better with NONE.

I define the 'fast' filters as NONE+SUB+UP; these are fast in both encode and decode. NONE+SUB is particularly good for the reader because it avoids the need to access the previous row. Anyway, the filter selection I ended up with follows those rules (sketched below), except that if the implementation is not able to buffer (e.g. to handle writing very large images in low memory) the NONE filter is forced.
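In code, that selection compresses to roughly the following (the helper and its arguments are my own framing of the rules above; `png_set_filter` plus the `PNG_FILTER_*` and `PNG_COLOR_TYPE_*` names are stock libpng):

```c
#include <stddef.h>
#include <png.h>

/* Apply the row-size / colour-type heuristics described above,
 * falling back to NONE when the writer cannot buffer rows. */
static void choose_filters(png_structp png_ptr, int color_type,
                           int bit_depth, size_t row_bytes, int can_buffer)
{
    int filters = PNG_FILTER_NONE | PNG_FILTER_SUB | PNG_FILTER_UP;

    if (!can_buffer ||                      /* low-memory writer */
        row_bytes < 256 ||                  /* short rows */
        color_type == PNG_COLOR_TYPE_PALETTE ||
        (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8) ||
        (color_type == PNG_COLOR_TYPE_GRAY_ALPHA && bit_depth == 16))
        filters = PNG_FILTER_NONE;

    png_set_filter(png_ptr, 0, filters);
}
```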
Overall I'm firmly of the opinion that the app should specify the aim, such as fast write or fast read, and the implementation should work out the best way of achieving it, because the zlib settings are extensive and the result depends on all of the settings at once. There is too much rope and no general solution.
Getting a decent test set is particularly tricky. I have a test set of around 125,000 PNG files that I spidered off the web over 10 years ago; it's primarily smaller images (<256x256), so it's probably still representative of web page icons etc. Larger images are difficult; the image databases used for AI are primarily photographic and not really applicable to PNG.

My spidering showed that a lot of the very large images were graphics-arts productions, particularly posters. These have low noise and low colour count, whereas CG images (ray tracing) have high noise but often a fairly restricted gamut. Knowing the nature of the image being compressed is probably as good a guide to the settings as anything, but it's yet another variable!
@ctruta: this is a discussion, not an issue. Don't know how you want to deal with that.