[Open] csuhta opened this issue 7 years ago
+1... It feels to me like guetzli could do a better job if it had a larger source image to work with. If you have to resize before Guetzli compression, there's less data to work with and the result isn't as good.
sounds like a reasonable approach to me
the question is: if you take a 10x10 JPG, resize it to a 5x5 TIFF/PNG, and then compress that with guetzli,
or put a 10x10 JPG into guetzli and get out a 5x5 JPG,
can guetzli actually do a better job in the second case?
or, as a second variant of that question: if you use a TIFF/PNG as input and let guetzli work on the higher data density while outputting at a lower density, can guetzli make any use of that?
@monouser7dig the first and second case should give the same result, no? TIFF/PNG is lossless, so whether there is a PNG step in between, or guetzli makes a bitmap or some other lossless form to resize, doesn't matter. The only issue is involving another tool (for the TIFF/PNG) and the extra time to convert and re-convert everything.
This would be a good feature considering you often have to resize images, e.g. if you're uploading a photo to your website you're not likely to store, say, a 10 MB+ original.
it does matter, because if it's fed a higher-resolution image it might be able to do better optimisations
+1
Web authors or content-management systems often need to export multiple sizes of an image for different UI or devices.
Based on the documentation and some quick tests I did, it seems like the ideal workflow involving Guetzli would be:

1. Start from the full-resolution source image.
2. Resize it to each target size with another tool, exporting lossless PNGs.
3. Run Guetzli on each PNG to produce the final JPEGs.

The images made in that middle step are essentially discarded overhead. You also need another library to do the resizing, and you have to take care to preserve PNG quality along the way.
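As a minimal sketch, that workflow might look like the dry-run script below. The `photo.png` filename, the `make_cmds` helper, and the width list are made up for illustration; it assumes ImageMagick's `convert` and the `guetzli` binary are on PATH, and it only prints the commands instead of running them:

```shell
#!/bin/sh
# Dry-run sketch of the resize-then-Guetzli workflow discussed above.
# Assumes ImageMagick's `convert` and the `guetzli` CLI are installed;
# commands are printed, not executed.

SRC=photo.png   # hypothetical full-resolution source

make_cmds() {
  # $1 = target width in pixels
  # Middle step: resize to a lossless PNG intermediate.
  printf 'convert %s -resize %sx photo-%s.png\n' "$SRC" "$1" "$1"
  # Final step: compress the intermediate PNG with Guetzli.
  printf 'guetzli --quality 90 photo-%s.png photo-%s.jpg\n' "$1" "$1"
}

for W in 320 640 1280; do
  make_cmds "$W"
done
```

Every intermediate PNG here exists only to hand lossless pixels to Guetzli, which is exactly the overhead the comment above is pointing at.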
Is a feature that would resize the source image while processing in scope for this CLI?
A big potential reason to avoid this is that people would start asking for even more this-could-be-ImageMagick features.