jeffkaufman opened this issue 9 years ago
If we're feeling more ambitious (and some of Maks' image studies might provide the data we would need for this) we can look at scaling up the quality as we scale down the image, on the principle that there's a continuum of badness here (and that we'll tend to just do a lot of work that we throw away if we peg things at 85%).
This ties into the quality settings for responsive images — that's an untapped TODO on the responsive images filter that might provide big bandwidth wins in practice.
Yes, ideally we would say "this 200x200 image at quality N, displayed at 100x100, looks like what you would get if you resized it to 100x100 at quality M" and then use min(M, 85) for the output quality. But I don't think there's a nice way, in general, to find which (quality, DPI) tuples are equivalent.
A few years ago (2012?) people were talking about how you could sometimes get better looking images by increasing pixels while decreasing quality [1][2]. I vaguely remember Pat Meenan saying that if you do your experiments properly it's not actually an improvement, but I'm not finding that now. If we understood the tradeoffs around size vs quality level better it's possible that we could do some interesting things here.
[1] http://www.netvlies.nl/blog/design-interactie/retina-revolution [2] https://www.filamentgroup.com/lab/compressive-images.html
Yes, this is pretty much what the original reporter was doing on their site.
After talking to @huibaolin I no longer think "stop using input quality to determine output quality on resize" is right in all cases. Most of the resizing we currently do is by a small amount, like 200x200 to 195x195, and in those cases if the input is q=75 we don't want to be saying that at 195x195 we require q=85. Ideally here we would consult a table and determine that the correct output quality is, say, q=76 for 195x195 and q=82 for 150x150.
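A minimal sketch of the "consult a table" idea. The breakpoints below just reproduce the illustrative figures from this comment (200x200 q=75 → 195x195 q=76, 150x150 q=82) and are otherwise invented; real values would have to come from perceptual measurements like Maks' image studies.

```python
QUALITY_CAP = 85  # ImageRecompressionQuality-style upper bound

# (resize_ratio, quality_bump) pairs: how much quality to add to the
# input quality at a given output/input dimension ratio.  Hypothetical
# values anchored on the figures in the comment above.
BUMP_TABLE = [
    (1.000, 0),   # no resize: keep input quality
    (0.975, 1),   # 200x200 -> 195x195: q=75 -> q=76
    (0.750, 7),   # 200x200 -> 150x150: q=75 -> q=82
]

def output_quality(input_quality, resize_ratio):
    """Linearly interpolate a quality bump from BUMP_TABLE, capped."""
    table = sorted(BUMP_TABLE, reverse=True)  # ratios descending
    if resize_ratio >= table[0][0]:
        bump = table[0][1]
    else:
        bump = table[-1][1]  # extrapolate flat below the smallest ratio
        for (r_hi, b_hi), (r_lo, b_lo) in zip(table, table[1:]):
            if r_lo <= resize_ratio <= r_hi:
                t = (r_hi - resize_ratio) / (r_hi - r_lo)
                bump = b_hi + t * (b_lo - b_hi)
                break
    return min(round(input_quality + bump), QUALITY_CAP)
```

With these table entries, `output_quality(75, 0.975)` gives 76 and `output_quality(75, 0.75)` gives 82, matching the example above.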
With the current architecture of `image.cc`, making a quality decision based on resize ratio is pretty awkward.
After talking to @huibaolin, I don't think it makes sense to work on this right now. The optimization we interact badly with — serving oversized images at very low quality — was something some people advocated for performance, but it never really caught on and is still very rare. Fixing it would make our image resizing code more complex for very little benefit, and there are bigger things to work on for now.
Scenario:
The main difference between what the browser is doing and what pagespeed is doing is that on resize pagespeed is setting `output quality` to `min(ImageRecompressionQuality, input quality)`, while the browser is effectively setting `output quality` to ∞. To fix this, pagespeed should set `output quality` on resize to `ImageRecompressionQuality`, ignoring `input quality`, and then as usual only accept the resize if we made the image take fewer bytes.

Mailing list discussion prompting this: https://groups.google.com/forum/#!topic/mod-pagespeed-discuss/abtCwJ08gdY