GoogleChrome / lighthouse

Automated auditing, performance metrics, and best practices for the web.
https://developer.chrome.com/docs/lighthouse/overview/
Apache License 2.0

byte-efficiency/uses-ideal-universal-image-format #7721


t-kelly commented 5 years ago

Feature request summary

It's pretty common for websites to serve an image in a non-ideal universal image format. For example, a PNG with no alpha channel (no transparency) could arguably be served as an optimized JPEG for considerable byte savings.

As @tomayac pointed out:

There might still be reasons where you would want the pixel fidelity of PNG without the need for transparency, but where PNG beats SVG and where only a fixed size is required. Maps like this are an example (running it through SVGO is lossy, check the Andaman and Nicobar Islands).

This could be considered an edge case, and it's up to the developer to judge whether or not the audit makes sense for their particular application.

We have a working prototype of this audit running in our own Lighthouse service, using Sharp's hasAlpha metadata to detect the use of transparency. However, we're not sure how much this conflicts with the Lighthouse contribution guideline of not having "a significant impact on our runtime performance or bundle size".
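For reference, a minimal sketch of the kind of check described above, assuming sharp is available; the function name is illustrative, not the prototype's actual code:

const sharp = require('sharp');

// Sketch: flag PNGs whose metadata reports no alpha channel as
// candidates for re-encoding as JPEG. `isJpegCandidate` is a
// hypothetical name for illustration only.
async function isJpegCandidate(imageBuffer) {
  const metadata = await sharp(imageBuffer).metadata();
  return metadata.format === 'png' && metadata.hasAlpha === false;
}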

What is the motivation or use case for changing this?

Shopify aims to automate image processing as much as possible for merchants. Some of our merchants upload PNGs without realizing the impact on performance. It would be great if we could detect this situation and also get a byte-savings estimate.

How is this beneficial to Lighthouse?

We're seeing savings estimates in the tens of megabytes for some of our stores that include massive PNGs. I'm sure this situation isn't unique to Shopify.

patrickhulce commented 5 years ago

Thanks for filing, @t-kelly. I agree!

A related issue (much less well articulated, from pre-template days) that tracks this category of effort is https://github.com/GoogleChrome/lighthouse/issues/4334 :)

Unfortunately we wouldn't be able to use sharp in core since we must be able to run our audits in multiple environments where native node modules aren't available. We would need support in Chromium over the protocol in order to make this happen.

haeky commented 5 years ago

Unfortunately we wouldn't be able to use sharp in core since we must be able to run our audits in multiple environments where native node modules aren't available.

Even though sharp is not supported, I'd like to add another tangent to this feature request.

We've noticed that some of our merchants at Shopify use GIFs that contain only one frame. As of the newest version of sharp, we can now detect how many pages there are for a GIF (or any other multi-page input); see the sketch after the metadata example below.

Example: https://media.giphy.com/media/13gvXfEVlxQjDO/giphy.gif

{ 
  format: 'gif',
  width: 500,
  height: 36000,
  space: 'srgb',
  channels: 4,
  depth: 'uchar',
  isProgressive: false,
  pageHeight: 500,
  hasProfile: false,
  hasAlpha: true 
}
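
A hedged sketch of that single-frame check, assuming a sharp version recent enough for metadata to include a pages count (for the example above, height / pageHeight gives the same answer); the function name is hypothetical:

const sharp = require('sharp');

// Sketch: detect GIFs with a single frame, which could be served as a
// static image format instead. Assumes `metadata.pages` exists in the
// sharp version used; otherwise fall back to the height/pageHeight
// ratio visible in the metadata example above.
async function isSingleFrameGif(imageBuffer) {
  const metadata = await sharp(imageBuffer).metadata();
  if (metadata.format !== 'gif') return false;
  const frames = metadata.pages || metadata.height / metadata.pageHeight;
  return frames === 1;
}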
paulirish commented 4 years ago

We like this idea. We've gone down this direction before but found that fetching the full response payload of images is too time-consuming, so that's the big issue. A secondary issue is taking on a dependency like Sharp; we'd prefer not to do that, but that's somewhat moot.

We discussed this a bit, and our preferred solution would be to improve things at the protocol layer. We already have Audits.getEncodedResponse, which we implemented and use in Lighthouse.
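For context, a sketch of calling that existing protocol method; the Puppeteer-style client.send session and the surrounding function are assumptions, and requestId would come from a Network.responseReceived event:

// Sketch: size-check an image over the protocol with the existing
// Audits.getEncodedResponse method. `client` is assumed to be a CDP
// session (e.g. Puppeteer's CDPSession); `estimateJpegSavings` is a
// hypothetical helper name.
async function estimateJpegSavings(client, requestId) {
  const {originalSize, encodedSize} = await client.send(
    'Audits.getEncodedResponse',
    {requestId, encoding: 'jpeg', quality: 0.8, sizeOnly: true}
  );
  return originalSize - encodedSize;
}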

We are thinking that we either add a generalized getByteRangeOfResourceContent method or implement a very specific doesResponseHaveAlpha method.
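To illustrate why a byte-range method could suffice for the alpha case: a PNG stores its color type at a fixed offset in the IHDR chunk, so only the first few dozen bytes of the response are needed. A rough sketch (it ignores palette PNGs, which can still carry transparency via a tRNS chunk further into the file):

// Sketch: decide whether a PNG declares an alpha channel from its first
// bytes alone. After the 8-byte signature, the IHDR chunk layout is
// length(4) + type(4) + width(4) + height(4) + bit depth(1), putting the
// color-type byte at offset 25. Color types 4 (grayscale + alpha) and
// 6 (truecolor + alpha) have an alpha channel; palette PNGs (type 3)
// may still be transparent via a later tRNS chunk, which this ignores.
function pngDeclaresAlpha(firstBytes) {
  return firstBytes[25] === 4 || firstBytes[25] === 6;
}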