GoogleChrome / lighthouse

Automated auditing, performance metrics, and best practices for the web.
https://developer.chrome.com/docs/lighthouse/overview/
Apache License 2.0

Resources with ETag caching are reported with full size by 'uses-long-cache-ttl' #9118

Open joonaojapalo opened 5 years ago

joonaojapalo commented 5 years ago

Provide the steps to reproduce

  1. Run LH on https://react-demo.docker-box.inpref.com/

What is the current behavior?

LH reports the resource https://d2wzl9lnvjz3bh.cloudfront.net/frosmo.easy.js with Cache TTL = None and Size (KB) = 54 KB.

What is the expected behavior?

The resource (https://d2wzl9lnvjz3bh.cloudfront.net/frosmo.easy.js) is cached using the ETag mechanism. For the user of the report, it would be better to report the Size (KB) of the resource as the size of the revalidation response (effectively 0 KB, since a warm load returns 304 Not Modified with no body) to reflect the actual bandwidth usage more accurately.
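For illustration (an editor's sketch, not part of the issue or of Lighthouse): a warm load of an ETag-cached resource revalidates with If-None-Match and receives a 304 Not Modified with an empty body, which is why the full 54 KB overstates the repeat-view cost. The URL is the one from the report above; the script assumes Node 18+ for global fetch.

```ts
// Compare a cold load (full body) with a warm revalidation (304, no body).
async function compareColdAndWarm(url: string): Promise<void> {
  // Cold load: the full body is transferred and an ETag is returned.
  const cold = await fetch(url);
  const body = await cold.arrayBuffer();
  const etag = cold.headers.get("etag");
  console.log(`cold: ${cold.status}, ${body.byteLength} bytes, ETag=${etag}`);

  if (!etag) return;

  // Warm load: the browser revalidates with If-None-Match and gets a 304.
  const warm = await fetch(url, { headers: { "If-None-Match": etag } });
  const warmBody = await warm.arrayBuffer();
  console.log(`warm: ${warm.status}, ${warmBody.byteLength} bytes`); // expect 304, 0 bytes
}

compareColdAndWarm("https://d2wzl9lnvjz3bh.cloudfront.net/frosmo.easy.js").catch(console.error);
```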

Environment Information

patrickhulce commented 5 years ago

Thanks for filing @joonaojapalo! Great point, the size here is overstated for ETag resources 👍

patrickhulce commented 5 years ago

We'll defer this update until we add a "warm load" pass (#585) to Lighthouse, where it more appropriately belongs. An intermediate plan might be to surface the round-trip request cost in addition to the transfer size.
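As a rough illustration of what "surface the round-trip request cost" could mean for an ETag-revalidated entry: the warm-load cost is roughly one round trip (the 304 exchange) rather than the time to re-download the full body. The names and numbers below are assumptions for the sketch, not Lighthouse internals.

```ts
// Hypothetical model: warm-load cost of a resource given its caching strategy.
interface ResourceRecord {
  transferSizeBytes: number;
  usesEtagOnly: boolean; // revalidated on each warm load, no max-age
}

function estimateWarmLoadMs(
  resource: ResourceRecord,
  rttMs: number,
  throughputBytesPerMs: number
): number {
  if (resource.usesEtagOnly) {
    // Conditional request returns 304 Not Modified: pay the RTT, not the bytes.
    return rttMs;
  }
  // No caching at all: pay the RTT plus the full transfer.
  return rttMs + resource.transferSizeBytes / throughputBytesPerMs;
}

// Example: the 54 KB script from the report, 150 ms RTT, ~1.6 Mbps throughput.
console.log(estimateWarmLoadMs({ transferSizeBytes: 54 * 1024, usesEtagOnly: true }, 150, 200));  // 150
console.log(estimateWarmLoadMs({ transferSizeBytes: 54 * 1024, usesEtagOnly: false }, 150, 200)); // ~426
```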

paulirish commented 5 years ago

We definitely want to communicate that using ETags for any render-blocking/critical resources is a bad pattern (as the RTT will be very user-visible during a warm load). Those resources should definitely be more on the immutable side of caching.
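A sketch of that "immutable side of caching" pattern, assuming fingerprinted asset filenames and an Express server (both are illustrative assumptions, not anything from the Lighthouse repo): long-lived, immutable caching lets warm loads skip the revalidation round trip entirely.

```ts
import express from "express";

const app = express();

// Fingerprinted assets (e.g. app.3f2a1c.js) never change, so they can be
// cached for a year and marked immutable: no If-None-Match round trip.
app.use(
  "/static",
  express.static("dist/static", {
    immutable: true,
    maxAge: "1y", // Cache-Control: public, max-age=31536000, immutable
  })
);

// HTML entry points keep no-cache so new deployments are picked up promptly.
app.get("/", (_req, res) => {
  res.set("Cache-Control", "no-cache");
  res.sendFile("index.html", { root: "dist" });
});

app.listen(3000);
```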

ddotlic commented 3 years ago

We definitely want to communicate that using ETags for any render-blocking/critical resources is a bad pattern

@paulirish Could you please elaborate a bit on this? It's understood that even a cheap HTTP request is still much slower than no request, but the tool's reporting in this instance is somewhat misleading.

We have files hosted in Amazon S3 and served over CloudFront. They are edge-cached in Amazon's CDN and served with an ETag. One of our clients looks exclusively at the Lighthouse/PageSpeed tool and doesn't care about our explanations. The tool insists that we use an Expires header or similar, giving us a fairly bad score.

Since the server(s) in question aren't completely under our control, I am not yet sure what can be done. If you have any suggestions (not specifically about Amazon's infrastructure, but in general), we will be grateful.

Thanks for listening.
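One concrete option for the S3/CloudFront setup described above (illustrative only; the bucket name and region are hypothetical, and this is Amazon-specific rather than the general advice the commenter asked for): rewrite the objects' metadata so they carry a long-lived Cache-Control alongside the ETag, here using the AWS SDK for JavaScript v3.

```ts
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function addLongLivedCaching(bucket: string, key: string): Promise<void> {
  // Copy the object onto itself with MetadataDirective REPLACE so S3 rewrites
  // its metadata, attaching a long-lived Cache-Control that CloudFront will
  // pass through to browsers.
  await s3.send(
    new CopyObjectCommand({
      Bucket: bucket,
      Key: key,
      CopySource: `${bucket}/${key}`,
      MetadataDirective: "REPLACE",
      CacheControl: "public, max-age=31536000, immutable",
      // Content-Type must be re-specified when metadata is replaced.
      ContentType: "application/javascript",
    })
  );
}

addLongLivedCaching("my-assets-bucket", "frosmo.easy.js").catch(console.error);
```

Note that copies already cached at CloudFront edges keep their old headers until they expire or are invalidated, so an invalidation may also be needed after the metadata change.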

patrickhulce commented 3 years ago

Thanks for the ping @ddotlic!

Proposal for the team:

patrickhulce commented 3 years ago

We'll proceed with the above approach 👍

ref #11313