connorjclark opened 4 years ago
Open question: how much effort do we want to put into reducing this? We know that reducing the bundle size will shorten how long "Lighthouse is warming up ..." takes. Might it also reduce the number of failures in downloading the bundle?
> how much effort do we want to put into reducing this?
Historically, the biggest jumps in size have come from some external goal imposed on Lighthouse that we didn't have much control over, where the engineering investment needed to overcome the increase is prohibitively expensive or infeasible (I'm thinking of SEO audits with additional dependencies we can't fork and maintain ourselves, axe upgrades, etc.). The effort we tend to have more control over is cleaning up other areas where there are easier wins, like the ones you've identified. IMO, the biggest reason to follow up on these is following our own advice and not being hypocritical, but the situation is also pretty different from page load, and we're unlikely to ever be small enough to be proud of how small we are, so 🤷‍♂️. In summary, I wouldn't personally prioritize this much more than what you've already identified.
> Might it also reduce the number of failures in downloading the bundle?
Hypothesis: This error rate will not meaningfully change regardless of our bundle size reduction efforts.
The common failure modes we've seen are typically because of...
If that were our primary goal, a better error message there explaining these scenarios would be far more impactful :)
> 3p web is big JSON - could a more compact JSON format be used?
More compact JSON is a decent idea, though I'm not sure how much more that buys us after gzip. As far as actual data goes, this was already the most stripped-down version I could come up with from the complete dataset (the complete data is a few MB), so I think that may be the only win left there.
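For what it's worth, a quick way to sanity-check how much a more compact encoding actually saves after gzip (just a sketch; the file names here are made up):

```js
const fs = require('fs');
const zlib = require('zlib');

const gzSize = (buf) => zlib.gzipSync(buf, {level: 9}).length;

// current format vs. a hypothetical more compact re-encoding
const verbose = fs.readFileSync('data/entities.json');
const compact = fs.readFileSync('data/entities-compact.json');

console.log('raw:     ', verbose.length, '->', compact.length);
console.log('gzipped: ', gzSize(verbose), '->', gzSize(compact));
```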
Agree with all the above; easy wins without painful solutions are the best, and it seems like the libraries we depend on are usually the easy wins.
I'll add the caveat I mentioned in chat last night: in one corner of my house the wifi signal is really bad, and it's really annoying to start DevTools Lighthouse in Canary (where it seems like it's almost always re-downloading the remote module). So even if it's not failing, it's a really poor experience :)
> More compact JSON is a decent idea, though I'm not sure how much more that buys us after gzip.
It would be interesting to see if anyone has done bundle treemaps using gzipped size, though it gets error-bar-y because it would depend on where a particular module gets ordered into the bundle, the content that comes immediately before and after it, etc.
> It would be interesting to see if anyone has done bundle treemaps using gzipped size, though it gets error-bar-y because it would depend on where a particular module gets ordered into the bundle, the content that comes immediately before and after it, etc.
source-map-explorer does support this. But they seem to gzip each mapping separately and attribute the gzip result to a specific module, which seems like it'd lose out on any cross-module gzip compression. Impl: https://github.com/danvk/source-map-explorer/blob/f408a40a46a07f01b7b10c7a49ca19133bfe2c40/src/explore.ts#L273
> But they seem to gzip each mapping separately and attribute the gzip result to a specific module
Maybe it's not that significant a difference in practice? Or maybe it's enough of a pain to test that no one has bothered :)
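The cross-boundary effect is easy to demonstrate with a toy example, though (this is just a sketch of the phenomenon, not source-map-explorer's code):

```js
const zlib = require('zlib');

// Two "modules" with lots of shared substrings, like real bundled code.
const moduleA = Buffer.from('function add(a, b) { return a + b; }\n'.repeat(50));
const moduleB = Buffer.from('function sub(a, b) { return a - b; }\n'.repeat(50));

// Summing per-module gzip sizes misses redundancy shared across modules,
// so it comes out larger than gzipping the concatenated bundle.
const separate = zlib.gzipSync(moduleA).length + zlib.gzipSync(moduleB).length;
const together = zlib.gzipSync(Buffer.concat([moduleA, moduleB])).length;
console.log({separate, together}); // separate > together
```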
Oh, and this PR reduces the bundle by 20%: https://github.com/GoogleChrome/lighthouse/pull/9605
We can consider building axe ourselves to remove some big deps (like axios), much like how the CDT SourceMap.js build is done. We should wait for https://github.com/dequelabs/axe-core/issues/1935 to be fixed (slotted for the next axe release, 4.1), as that will provide an entry point without any polyfills, which will likely change how we'd do an ad-hoc custom build.
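Something like this rollup config might be a starting point once that lands (a sketch only; the entry path is a guess, not axe's actual layout):

```js
// rollup.config.js - hypothetical custom axe-core build
import {nodeResolve} from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  // assumed polyfill-free entry from dequelabs/axe-core#1935; path is a guess
  input: 'node_modules/axe-core/lib/index.js',
  output: {file: 'third-party/axe/axe.js', format: 'iife', name: 'axe'},
  // resolve and inline only what the entry actually pulls in; ideally the
  // custom entry avoids heavy deps like axios entirely
  plugins: [nodeResolve(), commonjs()],
};
```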
Some additional data and exploration in this doc: https://docs.google.com/document/d/15DyId8C9bGnk1ZpgaIekYG_RDEC_KYNQW3Tsp6vs11Q/edit#
I am working on trying to run TypeScript on the `front_end/lighthouse` files in DevTools, but I am running into several problems. Most prominently, Lighthouse puts several types on the global scope, such as `ReportRenderer` and `ReportUIFeatures` (see the declarations in https://source.chromium.org/chromium/chromium/src/+/master:third_party/devtools-frontend/src/front_end/lighthouse/LighthouseReporterTypes.js;l=28-30;drc=1e10f8546c3cd94438e2046711e302a9dc771394).
To make consumption easier for DevTools and potentially improve tree-shaking, could we consider changing the way Lighthouse exports symbols to use ES exports instead? E.g., rather than putting symbols on the global scope, make them `export class` definitions that DevTools can then import. This should be possible now that Lighthouse is bundled and several other infrastructure improvements have been made on the DevTools side.
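For illustration, the shape of the change might look something like this (a sketch, not the actual Lighthouse or DevTools source):

```js
// before: report classes attached to the global scope
//   self.ReportRenderer = ReportRenderer;
//   self.ReportUIFeatures = ReportUIFeatures;

// after: report.js uses ES exports that DevTools can import (and tree-shake)
export class ReportRenderer {
  /** @param {Document} document */
  constructor(document) {
    this._document = document;
  }
}
export class ReportUIFeatures {}

// DevTools side (hypothetical path):
//   import {ReportRenderer, ReportUIFeatures} from './lighthouse/report.js';
```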
Can you open a separate issue for this? It seems unrelated to bundle size.
We can look into whether it is possible to bundle CommonJS to ES modules. We use browserify at the moment, but could switch to something else if needed.
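As a sketch of what that could look like with esbuild instead of browserify (the entry and output paths here are guesses):

```js
// build-esm.js - bundle the CommonJS sources and emit an ES module
require('esbuild').build({
  entryPoints: ['lighthouse-core/index.js'], // hypothetical entry
  bundle: true,
  format: 'esm',
  outfile: 'dist/lighthouse.esm.js',
}).catch(() => process.exit(1));
```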
Done: #11628
In a little over a year, we've added ~150KB to the bundle.
we need a lighthouse treemap diff mode :)
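Even a crude diff over two treemap datasets would go a long way. Something like this sketch, assuming flat `{name, resourceBytes}` nodes (the field names are guesses at the treemap data shape):

```js
/**
 * Report per-module byte deltas between two bundles.
 * @param {Array<{name: string, resourceBytes: number}>} before
 * @param {Array<{name: string, resourceBytes: number}>} after
 */
function diffTreemap(before, after) {
  const prevSizes = new Map(before.map((node) => [node.name, node.resourceBytes]));
  const deltas = [];
  for (const node of after) {
    const prev = prevSizes.get(node.name) || 0;
    if (node.resourceBytes !== prev) {
      deltas.push({name: node.name, delta: node.resourceBytes - prev});
    }
    prevSizes.delete(node.name);
  }
  // anything left in prevSizes was removed entirely
  for (const [name, bytes] of prevSizes) deltas.push({name, delta: -bytes});
  return deltas.sort((a, b) => b.delta - a.delta);
}
```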
I used #9605 to create a treemap for the CDT bundle. https://misc-hoten.surge.sh/lh-9605/cdt-treemap.html
`axe-core/lib/core/utils/valid-langs.js` is pulled in by the hreflang audit, and it seems to just list every(?) combo of 3 letters: https://github.com/dequelabs/axe-core/blob/develop/lib/core/utils/valid-langs.js. Related: #9605, #10472
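If the file really does enumerate every three-letter combination, the whole table could in principle collapse to a shape check. A sketch (not axe's actual fix):

```js
// valid if it's a two- or three-letter primary language subtag
function isValidLang(subtag) {
  return /^[a-z]{2,3}$/i.test(subtag);
}

console.log(isValidLang('en'), isValidLang('abc'), isValidLang('abcd')); // true true false
```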