Closed — DavidBruant closed this issue 7 years ago
There are basically three approaches:

1. Ship the full set of polyfills to every browser.
2. Perform client-side feature detection and load only the polyfills that are missing.
3. Pick the polyfills on the server, based on the User-Agent header.
I don't think we're ever going to achieve wide agreement on which of these is best, but it does make sense to lean one way or the other in different circumstances (e.g. the size of the polyfill, or the percentage of browsers that need it). I think we could tweak the advice to be more along those lines.
@DavidBruant would language like this help?
> Shipping unnecessary bytes to users who don't need them wastes their time and data allowance, but performing feature detection first can delay the loading of necessary polyfills. It is generally better to optimise for modern browsers, so performing client-side feature detection and waiting for an extra script to load on older browsers is usually a good trade-off. However, if the full set of polyfills you might need in the worst case constitutes a negligible overhead, you could choose to serve the full set to all browsers.
sounds good to me
Wording added in #26, but the example is to be discussed at the MIT F2F.
I would argue that feature detection for the sake of not sending unnecessary bytes leads to counter-productive results. Indeed, even in the best case, sending the correct polyfills can only go like this:

1. First round trip: the browser requests the page and receives the feature-detection script.
2. The script detects which features are missing and requests the corresponding polyfills.
3. Second round trip: the polyfills arrive, and only then can the code that depends on them run.
The website code can be sent in the middle, but it has to wait until the second round trip is over before being parsed and executed.
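That two-round-trip flow can be sketched in client-side code. This is a minimal illustration, not a real implementation: the feature names checked and the polyfill endpoint URL are hypothetical examples.

```javascript
// Minimal sketch of client-side feature detection followed by a second
// request for only the missing polyfills. The endpoint URL is hypothetical.
const missing = [];
if (typeof Promise === 'undefined') missing.push('Promise');
if (!Array.prototype.includes) missing.push('Array.prototype.includes');
if (typeof Object.assign !== 'function') missing.push('Object.assign');

// The second round trip happens only when something is missing; the
// site's own code must wait for it before running.
const polyfillUrl = missing.length
  ? 'https://example.com/polyfills.js?features=' + missing.join(',')
  : null;
```

On a modern browser `polyfillUrl` stays `null` and no extra request is made; on an older browser the page pays for a full extra round trip before its own code can execute.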
I believe the cost of an extra network round trip is always significantly greater, performance-wise, than the cost of a few unnecessary bytes.
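A back-of-envelope calculation illustrates the claim; all the numbers here are assumptions chosen for the example, not measurements.

```javascript
// Illustrative, assumed numbers: an extra mobile round trip versus the
// time to download a modest amount of unnecessary polyfill bytes.
const rttMs = 150;              // assumed extra round-trip latency
const bandwidthKBps = 500;      // assumed effective throughput (~4 Mbit/s)
const unneededPolyfillKB = 20;  // assumed gzipped size of unused polyfills

const extraDownloadMs = (unneededPolyfillKB * 1000) / bandwidthKBps;
// Under these assumptions the wasted download costs 40 ms, well under
// the 150 ms an extra round trip would add.
```

With a slower link or a much larger polyfill bundle the balance shifts, which is why the advice above leans on bundle size and browser share rather than a single rule.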
The User-Agent-based technique does not suffer from this problem.
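A server-side, User-Agent-based selection can be sketched like this; the matching rules, version cutoffs, and bundle file names are all hypothetical, stand-ins for whatever browser support matrix a real site maintains.

```javascript
// Hypothetical mapping from a User-Agent header to a polyfill bundle,
// chosen on the server so the right bytes ship in the first response
// and no second round trip is needed.
function polyfillBundleFor(userAgent) {
  if (/MSIE|Trident/.test(userAgent)) {
    return 'polyfills-full.js';   // legacy IE: ship everything
  }
  if (/Chrome\/4[0-9]\./.test(userAgent)) {
    return 'polyfills-es2015.js'; // hypothetical cutoff for older Chrome
  }
  return null;                    // assumed modern: no polyfills needed
}
```

This is the approach taken by services such as polyfill.io; its cost is maintaining the User-Agent matching rules as browsers change.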