h5bp / html5-boilerplate

A professional front-end template for building fast, robust, and adaptable web apps or sites.
https://html5boilerplate.com/
MIT License

script loading solution #28

Closed paulirish closed 12 years ago

paulirish commented 13 years ago



This issue thread is now closed.

It was fun, but the conversations have moved elsewhere for now. Thanks

In appreciation of the funtimes we had, @rmurphey made us a happy word cloud of the thread.

Enjoy.





via labjs or require.

my "boilerplate" load.js file has LABjs inlined in it, and then uses it to load jquery, GA, and one site js file. if it helps, I have an integrated RequireJS+jQuery in one file: http://bit.ly/dAiqEG ;)
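The load.js pattern described above can be sketched roughly like this (a hypothetical example, assuming LABjs is inlined earlier in the file and exposes the global `$LAB`; the file paths are made up):

```javascript
// Hypothetical sketch of the load.js described above. Assumes LABjs is
// already inlined and exposes the global $LAB; file names are illustrative.
if (typeof $LAB !== 'undefined') {
  $LAB
    .script('js/libs/jquery.min.js').wait() // load jQuery first; wait() preserves execution order
    .script('js/ga.js')                     // analytics and site code then load in parallel
    .script('js/site.js');
} else {
  console.log('LABjs ($LAB) not present in this environment');
}
```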

also how does this play into the expectation of a build script that concatenates and minifies all scripts? should script loading be an option?

rkh commented 12 years ago

@getify having implemented a web server more than once: keep-alive does not affect concurrent requests in any way and only reduces the cost of subsequent requests. A split body with two subsequent requests with keep-alive is still more expensive than a single request. Having two concurrent requests for the two body parts will probably perform better, but keep in mind that the browser will only open a limited number of concurrent requests (depending on the browser and config, something around 5, I think). That's fine if all you do is load your three js files, but it is, as @jashkenas pointed out more than once, an issue if you have other assets, like images or css files.

getify commented 12 years ago

@jashkenas-

So if I'm a web developer and I've got a page with a bunch of JavaScripts, what should I do? Use LABjs, or concatenate my permanent scripts into one file, and my volatile scripts into another, and load both at the bottom of the body tag with <script defer="true">?

TL;DR: both

Firstly, a lot of sites on the web are assembled by CMS's, which means that having inline script blocks strewn throughout the page is common, and VERY difficult to solve maintenance-wise by just saying "move all that code into one file". So, I think the premise that most sites can get away without having any "inline code" to run after another external script loads and executes is dubious, at best.

Secondly, I've proven that defer acts differently with respect to DOMContentLoaded in various browsers. In some browsers, the scripts go before DOM-ready; in other browsers, they go after DOM-ready. If you have code in your scripts which relies on happening before or after DOM-ready, using defer can be a problem. It's a sensitive area with a lot of misunderstanding and confusion, so it quickly becomes "this is not a simple straightforward solution". It takes a lot more thought.
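A quick way to see where a given browser lands is to probe the ordering from inside a deferred script itself (a browser-only sketch; it degrades to a message elsewhere):

```javascript
// Browser-only sketch: save this as a file loaded via <script defer src="...">
// and compare when it executes relative to DOMContentLoaded across browsers.
if (typeof document !== 'undefined') {
  console.log('deferred script executing; readyState =', document.readyState);
  document.addEventListener('DOMContentLoaded', function () {
    // If this fires, the deferred script ran before DOM-ready in this browser.
    console.log('DOMContentLoaded fired after this deferred script');
  });
} else {
  console.log('no DOM here; run this in a browser');
}
```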

Thirdly, I think for a lot of sites, changing their markup to use $LAB.script() instead of <script> is a lot easier than explaining to them how to install some automated (or manual) build process on their server. Especially if that site is on shared hosting (most of the web is), and they don't really control much of their server, asking them to figure out build processes so that their code maintainability is not lost is... well... non-trivial.

Can these things be overcome? Yep. Of course they can. But they take a lot of work. In some cases (like the DOM-ready thing) they may take actually painstakingly adjusting your code. It takes a person with dedicated efforts and lots of expertise and passion in this area to sort it all out.

By contrast, they can get a "quick win" dropping in LABjs instead of the <script> tag. There's little that they have to think about (except document.write()). Most of the time, "it just works". And most of the time, they see an immediate speed increase in page load. For most sites, that's a big win.

So, to answer your question, I'd say, as I said before, do both... First drop in LABjs, see some immediate speed increases. Now, consider strongly the benefits of using a build process to move you from 15 files down to 2 files (1 file chunked in half). When you do that (if you do that, which as I said, most won't), you can ditch LABjs if you really want. But there's no real harm (it's small and caches well, even on mobile). It'll continue to load your two file chunks well, AND it'll do so without the quirks that defer might cause.
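The "15 files down to 2 chunks" build step can be sketched as a simple partition (hypothetical file names and sizes; a greedy split into two halves of roughly equal total size, which is the shape of the chunking argued for above):

```javascript
// Hypothetical build-step sketch: partition script files into two chunks
// of roughly equal total size so the chunks can be loaded in parallel.
function chunkFiles(files) {
  const sorted = [...files].sort((a, b) => b.size - a.size); // biggest first
  const chunks = [{ files: [], total: 0 }, { files: [], total: 0 }];
  for (const f of sorted) {
    // Greedy: always drop the next file into the lighter chunk.
    const target = chunks[0].total <= chunks[1].total ? chunks[0] : chunks[1];
    target.files.push(f.name);
    target.total += f.size;
  }
  return chunks;
}

const demo = chunkFiles([
  { name: 'jquery.js', size: 90 },
  { name: 'plugins.js', size: 40 },
  { name: 'app.js', size: 60 },
]);
console.log(demo.map((c) => c.total)); // totals split [90, 100] of the original 190
```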

Also, having LABjs already there makes it stupidly simple for you to do step 3, which is to start figuring out what code you can "lazy/on-demand load" later. You can't do that without a script loader. Having LABjs already there and familiar means you don't have to worry about how to load that on-demand script at all -- it's already figured out.

getify commented 12 years ago

@rkh--

I had it demonstrated to me (specifically in Apache, with toggling the Keep-Alive setting) how multiple parallel requests were affected (positively when Keep-Alive was there). I'm no expert in this area, so arguing the exact details of how it works or not is beyond me. I can say that the timing of request #2 was less than the timing of request #1 when Keep-Alive was there. How the browser and server did that, I can only make partially informed guesses at.

A split body with two subsequent requests with keep-alive is still more expensive than a single request.

I never argued that the second request is free. I argued that the second request is not as expensive as the first request. So, if we assume that at least one request must be made, having a second request in parallel is NOT the same thing as having two completely independent connections to the same server, in terms of overhead or time costs.

By way of estimate, it seemed like request #1 took X to service, and #2 in parallel with Keep-Alive present took about 0.7X. It was explained to me that the server was able to reuse some of the existing connection overhead in servicing the second request, thereby making it a little cheaper. With Keep-Alive turned off, the second request showed no such measurable decrease.
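As a back-of-envelope illustration of those numbers (the 0.7X figure is the single observation reported above, not a general law):

```javascript
// Rough arithmetic for the scenario described above; X is a hypothetical
// service time, and 0.7X is the observed cost of request #2 with Keep-Alive.
const X = 100;                                 // ms to service request #1
const secondWithKeepAlive = 0.7 * X;           // observed cost of request #2
const twoIndependent = X + X;                  // two fully independent requests, serially
const parallelTotal = Math.max(X, secondWithKeepAlive); // overlapping requests finish together
console.log({ twoIndependent, parallelTotal }); // { twoIndependent: 200, parallelTotal: 100 }
```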


All this discussion is a seriously deep rabbit hole, though. I'm no server expert. I don't have to be. I can only explain that I have actually created (and seen) tests around this exact topic: comparing the load time of a single 100k file against loading two halves of that same file in parallel, to see whether the second test is any measurable amount faster. As I've said, I saw somewhere between 15-25% faster with the chunked-in-parallel test. How it managed to overcome the awful "OMG HTTP RESPONSE OVERHEAD IS TERRIBLE" effect and still benefit from two parallel loadings, I guess I'm not qualified to scientifically prove. But it definitely did, by observation.

savetheclocktower commented 12 years ago

Christ, you people type fast. I finish reading, reload the page, and there are like nine more comments.

I need help. I've tried to pinpoint exactly where in this thread we went from discussing what works best for a boilerplate HTML file to discussing whether script loaders are, in all cases, snake oil.

@getify, you should certainly defend LABjs and respond to specific criticisms made by others in the thread, but (excepting @jashkenas) I think those who criticize LABjs are doing so in order to demonstrate that it's not the best solution for a boilerplate. You argue that it's easier to convert legacy pages to LABjs than to script[defer], and that might be true, but how does that apply to a boilerplate HTML file (which is, by definition, starting from scratch)?

You say that it's designed for people who don't have fancy build processes, but you also seem to advocate concatenating, splitting into equal-sized chunks, and loading in parallel. Isn't that a task for a build script? Again, it seems like the wrong choice for a boilerplate designed to give the user intelligent defaults. If a user wants that purported 20-30% speed increase, she can choose to upgrade later over what the boilerplate offers, but that's not a trivial task.

Having said all that, if you guys want to carry on with the general topic ("Script Loaders: Valuable Tool or Snake Oil?"), I'll happily hang around and make some popcorn.

kornelski commented 12 years ago

@getify: I can agree that 2nd and 3rd connections might be opened faster than the first – the first one waits for DNS, and routing the very first packet to the server is possibly a bit slower than routing the rest along the same path. With HTTPS, the SSL session cache helps subsequent connections a lot.

However, I don't see relevance of Keep-Alive in this situation. Subsequent requests on the same connection are started faster with Keep-Alive, but those requests are serial within the connection.

jashkenas commented 12 years ago

I'm about done here -- I just reached my "mad as hell and not going to take it anymore" moment with respect to script loaders.

That said, I think that this thread, for a flame fest, has actually been quite productive. If LABjs wants to stake out a claim for the hapless and incompetent web sites, and leave people who actually want to have their sites load fast alone, it's a great step forward.

peterbraden commented 12 years ago

dude, chill

getify commented 12 years ago

@savetheclocktower--

Fair questions.

I didn't start my participation in this thread strongly advocating for LABjs (or any script loader) to be included in h5bp. I think it's useful (see below), but it wasn't a major concern of mine that I was losing sleep over. Clearly, this thread has morphed into an all out attack on everything that is "script loading". That is, obviously, something I care a bit more about.

You say that it's designed for people who don't have fancy build processes, but you also seem to advocate concatenating, splitting into equal-sized chunks, and loading in parallel. Isn't that a task for a build script?

I advocate first for moving all your dozens of script tags to a parallel script loader like LABjs. This takes nothing more than the ability to adjust your markup. That's a far easier/less intimidating step than telling a mom&pop site to use an automated node.js-based build system, for instance.

And for those who CAN do builds of their files, I advocate that LABjs still has benefit, because it can help you load those chunks in parallel. If you flat out disagree that chunks are in any way useful, then you won't see any reason to use LABjs over defer. But if you can see why chunking may be helpful, it should then follow that a script loader may also assist in that process.

Again, it seems like the wrong choice for a boilerplate designed to give the user intelligent defaults.

The only reason I think a script loader (specifically one which is designed, like LABjs, to have a one-to-one mapping between script tags and script() calls) has a benefit in a boilerplate is that in a boilerplate, you often see one instance of something (like a tag), and your tendency in building out your page is to just copy-n-paste duplicate that as many times as you need it. So, if you have a poorly performing pattern (script tag) in the boilerplate, people's tendency will be to duplicate the script tag a dozen times. I think, on average, if they instead duplicated the $LAB.script() call a bunch of times, there's a decent chance their performance won't be quite as bad as it would have been.

That's the only reason I started participating in this thread. It's the only reason I took issue with @paulirish's "blind faith" comment WAY above here in the thread.

paulirish commented 12 years ago

Sooooooooooo yeah.


I think it's clear this discussion has moved on way past whether a script loader is appropriate for the h5bp project. But that's good, as this topic is worth exploring.


regardless, I'm very interested in reproducible test cases alongside test results.

It also seems the spec for @defer was written to accommodate some of the erratic behavior that browsers deliver along with it. That behavior should be documented. I can help migrate it to the MDC when it's ready.

We need straight up documentation on these behaviors that captures all browsers, different connection types and network effects. I'm not sure if a test rig should use cuzillion or assetrace, but that can be determined.

I've set up a ticket to gather some interest in that https://github.com/paulirish/lazyweb-requests/issues/42

Join me over there if you're into the superfun tasks of webperf research and documenting evidence.

Let's consider this thread closed, gentlemen.

millermedeiros commented 12 years ago

Lazy loading isn't the core benefit of AMD modules, as @jrburke described in his comments. The main reason that I choose to use AMD modules as much as I can is that they improve code structure. They keep the source files small and concise - easier to develop and maintain - the same way that using css @import during dev and running an automated build to combine stylesheets is also recommended for large projects...
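For readers unfamiliar with the AMD shape being discussed, here is a toy illustration of the module pattern (not RequireJS itself; just a minimal synchronous mock showing how named modules and dependency lists structure code):

```javascript
// Toy AMD-flavored define(): each module declares its name, its
// dependencies, and a factory that receives the resolved dependencies.
const registry = {};
function define(name, deps, factory) {
  registry[name] = factory(...deps.map((d) => registry[d]));
}

define('math', [], () => ({ add: (a, b) => a + b }));
define('app', ['math'], (math) => ({ run: () => math.add(2, 3) }));

console.log(registry['app'].run()); // 5
```

Real AMD loaders resolve dependencies asynchronously (fetching each module file on demand), which is what makes the pattern double as a script-loading strategy; the structure benefit millermedeiros describes is visible even in this synchronous mock.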

I feel that this post I wrote last year fits the subject: The performance dogma - It's not all about performance and make sure you aren't wasting your time "optimizing" something that doesn't make any real difference...

And I'm with @SlexAxton, I want AMD, but simple script tags are probably enough for most people. Maybe a valid approach would be to add a new setting to pick an AMD project and run the RequireJS optimizer instead of the concat tasks (RequireJS optimizer Ant task); that would be pretty cool and probably not that hard to implement.

benatkin commented 12 years ago

Let's consider this thread closed, gentlemen.

@paulirish What about including AMD support? Where should we discuss that?

paulirish commented 12 years ago

@benatkin open a new ticket bro.

benatkin commented 12 years ago

@paulirish OK, thanks. @jrburke would you please open up a new ticket to continue the discussion you started? I think I'll add a comment, but I don't think I can lay out a case for AMD support as well as you can.

screenm0nkey commented 12 years ago

Entertaining and informative. Thanks guys.

getify commented 12 years ago

I think someone needs to start a new script loader project and call it "Issue28". :)

GarrettS commented 12 years ago

For widest compatibility, fast performance can be had by putting scripts at the bottom, minifying, and gzipping, but not deferring. At least not until browser compatibility has been consistent for a few years straight.

Bottlenecks can come from ads, too much javascript, bloated HTML, too much CSS, too many iframes, too many requests, server latency, or inefficient javascript. Applications that use a lot of third-party libs have problems caused not just by too much javascript; more than that, they tend to also have many other problems, mostly bloated HTML, invalid HTML, too much css, and inefficient javascript. Twitter comes right to mind, with two versions of jQuery and two onscroll handlers that cause a bouncing right column on scroll.

The kicker is that if you know what you're doing, you can avoid those problems. You don't need things like jQuery or Underscore, and so your scripts are much smaller. You write clean, simple, valid HTML and CSS. Consequently, your pages load faster, the app is more flexible in terms of change, and SEO improves. And so then using a script loader just adds unwarranted complexity and overhead.

BroDotJS commented 12 years ago

https://github.com/BroDotJS/AssetRage

BOOM! I close the clubs and I close the threads.

aaronpeters commented 12 years ago

What a thread ... wow.

Imo, the discussion started in the context of the h5bp, which is intended to be a starting point for web devs. As such, you can assume that the webdev using the h5bp will actually have clean HTML, clean CSS, a good .htaccess etc, and maybe even not suffer from too many images, inefficient JS, lots of crappy third-party JS etc. You know, because the web dev choosing to use the high-performance h5bp is, by that choice, concerned about performance, and will pay attention to the non-h5bp stuff that goes onto the page(s).

From the thread, and in this context, I think there is unfortunately not enough evidence to draw a final conclusion. I am with Paul on getting the research going and documenting what needs to be documented. Count me in Paul.

aaronpeters commented 12 years ago

Sidenote. I am not very familiar with AMD, and from a first look it seems intimidating to me, or at least not something I can pick up very easily. I think most 'ordinary' web devs will agree. The stuff you see in the h5bp needs to have a low entry barrier, or it will not be used, and uptake of h5bp may be slower than it could be. I doubt something like AMD belongs in the h5bp. Keep it simple.

aaronpeters commented 12 years ago

And another comment .... 'Putting scripts at the bottom' and 'Concatenate JS files into a single file' has been high up on the Web Perf Best Practices list for many years. So why do >90% of the average sites out there, built by in-house developers and by the top brand agencies still have multiple script tags in the HEAD? Really, why is that?

And the other 9% have a single, concatenated JS file ... in the HEAD. Rarely do I see a 'normal' site which is not built by some top web perf dev with one script at the bottom.

Devs keep building sites like they have been for years. Site owners care most about design and features, so that's what the devs spend their time on.

Changing a way of working, a build system, the code ... it has to be easy, very easy, or else it won't happen.

I have worked on many sites where combining the JS in the HEAD into a single file and loading it at the bottom of BODY broke the pages on the site. And then what? In most cases, it's not simply an hour's work to fix that. Serious refactoring needs to take place ... and this does not happen, because of the lack of knowledge and, especially, the lack of time.

(oh right, the thread is closed...)

GarrettS commented 12 years ago

We're talking about a library built on top of jQuery and Modernizr. Says it all, really. Who uses that? Oh, shit, I forgot, Twitter.com, which uses two jQuerys and also has, in its source code, the following:

Error Line 352, Column 6: End tag div seen, but there were open elements.
Error Line 350, Column 6: Unclosed element ul.
Error Line 330, Column 6: Unclosed element ul.

And the problem with expecting the browser to error correct that is that HTML4 didn't define error correction mechanisms and so you'll end up with a who-knows-what who-knows-where. Sure, HTML5 defines error handling, but it ain't retroactive -- there's still plenty of "old" browsers out there.

And speaking of shit, anyone here had a look at jQuery ES5 shims?

BTW, do you have anything to add to that statement of yours "that the webdev using the h5bp will actually have clean HTML," aaronpeters?

aaronpeters commented 12 years ago

@GarrettS ok, ok, I should have written "will probably have clean HTML"

GarrettS commented 12 years ago

:-D we can always hope!

jashkenas commented 12 years ago

Beating a dead horse, I know ... but it turns out that at the same time we were having this scintillating discussion, the current version of LABjs actually had a bug that caused JavaScript to execute in the wrong order in some browsers: https://github.com/getify/LABjs/issues/36

Oh, the irony.

brianleroux commented 12 years ago

must. resist. posting. totally. [in]appropriate. image. for. previous. statement.... aggggh! AGONY!

danbeam commented 12 years ago

My favorite part was when the dude that made dhtmlkitchen.com (currently totally messed up) started talking about markup errors.

GarrettS commented 12 years ago

That site has been transferred to Paulo Fragomeni. Yes I made it and proud of what I wrote there, as here. Go take a screenshot of your weak avatar, jackass.

GarrettS commented 12 years ago

...and after you're done with that, try to pull your head out of your ass and understand the difference between my old personal website (which is no longer maintained by me) and one that is developed by a team and financed by a profitable, multi-million dollar company (though Twitter may be worth billions AFAIK).

mason-stewart commented 12 years ago

Glad we're keeping this classy, and on topic, guys.

GarrettS commented 12 years ago

jashkenas got the relevant bits of info out early on in this discussion.

But then there was the backlash. No! It must not be! Souders said to do it! And there was the bad advice to use defer, not caring how it fails when it fails.

And then ironically, out of nowhere, there came a claim that h5bp users would be doing things properly. And this is very ironic because this comment came after comments from its supporters who evidently produce invalid markup and use a load of third party abstraction layers (and awful ones). And after the comment about using defer.

And so what does any of this have do with dhtmlkitchen.com being down? Nothing at all, obviously. That was just a weak jab from an h5bp forker who can't stand to hear criticism.

BroDotJS commented 12 years ago

Bros. Dude. Bros.

This thread is closed. Remember? You don't have to go home, but you can't flame here.

geddski commented 12 years ago

Hey y'all remember that one time when we made an epic thread where there were multiple debates, personal flame wars, people getting angry all over the place, an obscene image or two, and an all-around good time? Can't believe it was free. We should do that again sometime.