h5bp / html5-boilerplate

A professional front-end template for building fast, robust, and adaptable web apps or sites.
https://html5boilerplate.com/
MIT License

script loading solution #28

Closed paulirish closed 12 years ago

paulirish commented 13 years ago



This issue thread is now closed.

It was fun, but the conversations have moved elsewhere for now. Thanks

In appreciation of the funtimes we had, @rmurphey made us a happy word cloud of the thread.

Enjoy.





via labjs or require.

My "boilerplate" load.js file has LABjs inlined in it, and then uses it to load jQuery, GA, and one site JS file. If it helps, I have an integrated RequireJS+jQuery in one file: http://bit.ly/dAiqEG ;)

Also, how does this play into the expectation of a build script that concatenates and minifies all scripts? Should script loading be an option?

marioestrada commented 12 years ago

Well labjs always loads fastest in my browser (Safari 5.1) even with shift-refresh or when elements are cached.

juandopazo commented 12 years ago

Of course using a script loader without concatenating will be slower than a concatenated script tag. That's why people (YUI, requireJS) created script loaders that load concatenated files and services that concatenate them on request (https://github.com/rgrove/combohandler).

C'mon this discussion doesn't make any sense. Script loaders are for loading scripts on demand, particularly after user interaction, for instance loading the logic behind dialog and form validation when clicking on a "log in" button.

jdalton commented 12 years ago

I have a sneaking suspicion that @jashkenas and @madrobby are oversimplifying things. Steve suggests parallel downloading has several benefits for a range of blocking issues and browsers (yes, that means non-WebKit). He also mentions a strategy of loading the bare minimum JS required for dom-load tasks and then loading the rest later as needed. Because situations and dev needs vary, I dunno if a script loader belongs in a boilerplate (enabled by default), but I wouldn't throw the baby out with the bath water just yet.

SlexAxton commented 12 years ago

If it wasn't clear in my original post: I tend to agree (with jdalton) that there are quite a few benefits to script loaders in highly tested and specific environments that require special attention. I don't think it's an appropriate default.

jaubourg commented 12 years ago

I agree with @jdalton: there's no one-size-fits-all loader. I personally use different script loaders depending on my actual needs and projects. Sometimes something simple like yepnope or LABjs is fine; other times, RequireJS is a godsend. I'm not sure a boilerplate has to force one in. It's tricky, because the idea would be for the boilerplate to make it easy to switch to a script loader... so I wouldn't throw the baby out with the bath water just yet either.

Also, @getify, pretending all script loaders actually use the same tech underneath is a very uninformed statement.

ded commented 12 years ago

For what it's worth... this

var script = document.createElement('script')
script.src = 'foo.js'
document.getElementsByTagName('head')[0].appendChild(script)

is better than this

<script src="foo.js"></script>

for the one main reason that it is non-blocking. With the latter version, subsequent images and CSS files have to wait until that file is downloaded; the former is async. This everyone should know, regardless of whether you decide to use a script loader or not.

re: "pretending all script loaders actually use the same tech underneath is a very uninformed statement."

If they're not doing it the above way, they're doing it wrong
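For illustration, the pattern ded shows can be wrapped in a tiny helper. This is a sketch, not code from any loader discussed in this thread; `loadScript` and the injectable `doc` parameter are made-up names (passing the document in just makes the function easy to exercise outside a browser):

```javascript
// Minimal dynamic script injection, per the pattern above.
// `doc` is the document object (injectable so the function can be
// exercised outside a browser).
function loadScript(src, doc, onLoad) {
  var script = doc.createElement('script');
  script.src = src;
  script.async = true; // dynamically injected scripts are async by default; be explicit
  if (onLoad) script.onload = onLoad;
  doc.getElementsByTagName('head')[0].appendChild(script);
  return script;
}
```

In a browser you would call `loadScript('foo.js', document)`; the file downloads in parallel and never blocks images or stylesheets.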

SlexAxton commented 12 years ago

Well to be perfectly fair, using appendChild has fallen out of favor... :D

However, I added that test case to AssetRace

https://github.com/SlexAxton/AssetRace/blob/master/asyncconcat.html

It makes onload fire faster, so there could be some perceived benefit. But the finish time is about the same...

jashkenas commented 12 years ago

@ded: We're not talking about incompetent large blocking <script>'s in <head> here ... we're talking about script tags with a defer or async attribute, loaded at the end of the <body>, where there's nothing left to block.

getify commented 12 years ago

@jaubourg--

Also, @getify, pretending all script loaders actually use the same tech underneath is a very uninformed statement.

This is a complete misrepresentation of what I was getting at. In fact, most script loaders are NOT doing the things that I think they should be doing (and that LABjs now is) in terms of using the best tech. My point was, even if all of them did use the best tech, there's still a finite limit to what we can do tech-wise. I'm pretty sure there are no loaders out there using some magic silver bullet that LABjs is unaware of or not using. You can't get light to go faster than the speed of light, no matter what you do or how you fiddle with the numbers.

Arguing about the tech underneath (by way of saying "hey, look at my cool and better API") is pointless. The best tech in script loading is a known finite quantity (even if a lot of loaders are irresponsible and not using it). We can push for better tech (which I am), but debating who has the better API on their loader does nothing for that goal.


This thread is really most useful for determining whether script tags are good enough on their own (with or without defer) or whether script loaders help get better performance. Secondarily, we need to figure out whether concat really is the be-all-end-all of script loading.

It's also a moot point (for this thread) that script loaders have all these other use-cases they can do, which markup script tags cannot do (like on-demand/lazy-loading). Again, that's basically a given at this point, so trying to re-establish that fact is pointless.

gf3 commented 12 years ago

NO U

brianleroux commented 12 years ago

LOAD RAGE!

loaders make me rage

View original posting here.

Also, any ppl/persons offended by a cartoon penis: welcome to the internet! I highly recommend you start your journey here.

getify commented 12 years ago

OK, I've created 3 tests to illustrate some points. First up, manual script tags (as the base-line):

http://labjs.com/dev/test_suite/test-script-tags.php

Notice that the DOMContentLoaded (aka "DOM-ready") comes way late, after the scripts finish. This is bad. While the actual load time of the page may be the same as in the later tests, the perceived load time of the page will always be much slower if DOM-ready is being blocked (so many sites wait until DOM-ready to attach click behaviors, apply JS-driven enhancements to the content, etc).

Now, what happens is we use defer on our script tags:

http://labjs.com/dev/test_suite/test-script-defer-tags.php

Well, that's good, we've fixed the DOMContentLoaded delay problem, but now we have another problem. The inline script block doesn't work with defer. It executes immediately. Oops. BTW, this is not a bug, the spec specifically dictates this.

http://labjs.com/dev/test_suite/test-LABjs.php

The LABjs test gets basically the same (or better) performance numbers compared to the defer test, but it doesn't fail to get the inline code to run after the scripts finish.

Try those tests several times in modern browsers (Chrome 13, FF5, etc). For me, LABjs always performed about the same as or better than the defer test. In all my attempts, I've never seen LABjs perform worse than the defer test. Try those tests in older browsers (like FF3.5 or IE7), and you'll see that the script loader starts to out-perform the other tests by noticeable amounts.

Even though the LABjs test has similar numbers to the defer test in the newest browsers, it's a deal breaker if defer can't be used to defer ALL code (only works for code that is loaded via an external file). LOTS of sites load scripts and then have inline code to activate/init the code they just loaded. defer offers us no solution for this.
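Concretely, the pattern getify is describing looks like this (file and function names are made up for illustration):

```html
<!-- The external file defers until after parsing... -->
<script defer src="widget.js"></script>
<!-- ...but the inline block ignores `defer` and executes immediately, -->
<!-- before widget.js has run: -->
<script>
  Widget.init(); // fails: widget.js hasn't executed yet
</script>
```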

Therefore, defer is unsuitable as a "general script loading" technology. The next best option is a script loader.

jaubourg commented 12 years ago

@getify

It's not all about micro-optimizations... and, YES, API is important and is generally a good indication of the kind of limitations the underlying tech has, mainly because of said micro (or even macro) optimizations. It's not always just about loading scripts. Complex dependency management, proper sandboxing and actual, real, modularity is not something to wash out just because you don't have any interest in it. Guess what? These are actually the things that people need and page load performance can be achieved at a reasonably good level with static script tags.

It finally all boils down to script tag injection not being the proper tool for the task: it actually never was. It's just a very ugly hack. In that sense, you are actually not pushing for a better tech: you're pushing for more of the same with new kinds of caveats none of us can infer yet. Please, think for a mere second and see if it finally clicks.

What's really infuriating is that you refuse to lay out a single argument in favour of script tag injection as opposed to a proper, native, javascript API for script loading. You just ignore the whole thing. I'll save you the trouble though: there is no argument there. But, heh, we can all have some mental masturbation about the ins and outs of defer and async and feel like we're the gods of javascript, right? Or debate about 50ms optimizations as if it was actually helping anyone in this industry.

If you finally decide I'm worthy enough of an intelligent reply (as opposed to yet another LabJS advert), do so on my blog and let's keep this thread alone. Thank you.

getify commented 12 years ago

@jaubourg -- I read your post in depth last night. I was planning to write a blog post in response, in large part commending and complimenting you for the good thoughts you presented there. Unfortunately, what you're suggesting has already been hashed out AT LENGTH by members of the discussion thread on W3C and WHATWG. You're pretty late to that party.

There were several people who supported a whole new loader API, and there were several important counter-arguments to why that likely wasn't the best way to go. Again, I was planning to write out a response to you in a careful and reasoned blog post, to help explain all that.

Too bad you have to go and be such a dick here. Now it makes me feel like that reasoned blog post will just be a waste of time. You obviously think I'm an idiot and have never considered the things you're trying to bring up. Because I haven't spent the better part of the last year absolutely obsessing about script loader technology and how to get the spec and browsers to make it better. Yeah, I'm an idiot. I clearly haven't ever thought about anything other than the script tag before.


You apparently didn't listen to the 15 times I've said that this thread had the better goal of focusing on the specific questions Paul Irish and Alex Sexton brought up: is defer good enough? is script-concat better than parallel loading?

Those are the more important questions.

Not what underlying "loader" technology is used. There's a different and better forum for discussing what the underlying loader tech is. I get it, you don't like the script tag. Fine. Go spend dozens of hours on the W3C/WHATWG list trying to get Ian and others to listen to you. They'll probably all just yawn and say "we've already hashed that out, go away."

jashkenas commented 12 years ago

@getify: Creating ridiculous strawman tests isn't going to win you points, buddy. We all know that sequential script tags block the page. We also know that having inline script blocks run before "defer"ed scripts isn't a problem in any way for real sites.

If you test in order to confirm a preconception ... your tests are always going to confirm that preconception. The debate has never been about 20 script tags vs. 20 LABjs script loads. It's about intelligently trimming, concatenating, minifying, gzipping, and loading your JavaScript in as few HTTP requests as possible, and then caching it.

On the one hand, we have a reliable, browser-supported, time-tested approach that performs demonstrably better on real-world pages; on the other hand, we have a hacked-together "technology" that in the past has actually broken every site that used it after a browser update, that performs demonstrably worse on average, and with a far greater variance of slowness.

It's a no-brainer choice.

getify commented 12 years ago

@jashkenas--

We also know that having inline script blocks run before "defer"ed scripts isn't a problem in any way for real sites.

Uhh... I guess you haven't done view-source on about 98% of all sites on the internet, which do in fact use inline script blocks in the markup to execute/initialize the code they loaded in a prior (blocking) script tag call.

If @paulirish suggests that defer is good enough and that script loaders aren't necessary, then I feel it's important to point out why, in fact, defer IS NOT good enough.

YOU may only care about the few niche sites that you control, which you have complete ability to be highly optimized about build processes, etc. I on the other hand care about helping improve performance on the long-tail sites of the internet, the ones with half a dozen script tags (some of them inline script blocks!), where using half a dozen $LAB.script() calls would in fact likely improve the performance. That's what LABjs was always about. Just because it's not what you care about doesn't mean it isn't relevant.

The debate has never been about 20 script tags vs. 20 LABjs script loads.

The debate in this thread is about whether 3-4 script tags (with or without defer) performs worse, the same, or better than 3-4 scripts dynamically loaded using a parallel script loader. My "ridiculous strawman tests" are in fact intended to test exactly that.

geddski commented 12 years ago

In my experience script loaders shave many milliseconds off the page load time. But I think we've all missed the point here. JavaScript has some bigger problems:

I don't use RequireJS because it loads faster, although that's a nice side effect. I use it so I can organize my JS app into small modules much like I would in NodeJS. Each module clearly lists its dependencies, and uses the sandbox pattern to keep the global namespace clean. Modules (and their dependencies) can be loaded up front, or loaded on demand (on user click for example), or lazy loaded. You can really fine-tune your performance with these techniques. And RequireJS also comes with a build tool that combines and minifies all the dependencies into a single (or a handful of) gzip-ready file(s) for deployment. Solving these three issues is a huge win for me.

I can see why people would debate about using a script loader that doesn't solve these problems. If performance is the only point, and its debatable, then sure. But use an AMD module loader like RequireJS and the debate becomes irrelevant. Modules are the future of JavaScript. Dave Herman from Mozilla is working with board members from Apple and Google to add native modules to the language itself. But in the meantime we can get all the benefits by using an AMD module loader. It isn't just about performance.
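The dependency wiring geddski describes can be sketched with a toy AMD-style registry. To be clear, this is an illustration of the idea, not RequireJS's actual implementation; `define`, the module names, and `requireModule` (named that way to avoid colliding with Node's `require` when run outside a browser) are all local to this sketch:

```javascript
// Toy AMD-style registry: each module names its dependencies explicitly
// and receives them as factory arguments, keeping the global namespace clean.
var registry = {};

function define(name, deps, factory) {
  registry[name] = { deps: deps, factory: factory, exports: null };
}

function requireModule(name) {
  var mod = registry[name];
  if (mod.exports === null) {
    // Resolve dependencies recursively, then run the factory once.
    mod.exports = mod.factory.apply(null, mod.deps.map(function (d) {
      return requireModule(d);
    }));
  }
  return mod.exports;
}

// Example modules:
define('math', [], function () {
  return { square: function (x) { return x * x; } };
});
define('app', ['math'], function (math) {
  return { answer: math.square(7) };
});
```

A real AMD loader additionally fetches each missing dependency from the network (asynchronously) before running factories, which is where the script-loading machinery comes in.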

jaubourg commented 12 years ago

@getify

You cannot expect people to treat you any differently than you do others. Patronizing is not a clever way to get a decent reaction (and god are you patronizing) and, like I said in my blog post, I don't think you're an idiot, I just think you're obsessed (which you say yourself btw) and that it seriously impairs your judgment. Like I said in my blog post it's not up to W3C or WHATWG to handle this issue but EcmaScript itself: this is not a browser issue, it's a language issue. Now, don't make this reply if you don't want to, it's your prerogative.

Maybe I came across as harsh, but I just defend what I believe in.

I'll unsubscribe from this thread and won't comment on it anymore. Sorry to have derailed stuff, @paulirish and @SlexAxton.

jashkenas commented 12 years ago

@getify

YOU may only care about the few niche sites that you control, which you have complete ability to be highly optimized about build processes, etc. I on the other hand care about helping improve performance on the long-tail sites of the internet, the ones with half a dozen script tags (some of them inline script blocks!), where using half a dozen $LAB.script() calls would in fact likely improve the performance. That's what LABjs was always about. Just because it's not what you care about doesn't mean it isn't relevant.

If LABjs is about helping mediocre sites load slightly less poorly ... that's a noble goal, I guess. But if you're serious about taking a slow-loading website, and have it load as fast as possible -- potentially literally seconds faster than LABjs would allow, then it behooves you to keep an open mind and acknowledge that the easier and less fragile technique is also more performant.

The debate in this thread is about whether 3-4 script tags (with or without defer) performs worse, the same, or better than 3-4 scripts dynamically loaded using a parallel script loader. My "ridiculous strawman tests" are in fact intended to test exactly that.

The debate in this thread is about how to build a web site to load and execute its JavaScript as fast as possible. Selling snake oil to clients, and promoting it to web developers, is a disservice to both.

Latency exists on the internet. Concatenate, minify, and gzip your JS, and load it at the bottom of the page in as few HTTP requests as possible. Nuff said.
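The deploy step jashkenas describes is essentially this (a sketch with stand-in file names and contents; in practice a minifier such as UglifyJS or Closure Compiler would run between the concatenate and gzip steps):

```shell
# Stand-in source files so the sketch is self-contained:
echo 'var a = 1;'     > jquery.js
echo 'var b = 2;'     > plugins.js
echo 'var c = a + b;' > app.js

# Concatenate in dependency order into a single bundle...
cat jquery.js plugins.js app.js > bundle.js

# ...and gzip it for transfer (one small HTTP response instead of three).
gzip -c bundle.js > bundle.js.gz
```

The page then loads the bundle from a single script tag at the bottom of the body, with far-future cache headers so repeat views cost zero requests.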

getify commented 12 years ago

@jashkenas--

If LABjs is about helping mediocre sites load slightly less poorly ... that's a noble goal, I guess

There are hundreds of sites that I personally know about from the past 2 years which did nothing but replace their script tags with $LAB.script() calls, and across the board they all saw better performance (some drastically, some only modestly).

There have been articles written (completely independent of and not connected to me) focused on helping sites in various industries (like ecommerce, real estate, etc) get better performance (because better performance means more conversions), where those articles recommended to sites that they replace script tags with $LAB calls, and many people in those comment threads have responded in the affirmative that it helped them out.

Had those articles said "OK, what you need to do to get more performance is hire a server admin who understands gzip and can install ruby or node.js so you can do some automated build processes......." those people reading those articles would have glazed over and left without giving it another thought. But I like to believe that "Hey, replace <script> with script()" was a pretty easy message for them to understand and connect with.

What I wanted for LABjs is a simple solution that someone can easily drop in to replace their script tags without too much thinking. I recognize that if you can personally consult with a site and figure out best optimizations, you can squeeze a lot more performance out of a lot of sites. But I also recognize that this is far beyond my ability as one person to do for the long tail of the internet, and similarly telling all those mom&pop sites "hey, go get an automated build system, and make sure it uses gzip" is like speaking an alien language to them. OTOH, it's been quite successful to say "Hey, take those 3 script tags, and make them 3 script() calls. See how easy that was?"

Bottom line, my approach with LABjs was to hit the low-hanging fruit.

None of that is to suggest that more sophisticated approaches to optimization aren't possible -- they clearly are, and when I get the chance to consult, I definitely explore them. It's just to say that for a lot of the web, it's more involved/complicated than they're willing or able to get. And I'm just trying to help those sites improve in a way that is easier for them to grasp.

getify commented 12 years ago

@jashkenas--

potentially literally seconds faster than LABjs would allow, then it behooves you to keep an open mind and acknowledge that the easier and less fragile technique is also more performant.

There has never been any established evidence to suggest that LABjs is significantly slowing down any sites. There's LOTS of established evidence that it's helping a lot of sites. So I don't buy this -- what you're speaking of is a false premise assuming facts not in evidence.

jdalton commented 12 years ago

@paulirish found a post that points out problems with the defer attribute: http://hacks.mozilla.org/2009/06/defer/

jbueza commented 12 years ago

Coming from a mobile performance perspective -- like @jashkenas said, it's always best to concatenate, gzip, and send it over the line as one package than to have multiple http requests due to latency incurred by 3g network connections.

There's a lot of research being done in utilizing inlining techniques where you base64 encode images into strings then store them as key:value pairs in localStorage just to reduce http requests and leverage 'caching': http://channel9.msdn.com/Events/MIX/MIX11/RES04 is a great presentation by James Mickens from Microsoft Research.
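The inlining step jbueza mentions boils down to base64-encoding the asset's bytes into a `data:` URI string, which can then be stored under a key (e.g. in localStorage) and assigned straight to `img.src`. A sketch in Node, with made-up bytes standing in for real image data (`toDataURI` is not a real API, just an illustration):

```javascript
// Encode raw bytes as a data: URI so the "image" needs no HTTP request.
function toDataURI(bytes, mimeType) {
  return 'data:' + mimeType + ';base64,' + Buffer.from(bytes).toString('base64');
}

// Stand-in for real PNG data (these are just the PNG magic bytes):
var fakePng = [0x89, 0x50, 0x4e, 0x47];
var uri = toDataURI(fakePng, 'image/png');
// In a browser you might then do:
//   localStorage.setItem('img:logo', uri);
//   img.src = localStorage.getItem('img:logo');
```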

Here's a pretty good deck on mobile performance with HTTP requests and their effects on user experience: http://davidbcalhoun.com/present/mobile-performance/

jrburke commented 12 years ago

I work on RequireJS, and I want to make a clarification of what RequireJS is aiming to do:

1. Show the way for modular code in JS that works well everywhere JS runs.
2. Load scripts.

The "load scripts" part is a necessary part of achieving the first goal. In dev, it is not a good idea to just concatenate all your scripts, because it makes debugging harder: the line numbers do not match up. Script loaders also make it easy to use a JS API to load code on demand. For webmail-size apps, this is a necessary part of the performance story. However, concatenating the scripts into one or a small number of requests is usually the best deployment option.

But the goal of requirejs is to be the shim/polyfill/whatever to show how to create and reference modular code units that can be shared with others in a way that discourages globals and encourages explicit dependencies.

It uses the AMD API which has been worked out with other people making modular script loaders (includes compliance tests), with the goal of helping to inform any discussions for a module format in JS. This approach, by making real world implementations and reaching agreement with others on the API is the way progress is made.

In particular, given the network nature of JS and its relation to web docs/applications, the loader plugin API is something that should be supportable in some fashion with the ES Harmony modules, and I am doing work on prototyping the ES harmony modules via a requirejs loader plugin, so I can better understand the harmony proposal and give feedback.

For the performance folks:

In the context of this ticket: choosing an AMD-compliant loader (it does not have to be RequireJS) fits in with the goals of the HTML boilerplate: point the way to best practices, both in code and in performance. However, I appreciate that trying to work out an HTML boilerplate is a very difficult thing to do; there are competing interests, some stylistic, so I understand not wanting to make a recommendation in this area at this time.

I just want to make it clear that RequireJS, and loaders that implement the AMD API, provide a larger benefit than just loading some scripts that dump globals and force the developer to work out the complete, sometimes implicit, dependency tree. Those goals are achieved with solutions that have solid performance profiles.

getify commented 12 years ago

To refocus from earlier... comparing the defer test to the LABjs test... (and ignoring the fact that defer doesn't work on inline script blocks), is anyone seeing that the LABjs test is performing worse than the defer test? I've tried it on a bunch of browsers, and even on my mobile device, and still seeing roughly equal numbers.

http://labjs.com/dev/test_suite/test-script-defer-tags.php

http://labjs.com/dev/test_suite/test-LABjs.php

espadrine commented 12 years ago

@getify

I have no idea why or how you can optimize this, but on my 3+-year-old MacBook I see a consistent 3000 ms difference between the two, which favors defer.

I have only tested with Firefox however.

getify commented 12 years ago

@espadrine-- quite strange. would love to get to the bottom of that. which version of Firefox are you testing with? can you send me a screenshot of the results?

devongovett commented 12 years ago

Just concatenate and minify all your JS and CSS and inline it right in your HTML page and be done with it. Single HTTP request FTW! :P

Seriously though, there are so many bigger problems that we should be focused on in this community than just how your app is going to load. Chances are, the simplest method (script tags at the bottom) is probably fast enough. Just write great apps and deal with loading performance at the end. Doing anything else is prematurely optimizing.

artzstudio commented 12 years ago

Is there a general consensus by the folks on this thread that AMD should be the gold standard for JS code organization? Haven't really seen other options but I agree the Boilerplate would be a great start to setting folks up right in organizing code.

espadrine commented 12 years ago

Firefox UX 8.0a1 (2011-08-07) update channel.

[screenshot: timing results for the defer and LABjs tests]

Again, no idea why, and this is probably very specific. LABjs is probably very good with legacy browsers.

jashkenas commented 12 years ago

Please don't use @getify's test page for anything more than a laugh. To quote:

<script defer src="http://labjs.xhr.me/dev/test_suite/testscript1.php?_=4911710&delay=5"></script>
<script defer src="http://labjs.xhr.me/dev/test_suite/testscript2.php?_=6146431&delay=3"></script>
<script defer src="http://labjs.xhr.me/dev/test_suite/testscript3.php?_=9499116&delay=1"></script>

@getify, if you want to make a real test, feel free to fork @SlexAxton's AssetRace repo and add a LABjs version ... or make a test page that uses real JavaScript files, with real latencies.

Also, make sure you actually concatenate the JS for a single script tag -- defer or not. The point is that the same content served over 1 HTTP request beats the same content served across 10 HTTP requests.

There has never been any established evidence to suggest that LABjs is significantly slowing down any sites. There's LOTS of established evidence that it's helping a lot of sites. So I don't buy this -- what you're speaking of is a false premise assuming facts not in evidence.

What was demonstrated above is that LABjs is indeed significantly slowing down sites, by having their JS compete across many HTTP requests with their images, CSS, and other assets. @getify: I'd love to see a link to a site that you think benefited greatly from your conversion of it over to LABjs. Perhaps we can download a copy of that and use it as a test case you'll respect.

SlexAxton commented 12 years ago

For the record, I think it would be wise to get some more images in the AssetRace repo test page. But it's certainly a good baseline right now.

geddski commented 12 years ago

@artzstudio organizing your JS with an AMD loader is indeed the gold standard, at least until Harmony's modules are finished and widely supported. Then there will be a clear migration path from AMD modules to Native modules.

SlexAxton commented 12 years ago

AMD modules being the gold-standard is certainly an opinion (one that I may share). However, there are plenty of smart people (Yehuda Katz and Dan Webb come to mind) who don't like it and offer other solutions.

@danwrong 's loadrunner can kind of do both, if that's your bag too: https://github.com/danwrong/loadrunner

Some pretty good stuff in there. Potentially a little more practical for non-JS folk as well. I like AMD modules for my stuff, but not everyone wants to spend time converting each version of the libraries they use to be modules.

I know @strobecorp is working on their own solution that doesn't require a lot of the extra code that AMD modules require.

While I'd love AMD to be the default, it's probably not wise from a multi-library/newb standpoint, as much as I wish it was.

getify commented 12 years ago

@jashkenas--

Please don't use @getify's test page for anything more than a laugh.

If you can't be civil, I have no desire to discuss anything further with you. I am acting in good faith. I would appreciate a little common decency.

@getify, if you want to make a real test

I'd sure like you to explain why what I'm doing is so crazy, laughable, and invalid. I took the approach directly from Steve Souders, who (in his great experience and wisdom) suggested in all his tests that you use server timing to control the scripts, reducing the amount of variance in your tests. That's exactly what I'm doing.

A more controlled test is a valid baseline test. That's established scientific practice. That doesn't mean that real-world tests aren't also useful, but it also doesn't mean that you get to snipe at me and say "laugh at him, what an idiot, because he does his tests differently than I think they should be done."

feel free to fork @SlexAxton's AssetRace repo and add a LABjs version

I'll happily do so. But not because I agree that my other tests are invalid. If you have some reasoned, level-headed arguments as to why my test setup is not valid, please do share. But quit being such an ass about it.

getify commented 12 years ago

@jashkenas--

The point is that the same content served over 1 HTTP request beats the same content served across 10 HTTP requests.

I know you (and others) keep ranting on here about how this discussion should be all about concat vs. not-concat. If you read much earlier in the thread, I conceded that there were two questions that needed to be addressed. The two issues are, as far as I'm concerned, orthogonal. The first is whether script tags in markup can be as good (or better) than dynamic script elements used in a parallel script loader. THAT QUESTION is what I'm still trying to address with my tests.

The second question, which we haven't gotten to yet, is about whether script-concat is always better. I know you're already convinced of it, but I have counter evidence to suggest it's not so simple. That question needs to also be thoroughly tested. But it isn't what I'm trying to work out right now in this thread.

By continuing to insist that your way is the better way, you just make the whole debate less pleasant to be part of. All I'm trying to do is methodically establish some evidence for each of those two main questions, so we can stop guessing and be more informed. Why isn't that something you can assist with, instead of trying to be a jerk to me because you disagree with me?

getify commented 12 years ago

With respect to the defer test vs. the LABjs test, I just did a quick screencast capture of testing the two head-to-head in IE9, FF8(nightly), and Chrome15(canary).

http://www.screenr.com/icxs

To answer @paulirish's earlier question (https://github.com/paulirish/html5-boilerplate/issues/28#issuecomment-1765361) about defer quirks, look at how "DOMContentLoaded" behaves across IE, Chrome, and Firefox in the defer test.

In IE9 and Chrome15, the DOMContentLoaded event is held up (blocked) and not fired until after the scripts run. In FF, however, the DOMContentLoaded event is not held up, it fires right away, and the scripts start executing after it. That's a giant inconsistency across modern browsers, and one of the reasons why I don't think defer is sufficient.

As far as I can tell from reading the spec, I'm not sure which behavior is correct. But I do know that it's clearly quirky and inconsistent between browsers.
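The inconsistency is easy to reproduce with a page like this (a minimal sketch; `slow.js` is a hypothetical slow-loading script that logs when it executes):

```html
<!-- Load this in each browser and compare the console order of the two
     messages. In IE9 and Chrome15, "slow.js" logs first (DOMContentLoaded
     is held up); in FF8, "DOMContentLoaded fired" logs first. -->
<!DOCTYPE html>
<html>
<head>
  <script>
    document.addEventListener('DOMContentLoaded', function () {
      console.log('DOMContentLoaded fired');
    });
  </script>
  <script defer src="slow.js"></script>
</head>
<body></body>
</html>
```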

jashkenas commented 12 years ago

@getify I'm not trying to be a jerk. I sincerely apologize that I've hurt your feelings.

Naturally, what you see as ranting, I see as the point of the discussion ... and what I see as snake oil, you see as a helpful step forward.

The two issues are indeed orthogonal (language that I used in my original post).

The first is if script tags in markup can be as good (or better) than dynamic script elements used in a parallel script loader.

We are in complete agreement on this issue -- it doesn't matter. Of course parallel loading will be faster than sequential loading for more than one script. And of course, doing it in a non-blocking fashion, either at the end of the <body> tag, or with defer, or with a script loader, will be better than blocking in the <head>.

But this misses the point. Putting in sequential script tags is a strawman to compare against, because no one who cares about the performance of their JavaScript would use that approach. Guess what's also faster than sequential script tags? Anything.

The second question, which we haven't gotten to yet, is about whether script-concat is always better.

We have "gotten to" this question. In fact, it's @paulirish's question at the top of this page. If you're not trying to work it out in this thread, you need to be. It strikes at the heart of all your claims about what LABjs does, not just in this thread, but over the years.

That question needs to also be thoroughly tested.

To repeat myself, here's a (fair) test case. The same 5 real-world scripts, loading on to a medium-sized page with other assets present, one using LABjs best practices to ensure load order, and the other using a single concatenated script:

http://jashkenas.s3.amazonaws.com/misc/snake-oil/labjs.html

http://jashkenas.s3.amazonaws.com/misc/snake-oil/vanilla.html

If you have another test case you'd like to examine, or a real-world LABjs-using website you'd like to experiment with, please share it.

artzstudio commented 12 years ago

@SlexAxton Thanks. I'd be curious to hear Yehuda's take on it and other strong opinions (other than it's too hard to refactor). I found this but not the talk.

jrburke commented 12 years ago

To clarify @geddesign's comment: as of today it looks like AMD modules can be converted fairly easily to harmony modules, but I consider the harmony modules proposal to still be in flux; it could change later. It has not been through rigorous implementation testing yet, though it is starting to get some legs. On the plus side, AMD loaders + loader plugins can give solid feedback into trying out some of the harmony ideas.

To @SlexAxton's comment:

For loadrunner: it is not clear to me the syntax is any better, just different. It supports AMD, so it still works out.

For strobe: I have yet to see code from them on it. They seem fairly inward-focused, although I appreciate the work Yehuda has done to open up that development. Alex, if you have pointers to what they are thinking, I would appreciate getting them.

If the approach is going to allow nested dependencies (which is needed for broad code sharing), you need a syntax that wraps the module code in a function, so it can execute after its dependencies load, and that declares those dependencies by name.

This is what AMD provides, and the syntax is as slim as it can get. Anything else is just fighting over names and possibly some types of punctuation. At some point something just needs to be chosen, and so far I have not heard from Dan Webb or Yehuda about structural weaknesses that make AMD untenable. Some AMD loaders, like requirejs, can load just regular scripts; they do not have to be modules.
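To make the shape of that syntax concrete, here is a toy sketch of the AMD pattern. The `define`/`requireMod` shim below is a stand-in for a real loader like RequireJS, just enough to show named modules with nested dependencies (`requireMod` is so named to avoid clashing with Node's `require`; a real loader also resolves anonymous modules and loads files asynchronously):

```javascript
// Toy AMD-style module registry: NOT a real loader, just the syntax shape.
const registry = {};

// define(name, dependencies, factory) -- the core AMD signature.
function define(name, deps, factory) {
  registry[name] = { deps: deps, factory: factory, exports: null };
}

// Resolve a module, instantiating its dependency tree first.
function requireMod(name) {
  const mod = registry[name];
  if (!mod.exports) {
    mod.exports = mod.factory.apply(null, mod.deps.map(requireMod));
  }
  return mod.exports;
}

// A leaf module with no dependencies...
define('math', [], function () {
  return { add: function (a, b) { return a + b; } };
});

// ...and a module with a nested dependency on it.
define('app', ['math'], function (math) {
  return { sum: math.add(2, 3) };
});

console.log(requireMod('app').sum); // 5
```

The factory function is the "wrapper" that defers execution, and the dependency array is the part everyone argues names and punctuation over.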

It is very easy to think up code syntax, particularly for modules, and I can appreciate everyone has their own personal preferences. However, AMD has a fairly deep history of doing the hard work of getting some kind of agreement, and more importantly real code and deployment to back it up. I feel the onus is on others now to really be very crisp and clear on why AMD is not a good fit (this ticket is not the place for it, feel free to contact me off-list, or use the amd-implement list).

But I appreciate @SlexAxton's view. Standardizing on an AMD approach for HTML boilerplate could be premature, and I am completely fine with that. If the boilerplate project decides it does want to pick one, AMD is a strong choice that fits a wide spectrum of JS development.

geddski commented 12 years ago

@SlexAxton I'm with you. My own code is AMD all the way. While I wish everyone wrote modules instead of scripts, luckily RequireJS can load plain scripts as well as modules.

If you're referring to Yehuda's handlebars.js templating, those work extremely well with RequireJS. Especially if you write a plugin that compiles/caches the template and returns its template function.

define(['tmpl!navigation.html'], function (nav) {
  $('body').append(nav(data));
});

I disagree with this statement however:

While I'd love AMD to be the default, it's probably not wise from a multi-library/newb standpoint, as much as I wish it was.

Newbs need the clean structure that AMD provides even more than a seasoned developer, as they are more prone to global variable collisions, terrible code organization that leads to huge messy JS files that nobody wants to touch for fear of having to deal with merge conflicts, etc. Libraries benefit from modules enormously, which is why upcoming Dojo 1.7 and Mootools 2.0 are moving to AMD. I hope jQuery gets on board - one of its biggest complaints is that it's "all or nothing". You can't use its excellent DOM manipulation without also loading its animation, ajax, events, etc. onto the page as well. So yeah, AMD is a win-win. If HTML5 Boilerplate wants to point people to best practices, it would be a shame to leave out AMD. It elegantly solves so many of JavaScript's problems.

SlexAxton commented 12 years ago

To be clear. I agree. I wish they used require all the way.

I just don't think they will.

artzstudio commented 12 years ago

I don't think people realize yet that AMD is a buzzword, a "thing" every serious developer needs to know about. Once they do, they will want to tell their bosses and future interviewers that they know about it and use it.

If we all do our part and say "see, it's easy, and better, and important" and make it a buzzword, the herds will follow for the sake of their careers.

getify commented 12 years ago

@jashkenas--

The first is if script tags in markup can be as good (or better) than dynamic script elements used in a parallel script loader.

We are in complete agreement on this issue -- it doesn't matter.

Actually, I started my participation in this thread assuming that everyone agreed that dynamic script element loading was going to lead to better performance than script tags. But both @paulirish and @slexaxton have called that assumption into question in this thread.

@paulirish has suggested that defer is a sufficient way to make the plain ol' script tag as good as (or better than) the dynamic script element loading alternative. I disagree that defer is sufficient, and I've now established several reasons why.

So, I think it IS valid for us to have examined the first question, and explored if defer was better than script loaders. There may be a few limited cases where you can get away with defer, but as far as the generalized case, script loaders handle/normalize all the quirks, whereas defer exposes you to those problems.

I'm still not sure that everyone sees or agrees with why defer is not sufficient.

To repeat myself, here's a (fair) test case. The same 5 real-world scripts, loading on to a medium-sized page with other assets present, one using LABjs best practices to ensure load order, and the other using a single concatenated script:

This is your (and others') false testing premise. I never ever ever ever claimed that loading 5 scripts instead of 1 was going to be faster. Never. Ever. Can I be any more clear? The premise has never been 5 vs. 1.

The first test was to test 3 script tags vs 3 script() calls, because that's a fair test. And I think the video and the tests illustrate that script loading, in THAT scenario, is beneficial.

getify commented 12 years ago

The second question, which is much more complex to test, is whether there's any way to improve on the performance of a site that is already loading all its JS in one file. Most people say that it's impossible to improve on that. I disagree.

NOTE: the reason this question is orthogonal is that you can load this single concat file either with a script tag, or by using document.createElement("script") type dynamic loading. Either way, the question of a single concat file is a valid question, but separate from whether script tags or dynamic script loading are better.

What you have heard me say several times in this thread, and also in many other contexts (including all my conference speaking on the topic, blog posts, etc), is that I think it's possible that you could improve on the single JS file concat approach by "chunking" (that is splitting the big concat file) into 2 or 3 chunks (at most). If the chunks are of ~equal size, and are loaded in parallel, then it's possible that the page will load faster, even with the extra HTTP overhead, because of connection "Keep-Alive", parallel loading effect, etc.

In fact I was writing about this topic a LONG time ago, way back in Nov 2009, shortly after LABjs' first release: http://blog.getify.com/2009/11/labjs-why-not-just-concat/

In that blog post, and ever since then, I've said that IF you are in a position (not everyone is... in fact, most of the web isn't) to use build-processes to concat, you should do so. Period. Always. Always concat files from 10-20 local files down to much fewer.

BUT, I also say that once you have that single concat file, it might also be beneficial to try and load your single file in 2-3 chunks, loaded in parallel (using a script loader).

Why might this be better? I lined it out in that blog post, but in short:

  1. parallel loading effect is real. ask bit-torrent users about this. the HTTP overhead is also real, and acts to counteract, and can eliminate, that benefit. But that doesn't mean it's impossible to benefit. Using connection Keep-Alive, it's possible you can get 2 or 3 simultaneous connections (without 2-3 full connection overhead penalties) and load your code in a shorter amount of time. Will it be 1/3 the time (60-70% faster) if you load it in 3 chunks? No. Absolutely not. But it may be 20-30% faster.
  2. Serving all your code in a single file prevents you from using different cache headers for code with different lifetimes. For instance, jQuery is very stable and rarely needs to be re-downloaded, but your UX-centric code may be very volatile (you may tweak it once per week or more). Doing short caching headers on the single concat file is stupid, because it forces more frequent re-downloads of stable code unnecessarily. Doing long caching headers on the single concat file is also stupid, because it forces you to invalidate the cached file (cache-bust param, etc.) and force a full re-download of the entire file when you tweak a single byte of your more volatile code. So, chunking your big concat file into 2 chunks, one for the stable code and one for the volatile code, allows you to set different caching headers for each chunk. This makes more effective use of the cache, and leads to potentially better performance over time, as users make repeat visits to your site.
  3. Studies have shown that on average, a single page-view uses far less than 100% of the JS that gets loaded on the page (some estimates put it around 20-30% of the code). Loading all your code in one shot, all at once, at the beginning of page load, is congesting the line unnecessarily to push 70-80% of the file that is not needed then (and may "never" be needed). If you have your code in 2 chunks (one that is the more critical code and another that is less critical code), and you load the first chunk right away, and load the second chunk a few seconds after page load, you can free up the pipe for the much more important images/css and content. In essence, chunking allows you to prioritize your code.
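Point 2 can be sketched as a simple build step (all file and path names here are hypothetical, and the Cache-Control values in the comments are illustrative):

```shell
# Sketch: split one big bundle into a stable chunk and a volatile chunk.
# Dummy source files stand in for real libraries and site code.
mkdir -p src build
printf '/* jquery  */\n'  > src/jquery.js
printf '/* plugins */\n'  > src/plugins.js
printf '/* sitecode */\n' > src/site.js

# Stable chunk: rarely changes -> serve with a long Cache-Control max-age.
cat src/jquery.js src/plugins.js > build/stable.js

# Volatile chunk: tweaked weekly -> serve with a short Cache-Control max-age.
cat src/site.js > build/volatile.js
```

The two resulting chunks can then be loaded in parallel by a script loader, and a tweak to site code only invalidates the small volatile file.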

Bottom line... on the topic of concat vs. parallel... I always tell people: both. Not one or the other.

geddski commented 12 years ago

@getify well said.

aaronpeters commented 12 years ago

Kyle's LABjs has my support. As a consultant helping sites improve performance, I have seen LABjs work well many times. Not only did it improve performance significantly (not just 100 ms, but 1+ sec), but the developers also liked it. Easy to understand, easy to implement.

And I will take this opportunity to publicly say "Thank you Kyle, for the great support on LABjs. You've exceeded my expectations several times."

kornelski commented 12 years ago

Using connection Keep-Alive, it's possible you can get 2 or 3 simultaneous connections (without 2-3 full connection overhead penalties)

HTTP doesn't mux/interleave responses, so you can't have parallel downloads without opening multiple connections first. The ideal case of a persistent, pipelined connection is equivalent to a contiguous download of a single file (plus a few headers).

getify commented 12 years ago

@pornel--

I have seen first-hand and validated that browsers can open up multiple connections in parallel to a single server, where with Connection Keep-Alive in play, the overhead for the second and third connections is drastically less than for the first. That is the effect I'm talking about.

jashkenas commented 12 years ago

@getify Fantastic, I think we've reached some sort of consensus. To refresh your memory:

I can anticipate a counterargument about loading your scripts in bits and pieces ... but that's entirely orthogonal to the script loading technique, so please, leave it out of the discussion.

Yes, I agree that loading your volatile scripts in a different JS file than your permanent scripts is great. Loading the script that is only needed for a specific page, only on that specific page, is similarly great.

So if I'm a web developer and I've got a page with a bunch of JavaScripts, what should I do? Use LABjs, or concatenate my permanent scripts into one file, and my volatile scripts into another, and load both at the bottom of the body tag with <script defer="true">?
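For reference, that alternative looks like this (a sketch with hypothetical file names; defer is a boolean attribute, so the bare form is sufficient):

```html
<!-- Two concatenated bundles at the end of <body>: one for stable,
     long-cached code, one for volatile site code. defer preserves
     execution order between the two. -->
<script src="/js/stable.min.js" defer></script>
<script src="/js/volatile.min.js" defer></script>
```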

Why should I subject my app to caching headaches, browser incompatibilities, race-against-the-images-on-the-page, and the rest of the trouble that a script loader brings along?

If the entire premise of using a script loader for performance is that it's easier and simpler than using two script tags ... I've got a bridge in Brooklyn to sell you.