h5bp / html5-boilerplate

A professional front-end template for building fast, robust, and adaptable web apps or sites.
https://html5boilerplate.com/
MIT License

script loading solution #28

Closed paulirish closed 12 years ago

paulirish commented 13 years ago



This issue thread is now closed.

It was fun, but the conversations have moved elsewhere for now. Thanks

In appreciation of the funtimes we had, @rmurphey made us a happy word cloud of the thread.

Enjoy.





via labjs or require.

my "boilerplate" load.js file has LABjs inlined in it, and then uses it to load jquery, GA, and one site js file. if it helps, I have an integrated RequireJS+jQuery in one file: http://bit.ly/dAiqEG ;)

also how does this play into the expectation of a build script that concatenates and minifies all script? should script loading be an option?
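
For reference, a minimal sketch of what an inlined-loader chain like the one described above might look like, assuming LABjs' $LAB API (the file names, CDN URL, and jQuery version are illustrative, not the actual load.js):

$LAB
  .script("//ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js")
  .wait()                      // make sure jQuery has executed before what follows
  .script("js/ga.js")          // these download in parallel with each other
  .script("js/site.js")
  .wait(function () {
    // everything above has loaded and executed by this point
  });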

paulirish commented 13 years ago

kyle: "@paul_irish i don't agree. http://bit.ly/9IfMMN cacheability (external CDN's), parallel downloading, script change-volatility..."

paulirish commented 13 years ago

james burke: "@paul_irish @fearphage @getify RequireJS has build tool to do script bundling/minifying, so can have best of both: dynamic and prebuilt"

3rd-Eden commented 13 years ago

The easiest way for developers to get started with script loading would probably be LABjs, because it already uses a chaining syntax that a lot of jQuery users are familiar with.

If they are building big enterprise apps they can always migrate to require.js if needed.

chuanxshi commented 13 years ago

currently there are three main script loading techniques:

  1. HeadJS
  2. ControlJS
  3. LABjs

whether to use one at all, and which one to use, is kinda debatable: http://blog.getify.com/2010/12/on-script-loaders/

X4 commented 13 years ago

There are also RequireJS and EnhanceJS, just to mention alternatives to HeadJS, ControlJS, and LABjs. Even Yahoo and Google offer something similar.

scottwade commented 13 years ago

With the release of jQuery 1.5 and deferreds (http://www.erichynds.com/jquery/using-deferreds-in-jquery/), Boris Moore is utilizing them in DeferJS, a new script loader project: https://github.com/BorisMoore/DeferJS

stereobooster commented 13 years ago

By default, script loading blocks all other downloads, so downloading Modernizr in the head is bad. Inlining the loader makes sense, because loaders can download scripts in parallel and in a non-blocking mode. For example, if you do not need all of Modernizr's features, you can inline head.min.js, which is only 6 KB, or a custom build of Modernizr (http://modernizr.github.com/Modernizr/2.0-beta/). Inlining CSS sometimes makes sense too. Google uses inlining: they inline CSS, JS, and empty 1x1 GIFs via data URIs.

peterbraden commented 13 years ago

LABjs is becoming pretty widely used and is a good solution; it can also be included asynchronously, so it doesn't need to block.

http://blog.getify.com/2010/12/on-script-loaders/ is by the author

paulirish commented 13 years ago

http://yepnopejs.com/ just went 1.0 and doesn't break in new webkit, unlike LAB and head.js. Script loading is hard.

yepnope is also integrated into Modernizr as Modernizr.load.. http://modernizr.github.com/Modernizr/2.0-beta/

So we'll probably have a script loader in h5bp by way of Modernizr.load pretty soon.

I don't think it'll make 1.0, but once I take Modernizr up to 1.8 we'll toss that into h5bp 1.1. Yeeeah

hokapoka commented 13 years ago

Hi Paul

I'm porting an existing site to use your H5BP and I want to use the yepnope.js script loader. It's really nice to see all the bits and bobs put together as you have done.

What would you recommend using at the moment?

  1. Include yepnope.js along with modernizr.js at the top of the page.
  2. Include it at the bottom of the page, to load after the HTML has finished loading.
  3. Use the beta version of modernizr.js.
  4. Concatenate yepnope.js with modernizr.js into one include.

Regardless of how best to include it, how do you recommend loading the scripts with yepnope.js?

I figure we should be doing it around here : https://github.com/paulirish/html5-boilerplate/blob/master/index.html#L52 and use yepnope to load the CDN / Local copy of jQuery and our other scripts.

But, do you think it's best to use an external script include or render a script block within the html, which then loads the scripts via yepnope.js?

Many thanks.

Andy

hokapoka commented 13 years ago

Oh and another thing.

As yepnope can load CSS as well, I would say it's best to include the main CSS as you normally would and use yepnope only to include CSS for specific fixes.

For example including some css that is only applied to older versions of IE.
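
As an illustration of that idea, here's a hedged sketch using Modernizr.load (i.e. yepnope bundled into the Modernizr beta), with a feature test standing in for "older IE"; the stylesheet name and the choice of tests are assumptions, not part of h5bp:

Modernizr.load({
  test: Modernizr.borderradius && Modernizr.boxshadow,
  // if these features are missing (e.g. old IE), pull in a fixes stylesheet
  nope: "css/oldie-fixes.css"
});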

paulirish commented 13 years ago

hokapoka,

Use the beta version of Modernizr: just include what you need (and include Modernizr.load()), and then put that at the top of the page.

the actual code for the jquery fallback with yepnope is on http://yepnopejs.com/
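
That fallback is essentially a CDN load with a local-copy check; a sketch of the idea (the local path and version are illustrative, see yepnopejs.com for the canonical snippet):

yepnope({
  load: "//ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js",
  complete: function () {
    if (!window.jQuery) {
      // the CDN copy failed to load (or was blocked); fall back to a local copy
      yepnope("js/libs/jquery-1.6.2.min.js");
    }
  }
});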

And yes, I like your idea of conditionally loading the IE CSS.

paulirish commented 12 years ago

tbh there is too much blind faith around script loaders wrt performance, and I don't think we're ready to say THIS IS THE RIGHT WAY.

we need more research around file sizes, bandwidth, and network conditions to support smart recommendations on script loading, but right now the field is nascent and we'd be naive to recommend a blanket script-loading solution.

so.

closing this ticket and asking anyone who cares to do the comprehensive research and publishing required to make it easier for developers to make a smart choice about this one

getify commented 12 years ago

I have done quite a bit of research on concat vs. parallel load. I still, without reservation, recommend combining all JS into one file first, then chunking it into 2-3 roughly equal-sized chunks, and loading those in parallel.
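
A rough sketch of that recommendation using LABjs (the chunk names are hypothetical; the point is parallel download with execution order preserved across the chunks):

$LAB
  .script("js/chunk-1.js").wait()   // all three chunks start downloading immediately,
  .script("js/chunk-2.js").wait()   // but execute strictly in order
  .script("js/chunk-3.js");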

I'd love to be able to take my research and make it widespread and at scale, so that it would be viable as "fact" in this area. The problem is I've tried and tried to find hosting bandwidth that won't cost me lots of $$ to actually run the tests at scale, and I have failed to find that hosting yet.

If I/we can solve the bandwidth issue for testing, I have the tests that can be run to find out if the theory of parallel loading is in fact viable (as I believe it is).

paulirish commented 12 years ago

@getify what do you need as far as a testing rig?

SlexAxton commented 12 years ago

I can do about 1.5TB more data out of my personal server than I'm currently using. I have Nginx installed and that can handle somewhere around 4 trillion quadrillion hits per microsecond. I don't feel like the technology is the barrier here.

If we're worried about locations, we can spoof higher latency, and/or find a couple other people with a little extra room on their boxes.

getify commented 12 years ago

BTW, I take a little bit of issue with "blind faith".

It is easy, provable, and almost without question true that if you have an existing site loading many scripts with script-tags, using a parallel script loader (with no other changes) improves performance. This is true because even the newest browsers cannot (and never will, I don't think) unpin script loading from blocking DOM-ready. So even in best case browser loading, if there's no other benefit, drastically speeding up DOM-ready on a site is pretty much always a win (for users and UX).

Your statement is a little bit of a false premise because it assumes that we're trying to compare, for every site, parallel-loading to script-concat. Most sites on the web don't/can't actually use script-concat, so really the comparison (for them, the majority) is not quite as nuanced and complicated as you assume. If they don't/can't use script-concat (for whatever reason), the comparison is simple: parallel-loading is almost always a win over script tags.

If they are open to script-concat (or already use it), then yes, it does get a bit more nuanced/complicated to decide if parallel-loading could help or not. But script-concat is not a one-size-fits-all silver bullet solution either, so there's plenty of sites for whom parallel-loading will remain the preferred and best approach.

Just because some sites deal with the nuances/complexities of deciding between parallel-loading vs. script-concat doesn't mean that the greater (more impactful) discussion of parallel-loading vs. script tags should be lost in the mix. The former is hard to prove, but the latter is almost a given at this point.


All this is to say that, all things considered, IMHO a boilerplate should be encouraging a pattern which has the biggest impact in a positive direction. If 80% of sites on the internet today use script tags, most of which would benefit from moving from script tags to parallel-loading, then parallel-loading is a very healthy thing to suggest as a starting point for the boilerplate.

It's a much smaller (but important) subsection of those sites which can potentially get even more benefit from exploring script-concat vs. parallel-loading. But a minority use-case isn't what should be optimized for in a boilerplate.

Just my few cents.

getify commented 12 years ago

@paulirish @slexaxton --

As far as bandwidth needs, I estimated that to get 10,000 people (what I felt was needed to be an accurate sampling) to run the test once (and many people would run it several times, I'm sure), it would be about 200GB of bandwidth spent. For some people, that's a drop in the bucket. For me, 200GB of bandwidth in a few days time would be overwhelming to my server hosting costs. So, I haven't pursued scaling the tests on that reason alone.

Moreover, I have more than a dozen variations of this test that I think we need to explore. So, dozens of times of using 100-200GB of bandwidth each would be quite cost prohibitive for me to foot the bill on. I didn't want to start down that road unless I was sure that I had enough bandwidth to finish the task.

They're just static files, and the tests don't require lots of concurrent users, so there's no real concerns about traditional scaling issues like CPU, etc. Just bandwidth, that's all.

We can take the rest of the discussion of the tests offline and pursue it over email or IM. I would very much like to finally scale the tests and "settle" this issue. It's been hanging around the back of my brain for the better part of a year now.

paulirish commented 12 years ago

I can do unlimited TB on my dreamhost VPS so this won't be a problem. right now i'm doing 72gb/day and can handle way more. :)

SlexAxton commented 12 years ago

I agree with paul, and think there is quite a bit of misinformation about how and when script-loaders are going to be of any benefit to anyone.

Your first paragraph says it's 'easy', 'provable' and 'without question' that script loaders improve performance.

I made a similar postulation to @jashkenas a while back, and he and I put together some identical pages as best we could to try and measure performance of our best techniques. He's a fan of 100% concat, and I tried 2 different script loading techniques.

https://github.com/SlexAxton/AssetRace

The code is all there. Obviously there wasn't a huge testing audience, but the results at best showed that this script loader was about the same speed as the concat method (with your similar-sized 3-file parallel-load guidelines followed), and at worst showed that script loaders varied much more and were generally slower, within a margin of error. Feel free to fork and find a solution that beats one or both of ours, even if it's just on your machine in one browser.

As for the "false premise" because h5bp assumes that people concat their js. This argument is entirely invalid because h5bp offers a script build tool, complete with concat and minification. So the argument that parallel-loading is almost always a win over multiple script tags may be true, but it's not better than what h5bp offers currently. That is the context of this discussion.

I think the worst case scenario are people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading (of their 19 JS and 34 CSS files), as well as introduce a slew of backwards and forwards compatibility issues that they'll be completely unaware of.

I think in the spirit of giving people the most sensible and performant and compatible default for a boilerplate, a build tool goes a lot further to ensure all three.

getify commented 12 years ago

@slexaxton

... the results at best showed that this script-loader was about the same speed as the concat method (with your similar sized 3 file parallel load guidelines followed)...

I'll happily find some time to take a look at the tests you put together. I'm sure you guys know what you're doing so I'm sure your tests are valid and correct.

OTOH, I have lots of contradictory evidence. If I had ever seen anything compelling to suggest that parallel script loading was a waste or unhelpful to the majority of sites, I would have long ago abandoned the crazy time sink that is LABjs.

I can say with 100% certainty that I have never, in 2 years of helping put LABjs out there for people, found a situation where LABjs was slower than the script tag alternative. Zero times has that ever occurred to me. There've been a few times that people said they didn't see much benefit. There've been a few times where people were loading 100+ files, and the crazy overhead of that many connections wiped out any benefits they might otherwise have seen. But I've never once had someone tell me that LABjs made their site slower.

I have literally myself helped 50+ different sites move from script tags to LABjs, and without fail sites saw performance improvements right off the bat. Early on in the efforts, I took a sampling of maybe 7 or 8 sites that I had helped, and they had collectively seen an average of about 15% improvement in loading speed. For the 4 or 5 sites that I manage, I of course implemented LABjs, and immediately saw as much as 3x loading speed.

Of course, when LABjs was first put out there, it was state-of-the-art for browsers to load scripts in parallel (only a few were doing that). So the gains were huge and visible then. Now, we have almost all browsers doing parallel loading, so the gains aren't so drastic anymore.

But the one thing that is undeniable is that browsers all block the DOM-ready event for loading of script tags. They have to because of the possibility of finding document.write(). Parallel script loading is essentially saying "browser, i promise you won't have to deal with document.write, so go ahead and move forward with the page".

Take a look at the two diagrams on slide 10 of this deck:

http://www.slideshare.net/shadedecho/the-once-and-future-script-loader-v2

Compare the placement of the blue line (DOM-ready). That's a drastic improvement in perceived performance (UX), even if overall page-load time (or time to finish all assets loading) isn't any better.

...h5bp offers a script build tool...

The faulty assumption here is that just because h5bp offers this tool, that all (or even most) users of h5bp can use it. Even if 100% of the users of h5bp do use it, that doesn't mean that if h5bp were rolled out to the long-tail of the internet, that all of them would use that concat tool. There are a bunch of other factors that can easily prevent someone from using that. There are very few reasons why someone can't move from using script tags to using a parallel script loader.

As such, parallel script loading still offers a broader appeal to the long-tail of the internet. It is still easier for the majority of sites that do not use script loading optimizations to move from nothing to something, and that something offers them performance wins. Few of those long-tail sites will ever spend the effort on (or have the skill to experiment with) automated script build tools in their cheap $6/mo, mass shared hosting, non-CDN'd web hosting environments.

I think the worst case scenario are people taking something like yepnope or lab.js and using it as a script tag polyfill. That's absolutely going to result in slower loading...

I could not disagree with this statement more. LABjs is specifically designed as a script tag polyfill. And the improvements of LABjs over regular script tags (ignore script concat for the time being) are well established and have never been seriously refuted. If you have proof that most (or even a lot of) sites out there using LABjs would be better off going back to script tags, please do share.

There is absolutely no reason why parallel script loading is going to result in slower loading than what the browser could accomplish with script tags. That makes no sense. And as I established above, script tags will always block DOM-ready, where parallel script loading will not.

introduce a slew of backwards and forwards compatibility issues that they'll be completely unaware of.

What compatibility issues are you talking about? LABjs' browser support matrix has absolutely the vast majority of every web browser on the planet covered. The crazy small sliver of browsers it breaks in is far outweighed by the large number of browsers it has clear benefits in.

LABjs 1.x had a bunch of crazy hacks in it, like cache-preloading, which indeed were major concerns for breakage with browsers. LABjs 2.x has flipped that completely upside down, and now uses reliable and standardized approaches for parallel loading in all cases, only falling back to the hack for the older webkit browser. In addition, LABjs 2.x already has checks in it for feature-tests of coming-soon script loading techniques (hopefully soon to be standardized) like "real preloading".
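
For context, the standards-based detection being referred to is roughly along these lines (a simplified sketch, not LABjs' actual source):

var testScript = document.createElement("script");

// Browsers implementing the spec'd behavior report async === true by default on
// dynamically created scripts; setting it back to false then opts into "ordered
// async" (parallel download, in-order execution) without any hacks. Only when
// this is missing would a loader fall back to cache preloading for older WebKit.
var supportsOrderedAsync = (testScript.async === true);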

I can't speak definitively for any other script loaders -- I know many still use hacks -- but as for LABjs, I'm bewildered by the claim that it introduces forward or backward compatibility issues, as I think this is patently a misleading claim.

getify commented 12 years ago

to elaborate slightly on why i intend for LABjs to in fact be a script tag polyfill...

  1. older browsers are clearly WAY inferior at loading script tags compared to what parallel loading can achieve. it was in those "older browsers" (which were the latest/best when LABjs launched 2 years ago) that we saw the ~3x page-load time improvements. almost by definition, that makes LABjs a better script tag polyfill, since it brings a feature (i.e., the performance of parallel loading) to browsers which don't support it themselves.
  2. newer browsers are obviously a lot better, but they haven't completely obviated the benefits of script loaders. chrome as recently as v12 (it seems they finally fixed it in v13) was still blocking image loads while script tags finished loading. even with the latest from IE, Firefox and Chrome, they all still block DOM-ready while script tags are loading, because they all still have to pessimistically assume that document.write() may be lurking.

So, for the newer browsers, LABjs is a "polyfill" in the sense that it's bringing "non-DOM-ready-blocking script loading" to the browser in a way that script tags cannot do. The only possible way you could approach doing that in modern browsers without a parallel script loader would be to use script tags with defer (async obviously won't work since it doesn't preserve order). However, defer has a number of quirks to it, and its support is not widespread enough to be a viable solution (the fallback for non-defer is bad performance). So you could say that, in the very most basic case, LABjs is a polyfill for the performance characteristics of script tag defer (although not exactly).

jaubourg commented 12 years ago

Honestly, I still think we should petition the standards bodies for a script loading object. Having to create a script tag with a type other than text/javascript to trigger the cache (or worse, use an object tag or an image object or whatever a new version of a popular browser will require) is jumping through a lot of hoops for nothing, and performance will vary depending on too many variables. I can understand that we still load stylesheets using DOM node insertion (but that's only because of order), but when it comes to scripts, I think it doesn't make sense at all anymore (I wish Google would stop using document.write in most of their scripts, but that's another story entirely).

Also, I think we're missing the biggest point regarding script loaders here: being able to load JS code on demand rather than loading everything up front (even with everything in cache, parsing and initializing takes time, and it can get pretty ugly with a non-trivial amount of concatenated scripts). Having some wait time after a UI interaction is much less of a problem than having the browser "hang" even a little at start-up (the DOM may be ready all right, but what good is it if the code to enhance the page and add interaction hasn't been executed yet: ever noticed how some sites load immediately and then something clunky occurs?).

So strict performance measurement is all fine and dandy, but I still think perceived performance is the ultimate goal... and is sadly far less easy to estimate/optimize/compute.

mason-stewart commented 12 years ago

This is intense.

getify commented 12 years ago

@jaubourg--

Honestly, I still think we should petition standards for a script loading object.

There is much petitioning going on regarding how the standards/specs and browsers can give us better script loading tech. First big win in this category in years was the "ordered async" (async=false) that was adopted back in Feb and is now in every major current-release browser (exception: Opera coming very soon, and IE10p2 has it).
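
In practical terms, "ordered async" means dynamically created scripts download in parallel but execute in order; a minimal sketch (the file names are assumed):

["js/vendor/jquery.js", "js/plugins.js", "js/main.js"].forEach(function (src) {
  var s = document.createElement("script");
  s.src = src;
  s.async = false;  // download in parallel, but preserve execution order
  (document.head || document.getElementsByTagName("head")[0]).appendChild(s);
});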

The next debate, which I'm currently in on-going discussions with Ian Hickson about, is what I call "real preloading". In my opinion, "real preloading" (which IE already supports since v4, btw) would be the nearest thing to a "silver bullet" that would solve nearly all script loading scenarios rather trivially. I am still quite optimistic that we'll see something like this standardized.

See this wiki for more info: http://wiki.whatwg.org/wiki/Script_Execution_Control

Having to create a script tag of a different type than text/javascript to trigger the cache (or worse, use an object tag or an image object or whatever a new version of a popular browser will require)

This is called "cache preloading", and it's an admitted ugly and horrible hack. LABjs way de-emphasizes this now as of v2 (only uses it as a fallback for older webkit). Other script loaders unfortunately still use it as their primary loading mechanism. But 90% of the need for "cache preloading" can be solved with "ordered async", which is standardized and isn't a hack, so well-behaved script loaders should be preferring that over "cache preloading" now.

So, I agree that "cache preloading" sucks, but there's much better ways to use document.createElement("script") which don't involve such hacks, so I disagree that this is an argument against continuing to rely on the browser Script element for script loading. If we can get "real preloading", the Script element will be everything we need it to be. I honestly believe that.

I think we're missing the biggest point regarding script loaders here: to be able to load js code on-demand

Very much agree that's an important benefit that script loaders bring. But it's sort of a moot argument in this thread, because the "script concat" folks simply cannot, without script loading, solve the use-case, so it makes no sense to "compare" the two. You can say as a "script concat" proponent "fine, we don't care about that use case", but you can't say "we can serve that use-case better using XYZ".

Perceived performance is huge and important, I agree. On-demand loading is a huge part of making that happen. On-demand loading will also improve real actual performance (not just perception) because it tends to lead to less actually being downloaded if you only download what's needed (few page visits require 100% of the code you've written).

Perceived performance is also why I advocate the DOM-ready argument above. Because how quickly a user "feels" like they can interact with a page is very important to how quick they think the page is (regardless of how fast it really loaded). That's a fact established by lots of user research.

aaronpeters commented 12 years ago

Gotta love the passionate, long comments by @getify Kyle ...

If I can contribute in any way to the research, I would love to. Bandwidth (costs) doesn't seem to be the problem, so @getify, what do you propose on moving forward? Do not hesitate to contact me via email (aaron [at] aaronpeters [dot] or twitter (@aaronpeters)

jaubourg commented 12 years ago

@kyle

Yep, I followed the script tag "enhancements" discussion regarding preloading, and I just don't buy the "add yet another attribute on the script tag" approach as viable. I've seen what it did to the xhr spec: a lot of complexity relative to the little benefit we get in the end.

What's clear is that we pretty much only need the preloading behaviour when doing dynamic insertion (ie. doing so in javascript already) so why on earth should we still use script tag injection? It's not like we keep the tag there or use it as a DOM node: it's just a means to an end that has nothing to do with document structure.

I'd be much more comfortable with something along those lines:

window.loadScript( url, function( scriptObject ) {
    if ( !scriptObject.error ) {
        scriptObject.run();
    }
});

This would do wonders. It's easy enough to "join" multiple script loading events and then run those scripts in whatever order is necessary. It also doesn't imply the presence of a DOM, which makes it even more generic. I wish we would get away from script tag injection altogether asap. Besides, it's easy enough to polyfill this using the tricks we all know. It's also far less of a burden than a complete require system (but it can be a building brick for a require system that is then not limited to browsers).

That being said, I agree 100% with you on perceived performance, I just wanted to point it out because the "let's compact it all together" mantra is quickly becoming some kind of belief that blurs things far too much for my taste ;)

paulirish commented 12 years ago

fwiw, defer is supported in IE4+, Chrome, Safari, and FF 3.5+. Not supported in Opera.

So that means.... 98.5% of users have script@defer support already.

paulirish commented 12 years ago

@getify

However, defer has a number of quirks to it,

details plz? i haven't seen anything about this

aaronpeters commented 12 years ago

Do scripts with defer execute before or after DOM ready event fires?

Is execution order preserved in all browsers?

How about exec order and coupling external with inline scripts?

getify commented 12 years ago

@paulirish--

...98.5% of users have script@defer support already.

support may be there in that many browsers, but that doesn't mean it's reliable in that many browsers. that's what i meant. (see below)

However, defer has a number of quirks to it,

details plz? i haven't seen anything about this

Lemme see... IIRC:

  1. support of defer on dynamic script elements isn't defined or supported in any browser... only works for script tags in the markup. this means it's completely useless for the "on-demand" or "lazy-loading" techniques and use-cases.
  2. i believe there was a case where in some browsers defer'd scripts would start executing immediately before DOM-ready was to fire, and in others, it happened immediately after DOM-ready fired. Will need to do more digging for more specifics on that.
  3. defer used on a script tag referencing an external resource behaved differently than defer specified on a script tag with inline code in it. That is, it couldn't be guaranteed to work to defer both types of scripts and have them still run in the correct order.
  4. defer on a script tag written out by a document.write() statement differed from a script tag in markup with defer.

I don't have a ton of details ready at my fingertips on these issues. I recall about 2 years ago (before LABjs) trying to use defer, and running into enough of them in cross-browser testing that I basically set it aside and haven't really re-visited it much since.


I should also point out that defer is not really the same thing as what LABjs (and other parallel loaders) provide. I said that above with the caveat that it's only sorta like it. In fact, what parallel script loading provides (at least, for LABjs' part), is "ordered async", which has absolutely no way to be achieved only through markup.

The difference between "ordered async" and "defer" is that "ordered async" will still start executing as soon as the first requested script finishes loading, whereas "defer" will wait until DOM-ready before starting execution. For a simple page with little markup and no other blocking markup calls (like other script tags), this difference is small. But for a page with lots of resources, when scripts are allowed to start executing can differ drastically.

So, I'd honestly like to not get too much off on the tangent of defer, because in reality it's not a great comparison to what parallel script loading provides. It was just the closest example in markup-only that I could use to describe the execution ordered behavior I was getting at. I probably shouldn't have even brought defer up -- just muddies the discussion.

Let me just rephrase from above: "For modern browsers, LABjs is a kind of 'polyfill' for 'ordered async' behavior, which is not possible to opt for in markup-only in any browser."

aaronpeters commented 12 years ago

I like "ordered async", that's a good phrase.

Kyle > afaik, scripts with defer will execute before onload, even before domready. Scripts with async attribute will execute asap, and always before onload, but not necessarily before domready

getify commented 12 years ago

@aaronpeters-- I think you may be slightly off track. Here's how I understand it:

async scripts (whether in markup or dynamically created) will execute ASAP, meaning any time before or after DOM-ready. In other words, async scripts should wait on nothing (except the JS engine availability itself). However, if they are requested before window.onload, then in almost all browsers they will "hold up" the window.onload event until they load and execute. I think there was a documented case where the async scripts didn't hold up window.onload, just not remembering the exact details.

defer on the other hand specifically means: wait until after DOM-ready. Moreover, there's a "queue" of all scripts with defer set on them, such that the queue is not processed until after DOM-ready. This means they should all execute strictly after DOM-ready (or, rather, after the DOM is ready and finished parsing, to be exact). But they may be delayed even further (if loading is going slowly). They should hold up window.onload, though. I just recall from vague past memory that in some versions of IE the actual practice of this theory was a bit fuzzy.

jaubourg commented 12 years ago

@getify

Didn't want to derail this thread even more so I posted my thought on script preloading and your proposal on the WHATWG page here: http://jaubourg.net/driving-a-nail-with-a-screwdriver-the-way-web

mathiasbynens commented 12 years ago

async scripts (whether in markup or dynamically created) will execute ASAP, meaning any time before or after DOM-ready. In other words, async scripts should wait on nothing (except the JS engine availability itself). However, if they are requested before window.onload, then in almost all browsers they will "hold up" the window.onload event until they load and execute.

This is probably easier to understand once you realize JavaScript is single threaded. (I know it took me a while…)

Similarly, if you use setTimeout(fn, 0) to download resources, and they enter the download queue before onload fires, then loading these resources will (still) delay onload.
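
A tiny illustration of that point (the script URL is a placeholder): even when the insertion is deferred with setTimeout, a request that enters the queue before onload still delays the load event.

setTimeout(function () {
  var s = document.createElement("script");
  s.src = "js/non-critical.js";   // placeholder; any resource requested pre-onload behaves this way
  document.getElementsByTagName("head")[0].appendChild(s);
}, 0);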

I think there was a documented case where the async scripts didn't hold up window.onload, just not remembering the exact details.

I’d love to get more info on this. Please remember! :)

artzstudio commented 12 years ago

Yay script loaders!

A problem I have had implementing them across AOL's network of sites is dealing with race conditions. For example, loading jQuery asynchronously in the head, and then, say, a jQuery plugin delivered asynchronously midway down the document inside a blog post.

Thusly, I started my own script loader science project (Boot.getJS) to deal with this. The idea is to download all scripts in parallel and execute them in order, no matter what, as soon as possible. It also supports deferring to ready or load, and caching of scripts. Most ideas are borrowed (stolen) from people on this thread, so thanks guys. :)

Since you were discussing benchmarks I figured I'd share a test page I created to understand differences in performance, syntax and behavior of the various script loaders out there, check it out here:

http://artzstudio.com/files/Boot/test/benchmarks/script.html

To see how various loaders behave, clear cache, and watch the network requests and the final time as well as the order that the scripts execute in.

aaronpeters commented 12 years ago

Dave (@artzstudio), txs for sharing your thoughts and the link to your test page.

Question: why do you load LABjs on the '<script> tag in head' page? That seems wrong.

aaronpeters commented 12 years ago

@artzstudio also, you are using an old version of LABjs. Is that intentional? If so, why?

artzstudio commented 12 years ago

@aaronpeters At AOL we have scripts like Omniture and ad code (and more) that need to go in the head, so that's where the loader library goes in our use case. Also, when scripts are at the bottom, there's a FOUC issue in some of our widgets, so the sooner dependencies (like jQuery) load, the better.

It was not intentional, this test is a couple months old. I'll update the libraries when I get a chance.

aaronpeters commented 12 years ago

FYI (hope this is somewhat interesting/relevant), I ran a few test on Webpagetest.org to see what happens in IE8 when loading some of the @artzstudio test pages. Script tags: http://www.webpagetest.org/result/110810_C7_752b756180e132f50a3ef065e9e059ca/ Yepnope: http://www.webpagetest.org/result/110810_8S_a53f4ed2e16179c328fc57c572e71076/ LABjs: http://www.webpagetest.org/result/110810_ZV_1ece92044799e52ed5199aed6b407133/ RequireJS: http://www.webpagetest.org/result/110810_Z3_a1537e41a0b0570286151859973d0cfa/

Video comparing Yepnope and LABjs: http://www.webpagetest.org/video/view.php?id=110810_074cb94c1b04a7ac9bd6538ec3fdd8d3c07f842d

Some notes:

For these two reasons I created a video only showing Yepnope and LABjs.

What I find interesting is that the Start Render time is a lot better for LABjs. Why is that? Would love to better understand.

Closing remark: I am not posting this with the objective to favor LABjs over Yepnope or anything like that. Just sharing data ...

artzstudio commented 12 years ago

Oh sorry I see what you meant about the LABjs in <script> test. Fixed now, along with an upgrade to LABjs.

getify commented 12 years ago

@artzstudio--

Thusly, I started my own script loader science project (Boot.getJS) to deal with this. The idea is to download all scripts in parallel and execute them in order no matter what, as soon as possible

So, were you aware that this is exactly what LABjs is designed for and does very well (if I do say so myself)? What I mean is, did you just want a different API, or what about the parallel script loading functionality was not sufficient?


In any case, as much as I love to brag about LABjs, I don't think it's effective to bog down this thread with "look, my script loader is better at X" type of discussions. Those discussions are useful, but elsewhere.

In the end, all script loader technology boils down to a few simple ideas. No matter what kind of fancy API you layer on top of it, or what use-cases you cater to, the tech is the same. There must be 50 different script loaders these days, and really, none of them provide anything different in terms of the tech, just different APIs. So, comparing APIs is really a rather irrelevant discussion to be having.

What we should focus on is: can the base script loading technology we currently have available in browsers be used to improve the performance of page loads compared to just using script tags in markup? This is a long-held premise of mine that it's absolutely true, but that premise has been called into question in this thread. So task #1 is to answer that question.

If we find out that script tags are simply just better than script loading, then we can just stop all this madness and shut down all our projects. I suspect that will not be the case, though. ;-)

Task #2 is to find out once and for all if script-concat is always better than parallel load. Again, my premise (and my testing) show that concat'ing all files into one is good, but then you have to chunk that big file into 2-3 roughly equal pieces and parallel load those chunks. So, we really need to test that theory as well.

If we find out that script-concat is always best, then again, script loaders are still useful when you consider that most sites load scripts from more than one location (jquery from Google CDN, google analytics from google, facebook/twitter/g+ buttons, etc). So we would need to then, as task #3, determine if concat is so much better that you should host your own copies of all those files, concat'ing them together with your own code.

artzstudio commented 12 years ago

Kyle, can you view source of my example and let me know how I would instrument LabJS to execute all scripts on the page in order (even outside the chain)? I could very well have misread the API (as Paul said, script loaders are hard, %-).


artzstudio commented 12 years ago

One would think physics says concat is best. Every new HTTP connection is another slow-start, plus a 100ms tax in worst-case scenarios from the CDN.

However the truth about documents is that they can be very long. So loading one BFJS file in the head may unnecessarily slow down initialization of modules. Loading it at the end can cause annoying FOUC. There may be mobile implications to big files: http://www.yuiblog.com/blog/2010/06/28/mobile-browser-cache-limits/

I think this is the motive behind Souders' "split the payload" rule (http://oreilly.com/server-administration/excerpts/9780596522315/splitting-the-initial-payload.html). We need to do what is perceivably faster too.

And unfortunately what this boils down to is an "it depends" sort of answer, which makes this problem interesting enough to keep us all entertained.

I'm playing around with a hybrid approach where getJS calls are queued up and concatenated periodically at a set time interval, as well as concatenating at the module dependency level (for example, concatenating RequireJS dependencies instead of loading them one at a time), all on the fly on the front end.

It's a science experiment that as you point out, will hopefully be pointless soon, but is interesting nonetheless.

jashkenas commented 12 years ago

@getify: I know that this is pretty much just pissing on the parade at this point, but still.

If we find out that script tags are simply just better than script loading, then we can just stop all this madness and shut down all our projects. I suspect that will not be the case, though. ;-)

I could say a lot of things about snake oil, but a demonstration will work just as well:

http://jashkenas.s3.amazonaws.com/misc/snake-oil/labjs.html

http://jashkenas.s3.amazonaws.com/misc/snake-oil/vanilla.html

That's a page with 100k of text, 10 images, and 171k of JavaScript. The vanilla version uses a single minified file that includes jQuery, jQuery-UI, Underscore, and Backbone, as well as the timer.js file that writes out the load time results. The LABjs version loads each of the 5 (separately minified) JavaScript files using LABjs.

You'll find that not only is there no benefit to the LAB version, but that the extra HTTP requests only hurt load performance, and have to compete with other assets on the page, like images. But all of this has been said many times before...

[Screenshots: load-time results for the LABjs version and the vanilla version]

I can anticipate a counterargument about loading your scripts in bits and pieces ... but that's entirely orthogonal to the script loading technique, so please, leave it out of the discussion.

By all means, stop the madness.

madrobby commented 12 years ago

@jashkenas Full, 100% ack. Script loaders just add overhead and complication and points of failure. A single, server concatenated file loads fast(est), both in terms of pure transfer time and in terms of efficiency of JavaScript interpreters, and as a bonus, gzipping works better if you just have a single file.

miketaylr commented 12 years ago

I have to say, the LABjs version sometimes loads faster in my browser (compared to the vanilla page), but not consistently. Also onload doesn't always fire, which seems... odd.

jashkenas commented 12 years ago

Yes, gzipping gives you even more of an overall win with fewer HTTP requests.

And this isn't about dogma; it doesn't have to be a single JS file for the entire app -- two or three with defer are fine for finer-grained caching, as is loading more later.

paulirish commented 12 years ago

Some research by the Google Page Speed team:

http://pagespeed-velocity2011.appspot.com/#8 (see slides 8-14), which lend more inconclusiveness to the discussion


I am still keen on the script @defer attribute and think that's a wise basic default, unless you plan on sinking many hours/days into perf testing of your own variations.

jashkenas commented 12 years ago

@miketaylr: Yes, please shift-refresh each page many times to get an overall feel. S3 latencies and image loads will make things a bit unpredictable -- behaving more like a real app.