@oliverjash is right...
@OliverJAsh, @lookfirst, actually I'm purposefully ensuring a clean cache with a hard refresh.
My point is caching shouldn't be necessary, or at least shouldn't be a necessary first step. 700ms is acceptable for a clean-cache full build. Having DevTools balloon that to 4000ms without reason is crazy. I'd love to know why and how to stop it.
I also wanted to note that HTTP/2 doesn't provide a perf boost (at least on localhost). It's too bad browsers don't support H2 without TLS, as I think we would see a difference then. :(
DevTools gives you continuous feedback on your current situation. This takes time. If you're on the DOM tree and it animates updates, this slows you down. If you're on the network tab while loading is going on, this slows you down. If you're on the sources tab while things are loading, this slows you down. That's the reason, and it's not crazy.
Now, with 900+ files loaded in development in my work setup, the SystemJS load is 2-3x slower than the corresponding RequireJS setup. That part is a bit worrisome and points towards something either requiring a great deal more work on the client than before, or there being a choke point somewhere.
@Munter is there a speed up if you set all the module format information via metadata - System.config({ meta: { 'app/*': { format: 'amd' } } }) for example? The automatic format detection may be causing some of the slowdown.
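For reference, that would sit in config.js something like this (the paths and formats below are illustrative, not taken from the project in question):

```js
System.config({
  meta: {
    // Declaring the format up front lets SystemJS skip its per-file
    // format detection when loading these modules.
    'app/*': { format: 'amd' },
    'vendor/legacy/*': { format: 'global' }
  }
});
```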
@Munter, I'm not sure. I think it's an issue in DevTools. I'm not doing any DOM manipulation till the end. The extra latency seems excessive.
@guybedford Looking at the network tab, I noticed that there are very few parallel requests happening. I imagine that's because modules are only requested as they are discovered, so walking the code base ends up being serialized. That's too bad, as making the bulk of the 160+ requests in parallel might solve the issue.
One suggestion: FB ships a single JS file for the React development version. It's not minified, and so is very usable for diagnosing issues. If this file was linked to by the short name in the JSPM registry, it would also solve the performance issue for React users and make the initial experience of JSPM users a lot better.
@JonahBraun you know, my quite big React app always reloads in under 1000ms, even on slow machines, with my https://github.com/capaj/jspm-hot-reloader. No need to do 160+ requests every time you change one file.
> One suggestion: FB ships a single JS file for the React development version. It's not minified, and so is very usable for diagnosing issues. If this file was linked to by the short name in the JSPM registry, it would also solve the performance issue for React users and make the initial experience of JSPM users a lot better.
Was about to suggest that myself. Any reason why we couldn't do that? I see the motivation for loading all the bits of React individually but it would cut out a lot of requests in dev mode.
@jackfranklin we did something similar with socket.io-client https://github.com/jspm/jspm-cli/issues/780
@capaj thanks for the link. Another great workaround. This issue itself should be fixed, however. jspm should be able to at least match the performance of compile-then-load tools. That is to say, JS in the browser is as performant as JS on the CLI, so performance should be similar.
I've done further research: the slowdown is not caused by the XHR requests themselves. It's something else SystemJS is doing when loading files or modules that triggers the slowdown with DevTools.
FYI discussion on the performance numbers: JonahBraun/jspm-perf-test#1
Thanks @jackfranklin, I just followed your approach and am seeing the bundle rebuild in under 300ms for single-file changes. Using BrowserSync in conjunction with IntelliJ autosave, the browser updates in about a second.
Edit:
Interesting single data point for those of you who think HTTP/2 is going to magically solve all the issues...
http://engineering.khanacademy.org/posts/js-packaging-http2.htm
> Khan Academy spent several months rearchitecting its system to move from a package-based scheme to one that just served JavaScript source files directly, at least for clients that support HTTP/2.0. And here's what we found:
>
> Performance got worse.
>
> On investigation we found out there were two reasons for this:
>
> Our conclusion is it is premature to give up on bundling JavaScript files at this time, even for HTTP/2.0 clients. In particular, bundling into packages will continue to perform better than serving individual JavaScript source files until HTTP/2 has a better story around compression.
@lookfirst I would take that with a pinch of salt. Google App Engine's webserver issues could be the sole source of the slowdowns. And the no-bundling workflow's greatest strength isn't visible in the use case of a new user hitting your site. It is in the use case of a returning user. It is in caching. An HTTP/2-enabled non-bundled app is cached much more efficiently; there is no argument around that.
In the future, when we all use fragmented pieces of lodash from a CDN in production, we will see the benefit even for a new user hitting the site. Before that can happen, we have to make jspm much more famous than it already is.
@capaj "Interesting single data point"
:+1:
I've played around with nginx 1.9.7 today and saw absolutely no benefit in enabling HTTP/2. In fact, it seems like page load is even slightly slower than usual (probably due to mandatory use of SSL encryption).
@adiachenko did you use depCache?
@guybedford No. It was my impression that depCache is intended for production use anyway.
For now I'm mostly interested in speeding up page load in development. I use a Vagrant box with nginx because I'm a back-end developer, so it's quite convenient for me. By the way, enabling gzip and using the native filesystem instead of VirtualBox shared folders improved performance by 20%.
I'm wondering whether node-based solutions would be a bit faster than nginx due to the more "native" execution environment. I already use BrowserSync for live-reloading scripts and injecting CSS on change, but I haven't tried it as a dev server yet (I need SPA routing and a proxy pass to the API, so configuring it may be a bit troublesome). I'll share my results if I bring myself to test it.
@adiachenko I am under the impression that depCache should be usable in development as well as production. You should try it with depCache.
Without depCache, page load time is restricted by (latency + processing) * dependency tree depth, so it is necessary for perf.
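For reference, a depCache entry in config.js is just a map from a module to its dependencies, so SystemJS can fetch them in parallel instead of discovering them only after the parent file has loaded and been parsed (the module names below are illustrative; jspm can also populate this via its depcache command):

```js
System.config({
  depCache: {
    // As soon as app/main.js is requested, its dependencies are fetched in parallel.
    'app/main.js': ['app/routes.js', 'app/services/api.js'],
    'app/routes.js': ['app/views/home.js']
  }
});
```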
@capaj @guybedford depCache doesn't seem to make any difference for me, although the app is mostly boilerplate at this point (a lot of dependencies, not so much actual code). Probably not the best target for benchmarking.
Besides bundling my dependencies, adding depCache didn't really seem to matter for me; only a few ms. The browser seems mostly busy with transpiling my ES6 code. I'm quite happy with the jspm / SystemJS hot reloader though, that really helps. I hope we can make that workflow slightly easier in the future: not having to add the hot reload script in my HTML, for example. Having that managed through an init config in config.js (that's somehow automatically added) would be awesome.
@peteruithoven I'm actually experimenting with something similar, with the same results... https://github.com/geelen/jspm-server/issues/41
Check out my repo https://github.com/douglasduteil/jspm-server - I published a fork on npm too: https://www.npmjs.com/package/douglasduteil...jspm-server
@capaj I tried using capaj/systemjs-hot-reloader, but it doesn't work well with Docker at all (you have to use polling, which kills performance). So that's too bad... (Also, I don't like the idea of hot reloading in the first place, but that's just personal preference.)
@Munter: https://github.com/jspm/jspm-cli/issues/872#issuecomment-121925878
> Keep track of updates and dependency graph and somehow augment all sources with cache busting hashes in their outgoing relations. I'm unsure if/how SystemJS can work with that.
What about @guybedford's suggestion here?
> only cache jspm_packages, not anything else (the code you write yourself). Because paths in jspm_packages are versioned, updates get forced.
But there's a problem with that when we use the GitHub registry with a branch name. Will changing it to use the git commit hash (in the directory name) solve the problem? More generally, perhaps we need to enforce this (uniquely versioned paths) as a requirement for all registries. (Otherwise how can we rely on it?)
How do we deal with the fact that one needs to refresh the whole application when a file changes? i.e. there's no easy and proper way to partially reload an Angular application, so using hot-reloading (e.g. jspm-server) is not a solution.
To me the problem seems to be the following: even if I change only one TypeScript file, I need to reload the page, and thus jspm reloads all of the already-up-to-date files one at a time via the typescript plugin, which takes an uncomfortably large amount of time.
@egaga https://github.com/capaj/systemjs-hot-reloader is the project we want to use here.
@egaga There is also the angular2-hot-loader that is currently being developed.
@guybedford use as in make part of jspm or as preferred add-on?
I just made https://github.com/mikz/jspm-dev-server
It is an HTTP/2 web server that correctly sets far-future expires cache headers for files in jspm_packages.
It is very raw: it starts only on port 3000, serves only the current folder, etc. But the source is trivial. https://github.com/mikz/jspm-dev-server/blob/master/index.js
I also want to add proxy support, to use with other webapps.
It has https://github.com/capaj/chokidar-socket-emitter baked in, so works out of the box with https://github.com/capaj/systemjs-hot-reloader.
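The caching part on its own is tiny; a rough express sketch of the same idea (the actual index.js linked above uses HTTP/2, this is just to illustrate the headers):

```js
var express = require('express');
var app = express();

// Paths under jspm_packages are versioned, so they can be cached far into the future.
app.use('/jspm_packages', express.static('jspm_packages', { maxAge: '365d' }));

// Your own code is served without caching, so edits show up on a plain reload.
app.use(express.static('.', { maxAge: 0 }));

app.listen(3000);
```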
Would it be possible to cache the babel/ts/traceur transpilation result in the browser (localStorage?) in order to speed up loading times? I haven't looked into it myself yet, but theoretically if the file was loaded from cache, you should be able to speed the process up significantly by using a precompiled version of the file, right?
IMO the main problem is the browser loading the files sequentially. Every file can require a new dependency, which then loads more files, and so on. Babel transpilation is negligible in this situation.
Are you sure about that? Even if I generate a dependency cache, I see 0-3 seconds used for downloading assets, and then a several-second gap before anything is executed. I thought Babel was the problem there, but I guess I'll have to do some more research.
I've seen the same thing as @ineentho, still digging into it.
I've done some experimenting, but I don't have a big project to test on right now, so I'm not sure how big of an effect it will have.
I cached the whole translate step in localStorage. If there have been no changes to an ES6 file, SystemJS doesn't even try to load Babel, which is quite a big saving. The saving on individual files isn't as big as I thought, though; I'll see tomorrow when I can test on a project at work.
If somebody wants to check it out and see if you can improve anything, I created a gist: https://gist.github.com/ineentho/3ccaaec164e418f685d7
localStorage is quite limiting however, with a hard limit of 5MB, so I think I will have to check out IndexedDB or maybe even the Service Worker cache somehow.
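The core of it is just a wrapped translate hook; a simplified sketch of the idea (the gist above handles more cases, and the hash here is only a cheap way to detect source changes):

```js
// Simplified sketch of a caching translate hook for the SystemJS loader pipeline.
function cheapHash(str) {
  // non-cryptographic hash, only used to detect source changes
  var h = 0;
  for (var i = 0; i < str.length; i++) h = (h * 31 + str.charCodeAt(i)) | 0;
  return h.toString(36);
}

var originalTranslate = System.translate;
System.translate = function (load) {
  var key = 'translate-cache:' + load.name + ':' + cheapHash(load.source);
  var cached = localStorage.getItem(key);
  if (cached !== null) return Promise.resolve(cached); // cache hit: the transpiler never loads
  return Promise.resolve(originalTranslate.call(this, load)).then(function (translated) {
    try {
      localStorage.setItem(key, translated);
    } catch (e) {
      // 5MB quota exceeded - just skip caching this file
    }
    return translated;
  });
};
```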
Having a project with 800+ files, I tried HTTP/2 and changing caching; it doesn't improve anything. I agree with @mikz, the main problem seems to be the number of files itself and that the browser loads them one by one, with transpiling on top. I think the proper solution here is the same approach @jackfranklin takes with his jspm-dev-builder. Although when I tried his module it didn't work (it just doesn't remove the file from cache), I like the general idea. You bundle everything into one file and memory-cache each module. Once a file changes, you remove it from the cache, re-bundle, and reload the browser. It should work perfectly and be super fast. That's the best solution.
@fyodorvi Just curious, what are your start up times like with a production bundle?
According to DevTools in Chrome on a mac:
I have a fairly large app that in development mode makes 388 requests, 4.5MB transferred, and finishes in 6.07s with a full cache flush. With a reload of the page, it is 386 requests, 693kB transferred, and finishes in 5.35s.
My production bundle is a single file, 770kB transferred, and finishes in 2.70s with a cache flush. A reload is 14.7kB and 2.55s. This includes going over the network too (everything is hosted on App Engine).
I'm using the same technique I describe above. You can now see how my app has grown since then as well.
=)
@phenomnomnominal it's quite fast, about two seconds. I actually spent a few more hours trying to figure out stuff with jspm-dev-builder... and it actually works! Load time is now dramatically shorter. Before, it took about 25 seconds to make a change and see it in the browser. Now it's 5 seconds. And if you reload the page, it's the same two seconds as on prod (as the browser is getting just the bundled version). It feels like Christmas now. Source maps work as well, and BrowserSync. I should probably make a demo project with it.
@fyodorvi Please do. @jackfranklin @lookfirst I'm curious how you enable hot reloading in a workflow which reloads all JavaScript. I'm also curious whether jspm-dev-builder could enable a quicker full reload, but then have systemjs-hot-reloader take over for smaller changes, enabling hot reloading.
I use AngularJS, so hot reloading is kind of a broken concept. =( There are people coming up with workarounds, but they are all a bit too hacky for my taste. At this point, my 5s reloads aren't that big of a deal.
@ineentho your idea of a caching translate hook seems to work really well. There may be more sophisticated solutions that will ultimately produce a better result, but this alone shaves a couple of seconds off of a 4-second load for me. localStorage is indeed pretty limiting, so I took a shot at using IndexedDB here.
@nlwillia That looks like a very interesting approach. And because you use Dexie, there are fallbacks to Web SQL and localStorage. Any chance you could release this as a package? Using it should be a matter of importing it before any other module, right?
I'd have to look at how loader plugins are structured for release, but yeah, probably.
For now, it's just:
<script src="jspm_packages/system.src.js"></script>
<script src="//npmcdn.com/dexie@1.3.3/dist/dexie.min.js"></script>
<script src="plugin-translate-cache.js"></script>
<script src="jspm.browser.js"></script>
<script src="jspm.config.js"></script>
<script>
System.import(...);
</script>
Have you tried adding plugin-translate-cache.js (which contains your gist, I'm assuming) as the first import in the file you're importing with System.import()? This would require using some kind of import statement (CJS / ESM) to import Dexie.
After a couple of days of testing and experiments, I'm absolutely sure that there's no other reliable way of improving load speed than using a build cache (the jspm-dev-builder approach).
Here's a bit of statistics from my current project.
Before using this approach, the project loaded ~800 dependencies:
I happen to have a quite powerful machine, but it still takes about 15 seconds to start up the project. Each time you reload the page, the browser serves you almost 800 files. For other developers on the project the average time is 30 seconds. Some people running slow machines got 60 to 90 seconds (!).
After applying the approach, load time is production-like, and it should stay roughly the same even as the project grows further:
Obviously, serving just one already-transpiled file is way faster than loading 800 dependencies. But how about serving changes and re-bundling? Well, that's possible thanks to the Builder.invalidate method. The application has one entry point with all imports (it is an Angular app). We are using gulp on the project, which has a persistently running "watch" task. So the gulpfile workflow is:
var jspm = require('jspm');
var builder = new jspm.Builder();
builder.bundle(input, output, options);
This may take a while (almost the same time as loading the app in the browser without this approach); it takes ~20 seconds on my machine.
Next, when the file watcher sees a file change, we invalidate it from the builder's cache:
builder.invalidate(moduleName)
And then we trigger builder.bundle again, which completes in a second or two, because only one file needs to be built; the others are already in the cache.
Lastly, we trigger browserSync.reload().
So using this approach I wait 4 to 5 seconds before I see the app reloaded in the browser. And while using the app, if it crashes (e.g. an error, or I just need to reload), it takes ~3 seconds to reload. Whereas previously it was 15 seconds just to load the app in any case (not counting the gulp tasks that ran as well).
I'll try to find some time to implement and publish a proper boilerplate project for this approach, but you can do it yourself in the meantime.
I also apply this approach to unit tests (we're using Karma + Jasmine, but it doesn't really matter).
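Until there's a boilerplate, the gulp wiring roughly looks like this (paths, globs and the BrowserSync setup are illustrative; note that builder.invalidate may need the module name relative to the jspm baseURL rather than the raw file path, which is what jspm-dev-builder takes care of):

```js
var gulp = require('gulp');
var jspm = require('jspm');
var browserSync = require('browser-sync').create();

var builder = new jspm.Builder();

function bundle() {
  // Repeated bundles reuse the builder's in-memory cache, so only
  // invalidated modules get re-traced and re-transpiled.
  return builder.bundle('app/main', 'build/app.js', { sourceMaps: true });
}

gulp.task('watch', function () {
  return bundle().then(function () {
    browserSync.init({ server: '.' });
    gulp.watch('app/**/*.js', function (event) {
      builder.invalidate(event.path);   // drop the changed module from the cache
      bundle().then(function () {       // only the changed file is rebuilt
        browserSync.reload();
      });
    });
  });
});
```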
@fyodorvi WOW, very interesting idea. I'll consider trying that soon.
@nlwillia that's a fantastic solution. I've used it in our project with some tweaks, to avoid loading as well as transpilation. Here's the modified version: https://gist.github.com/rubyboy/1722db5339ce546078e5
This approach avoids re-fetching all the dependencies in the jspm_packages folder and only transpiles when needed.
Thanks for the tip!
@nlwillia @rubyboy Wow, what a speed boost! Please do release this as a plugin, it will do wonders for using jspm in development.
Which version of jspm are you using? I've found both the script I posted earlier and the forks unreliable under jspm 0.16.
@nlwillia @rubyboy what a nice solution, indeed! It's the best choice if you don't want to spend time configuring a build process and invalidating the cache. I get ~4 seconds loading time, which still doesn't beat ~2.5 seconds bundled, but it's damn close. I wouldn't have implemented my own solution if I had tried this first; it's good enough.
Hello,
Thanks a lot for all the great work on jspm :thumbsup:
During development, when importing libraries split across many files such as React or Angular 2, es6-module-loader loads all the required files individually, which takes around 2-4 seconds. It helps to import a provided build instead, but not all packages include one.
Is there a suggested way around it?