@meditans can you wrap this up with the following stats:
Sure:

- 3.7Mb
- 2.3Mb
- 1.44Mb
- 491kb
I will try to understand why the sizes of the closured files on my hard drive differ from the ones reported by Firefox.
I had to disable the closure compiler's ADVANCED_OPTIMIZATIONS, because I encountered an error detailed in https://github.com/ghcjs/ghcjs/issues/543; if that error is solved, we could take the size of the gzipped js down to about 300kb.
Note also that the final gzipped bundle contains the entire Haskell runtime, which is the heavy part of what's transferred, so the bundle size shouldn't go up too much as we add more features to the application.
Is it possible to split the runtime out into a different file, so that it can be fetched by the browser only once and cached? Also, is this file the same for every ReflexFRP project, or do the runtime components included in the JS file depend on which Haskell language features the project uses?
So, breaking this down further, it seems that I wasn't considering the role of libraries! ghcjs generates these files:

- `lib.js` (231K): glue functions, an unrolled implementation of MD5, etc.
- `out.js` (2.9M): a bundle formed by our code and all the compiled libraries
- `rts.js` (553K): the runtime system per se
- `runmain.js` (31 bytes): a simple invocation to call the program

So it seems that the large part doesn't come from the runtime system, but from the libraries, and the libraries bundle isn't cached. It is probably possible to minify these files independently (minifying `all.js`, which is a bundle of all these files, makes it easier to do global renaming, but in principle it should be possible to coordinate the renaming across files).
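To make the "minify independently" experiment concrete, here is a minimal sketch (the `closure-compiler.jar` path and the output names are assumptions) that runs the closure compiler on each file separately. It uses SIMPLE_OPTIMIZATIONS, since ADVANCED_OPTIMIZATIONS hits the ghcjs bug linked above, and in SIMPLE mode only function-local variables are renamed, so no cross-file coordination of names is needed:

```haskell
import Control.Monad (forM_)
import System.Process (callProcess)

-- Minify each ghcjs output file on its own, so that a file that doesn't
-- change between builds (e.g. rts.js) keeps a stable minified form and
-- can be cached by the browser.
main :: IO ()
main = forM_ ["lib", "rts", "out"] $ \name ->
  callProcess "java"
    [ "-jar", "closure-compiler.jar"             -- assumed path to the jar
    , "--compilation_level", "SIMPLE_OPTIMIZATIONS"
    , "--js", name ++ ".js"
    , "--js_output_file", name ++ ".min.js"
    ]
```

Comparing the sum of the per-file minified sizes against the minified `all.js` would tell us how much compression we lose by giving up the single-bundle renaming.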
However, I think the most used form of caching in practice is incremental linking: this lets you load the bulk of the code once, and have each subsequent page load only its specific code. We haven't yet reached an app complex enough for this approach to make sense, though. Variations of this approach are probably more useful to consider when we try to minify the entire app.
Would this be following this UI design goal?

> Progressive loading of JS files to reduce initial page-load time
Well, it probably would slightly reduce the initial page-load time, but I don't think that's the focus of the technique; it would, however, make subsequent interactions with other pages mostly free of loading.
I think that the progressive-loading goal could be examined further. Just to understand our objective here: we're at the ~500Kb mark for the application; what would be a good target number for the first-loading js?
Basically, we have the following ideas:

1. caching of the code that doesn't change (e.g. `rts.js`), by minifying each file independently;
2. on-demand loading of `js` code, via incremental linking.
Ok, perfect. So the next thing I'll try in this area is to minify each js file generated by ghcjs on its own, and see if this leads to a loss of compression; this could enable the caching of `rts.js`.
Then, as the application grows, we'll try to use incremental linking to satisfy the second point, the on-demand loading of `js` code.
Minification is done in three steps:

1. adding the `-dedupe` and `-DGHCJS_BROWSER` options to the cabal file;
2. invoking the closure compiler on the generated js;
3. changing the definition of the server so that the middleware gzips the files, without need for external tools.
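For step 3, assuming the server is a plain wai application run with warp, a minimal sketch using the gzip middleware from wai-extra could look like this (the port and the `frontend.jsexe` directory are placeholders):

```haskell
import Network.Wai.Application.Static (defaultWebAppSettings, staticApp)
import Network.Wai.Handler.Warp (run)
import Network.Wai.Middleware.Gzip (GzipFiles (..), def, gzip, gzipFiles)

-- Serve the ghcjs output directory, gzipping responses on the fly.
-- GzipCacheFolder could be used instead of GzipCompress to cache the
-- compressed files on disk rather than recompressing per request.
main :: IO ()
main = run 8080                                -- placeholder port
     $ gzip def { gzipFiles = GzipCompress }   -- gzip middleware from wai-extra
     $ staticApp (defaultWebAppSettings "frontend.jsexe")  -- assumed output dir
```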