faceyspacey / extract-css-chunk


How does this compare to kriasoft/isomorphic-style-loader ? #1

Closed sompylasar closed 7 years ago

sompylasar commented 7 years ago

Can this be used to inline the styles which are used on the server-side for even faster initial page rendering?

https://github.com/kriasoft/isomorphic-style-loader

faceyspacey commented 7 years ago

You can get the names of the chunks used from stats, call fs.readFileSync(chunk) to get the css, and embed it within <style>{css}</style>. ...What I've done with React Loadable does this automatically for you if you choose the inline css option (as opposed to the stylesheets option).
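
Concretely, something along these lines on the server (a rough sketch; the output directory and chunk file names are illustrative, and how you get the list of chunk names from stats depends on your setup):

```js
// Rough sketch: read the CSS files for the chunks that were used and inline them.
// `outputPath` and the chunk names below are illustrative only.
import fs from 'fs'
import path from 'path'

const outputPath = path.resolve(__dirname, '../buildClient')

function inlineCss(cssChunkNames) {
  return cssChunkNames
    .map(name => fs.readFileSync(path.join(outputPath, name), 'utf8'))
    .join('\n')
}

// e.g. in your server render, with chunk names taken from webpack stats:
// const css = inlineCss(['main.css', '0.css'])
// const html = `<head><style>${css}</style></head><body>${appHtml}</body>`
```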

And yeah, it should be slightly faster because it won't be doing extraction work in each component's render method to get precise css. And if you want, you can cache certain sets of CSS based on common sections of the site you determine.

sompylasar commented 7 years ago

@faceyspacey Thanks, will take a look.

Though isomorphic-style-loader is not doing any extraction work: the object returned from css-loader (with classnames and the stylesheet to insert) is attached to a component at definition time. The overhead is in having a higher-order component for every styled component which, in componentWillMount and componentWillUnmount, inserts/removes this attached stylesheet. Thanks to storing the stylesheets in JS, code splitting works out of the box.

https://github.com/kriasoft/isomorphic-style-loader/blob/f2fc85ef973829189478e4a2143024ca23590132/src/withStyles.js#L22
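
For reference, the HoC pattern looks roughly like this (a simplified sketch based on the description above, not the library's exact code; the real withStyles differs in details):

```js
// Simplified sketch of the withStyles HoC pattern described above; the real
// isomorphic-style-loader implementation batches styles and handles more cases.
import React from 'react'
import PropTypes from 'prop-types'

function withStyles(...styles) {
  return function wrap(ComposedComponent) {
    class WithStyles extends React.Component {
      componentWillMount() {
        // insert the stylesheet(s) attached to this component and keep the
        // removal callback so they can be cleaned up on unmount
        this.removeCss = this.context.insertCss(...styles)
      }

      componentWillUnmount() {
        if (this.removeCss) setTimeout(this.removeCss, 0)
      }

      render() {
        return <ComposedComponent {...this.props} />
      }
    }

    WithStyles.contextTypes = { insertCss: PropTypes.func.isRequired }
    return WithStyles
  }
}

export default withStyles
```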

faceyspacey commented 7 years ago

@sompylasar I wonder if that implementation could work without withStyles if the imported CSS files were pre-transformed to use getters on styles.foo, where the getter registers the css in a buffer before the style is returned. That could be done without the HoC and without context. I'm doing something similar with React Loadable.
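
Sketch of what I mean (purely illustrative; `cssBuffer` and the shape of the transform are made up, not an existing API):

```js
// Illustrative only: what a pre-transformed CSS-module export could look like.
// Reading a class name registers that module's css in a server-side buffer,
// so after renderToString() the buffer holds only the css that was actually used.
const cssBuffer = new Set()

function makeStyles(css, classNames) {
  const styles = {}
  Object.keys(classNames).forEach(key => {
    Object.defineProperty(styles, key, {
      get() {
        cssBuffer.add(css)      // side effect: remember this module was used
        return classNames[key]  // then return the scoped class name as usual
      },
    })
  })
  return styles
}

// after rendering: const usedCss = Array.from(cssBuffer).join('\n')
```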

...I guess that idea would only work server-side and would need a solution client-side as well. One thing I'm experimenting with is generating 2 files for each javascript chunk: one whose javascript inserts the css, and one that doesn't. That way, in server-side rendering where extract-css-chunk can serve the css file, we don't waste bytes sending duplicate css within the js. THEN, when a new JS chunk is dynamically loaded from the client, it loads the chunk that has the css in it (from style-loader). ...That's what I'm experimenting with now at least.

sompylasar commented 7 years ago

@faceyspacey That could be possible, I haven't explored ways to remove the HoC.

The thing that concerns me about the two-files approach is that with isomorphic-style-loader I am able to inline only the styles of components that were actually rendered on the server, not necessarily of every component bundled into the initial chunk. So I will still need the CSS for the rest of the initial chunk's components in JS, and because the renders may vary, this set of components and their CSS may vary too; it depends on what's rendered.

In fact, to me it is more important to minimize the amount of inlined CSS than to minimize the initially downloaded script bundle (the latter can be achieved with more granular code splitting).

sompylasar commented 7 years ago

By the way, getters with side-effects, as well as side-effects within render (where such a styles getter would likely trigger), are highly discouraged. The less magic, the better.

faceyspacey commented 7 years ago

Many things that are discouraged become the perfect solution when used for just the right thing. Optimum performance for things such as animations often makes the perfect case for calculated use of side-effects. In addition, using CSS modules in general is a side-effect--the classNames you end up using weren't passed in as function parameters. I'm less worried about a calculated solution like this--we aren't talking getters of the dirty-checking variety in Angular. That said, it's not needed anyway in the code-splitting solution I'm advocating for. It was just an idea to avoid having to change your code to use HoCs.

As far as minimizing css goes, I've found that if you use code splitting thoroughly, you achieve 80% of what you're looking for in terms of shrinking the relevant CSS served. The render-path-oriented solutions--as I describe in the readme--may achieve that remaining 20%, but you're basically nit-picking at that point. And from my perspective, the costs in rendering performance + developer experience (i.e. having to use additional HoCs) outweigh that 20%. The web apps I've built using React--as compared to the ones I've built using React Native--have all struggled to nail animations and required lots of time optimizing to get animations anywhere close to as smooth as in RN. The bigger the web app gets, the more you're dropping frames, without a doubt. So that's where I come from in my apprehension to use anything extra that affects render paths.

Now that said, until extract-css-chunk + the counterparts I'm working on for React Loadable are widely used, nobody has even had the capability to fully use code-splitting (and I'll get to that in a second). So the approach I'm talking about, where code-splitting + extract-css-chunk handles that 80% of the optimization in css served, hasn't even had the opportunity to be publicly compared to the css tools that serve precise css in the render path.

The problem has been that code-splitting is great, but has yet to work in conjunction with server side rendering--at least not in a popular abstraction/package. So that means you can code-split all you want, but those split chunks won't server side render, and in addition a 2nd request after page load is required to render the asynchronously loaded chunk, which means additional latency on first load. React Loadable came out swinging to solve this problem, but--as admitted by its author--was still missing the final parts, which turns out to be a very complicated problem once you want CSS chunks, HMR for both the js and css chunks, and support for a webpack-compiled + babel-compiled server.

By the way, I have worked into React Loadable an option to extract raw css for embedding directly in the page as well, not just stylesheets. Of course, it's just the css read from the corresponding css file chunks.

...Basically, as much as webpack has hyped the opportunity of code-splitting, proper code splitting for both js + css in conjunction with server-side rendering hasn't seen the light of day yet. If and when it all works out, I'm not very concerned about what extra optimization in served CSS the tools operating during render can provide. 80% of the work, or some close-enough percentage, will likely be attained by widespread + calculated code-splitting throughout your app. In terms of css, the biggest gains come from serving just the css for the current section of your app.

faceyspacey commented 7 years ago

basically, give what I've got cooking a chance ;) it hasn't been born yet. You'll have to see this package come together with React Loadable for it all to make sense. I anticipate a future with a lot more code-splitting than we have seen. Once it becomes frictionless (across a larger number of concerns than code-splitting initially prioritized), it will become a powerful and simple tool to steer your ship in terms of what bytes you send over the wire.

sompylasar commented 7 years ago

The problem has been that code-splitting is great, but has yet to work in conjunction with server side rendering--at least not in a popular abstraction/package. So that means you can code-split all you want, but those split chunks won't server side render, and in addition a 2nd request after page load is required to render the asynchronously loaded chunk, which means additional latency on first load.

If you have a webpack-built server, it won't be code-split (via a chunk limit); it will be a single file that is parsed once at Node process start. I've recently achieved that in a proprietary code base (~90K LOC).
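
For reference, the bit of the server config that forces a single bundle looks something like this (a sketch; entry/output paths are illustrative):

```js
// Sketch of a server-side webpack config: force everything into one chunk so
// dynamic imports don't split the server bundle. Paths are illustrative.
const path = require('path')
const webpack = require('webpack')

module.exports = {
  target: 'node',
  entry: './src/server/index.js',
  output: {
    path: path.resolve(__dirname, 'buildServer'),
    filename: 'serverBundle.js',
    libraryTarget: 'commonjs2',
  },
  plugins: [
    // limit the server build to a single chunk
    new webpack.optimize.LimitChunkCountPlugin({ maxChunks: 1 }),
  ],
}
```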

The thing I'm struggling with on the server-side is not code-splitting but data-loading (my components define their data dependencies by dispatching actions that have asynchronous side-effects tracked in redux-saga). I have to re-render into a string multiple times and drop the previous render results (keeping the redux store), even though the rendered parts could be reused, as they are in the browser-side VDOM.

But this does not add to the transferred bytes, only to the wait time to first byte.
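
The render-until-settled loop I mean is roughly this (a sketch; `settlePendingEffects` is a placeholder for however you track pending sagas/requests, not a real API):

```js
// Rough sketch of the multi-pass server render described above: render, wait
// for the data-loading effects that render kicked off, and repeat until a
// pass triggers no new work. `settlePendingEffects` is a placeholder.
import { renderToString } from 'react-dom/server'

// placeholder: wait for all effects started during the last render pass and
// return how many there were (how depends on your redux-saga setup)
async function settlePendingEffects(store) {
  return 0
}

async function renderUntilSettled(element, store, maxPasses = 5) {
  let html = ''
  for (let pass = 0; pass < maxPasses; pass++) {
    html = renderToString(element)                     // all but the last pass are thrown away
    const pending = await settlePendingEffects(store)  // let this pass's effects finish
    if (pending === 0) break                           // nothing new requested: html is final
  }
  return html
}
```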

sompylasar commented 7 years ago

Re: animations and re-render performance, React Fiber should help.

faceyspacey commented 7 years ago

If I understand you correctly, you have data dependencies at the component level (like Apollo), which requires multiple renders. Until Apollo came around, I did all I could to avoid multiple renders by keeping data dependencies out of components and in async actions instead. And then before rendering my app on the server, I use the URL to determine what async actions to dispatch. Then I do await Promise.all([ store.dispatch(myThunkA), store.dispatch(myThunkB) ]), and then I render the app in one go.
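
In code, the idea is roughly this (a sketch; the route-to-thunk map and the thunks are illustrative names, not a real API):

```js
// Sketch of the single-pass approach described above: pick data-loading thunks
// from the URL, await them, then render once. Everything here is illustrative.
import React from 'react'
import { renderToString } from 'react-dom/server'
import { Provider } from 'react-redux'

// illustrative thunk action creators (stubs)
const fetchFeed = () => async dispatch => { /* fetch data, dispatch results */ }
const fetchUser = () => async dispatch => { /* fetch data, dispatch results */ }

// which thunks each URL needs (illustrative)
const routeThunks = {
  '/feed': [fetchFeed],
  '/user': [fetchUser, fetchFeed],
}

async function render(url, store, App) {
  const thunks = routeThunks[url] || []
  await Promise.all(thunks.map(thunk => store.dispatch(thunk())))  // fill the store first
  return renderToString(
    <Provider store={store}>
      <App />
    </Provider>
  )
}
```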

ps. I checked out the Apollo code the other day, and I noticed they don't waste cycles rendering more than they need. They don't really render at all. Rather, they wrote custom code to traverse your render tree (using unofficial React APIs), extracting dependencies without actually rendering anything. ...In short, the multi-render thing is something we should be very wary of on the server, as it will eat up lots of cycles.

faceyspacey commented 7 years ago

As far as the chunk limit thing--if I understand you correctly, you're referring to having to limit your webpack server code to being one chunk. That is definitely a requirement, but it only solves one small part. The next step from there is serving to the client only the used chunks in terms of the client bundle. So the way React Loadable is being made to work is to register which client chunks were used (by webpack module id, or by path if not using webpack on the server, which we then must translate into the client module ID), and then serve just those--not the single bundle that was generated for the server code.

So you have challenges here--if you're using webpack for both the client and server--where you have one bundle for the server and multiple chunks for the client. You need to determine which chunks were used by module ID, which isn't deterministic. To solve that you need to use HashedModuleIdsPlugin or NamedModulesPlugin so both bundles have the same IDs. That's still just the beginning of this endeavor, and there's quite a bit more--but you get the idea: you can split your app into 10 chunks and serve just main.js + 0.js or main.js + 3.js on the initial request, thereby saving lots of bytes.
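
Concretely, both configs get a plugin along these lines (sketch):

```js
// Sketch: keep module IDs deterministic and identical across the client and
// server builds so "which modules rendered" can be matched between them.
const webpack = require('webpack')

const isProd = process.env.NODE_ENV === 'production'

const moduleIdPlugin = isProd
  ? new webpack.HashedModuleIdsPlugin() // stable hashed IDs for production / long-term caching
  : new webpack.NamedModulesPlugin()    // readable path-based IDs in development

// add `moduleIdPlugin` to the `plugins` array of both the client and server configs
```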

Extract-css-chunk attempts to address the CSS aspect of all of this. The end result will be using <ReactLoadable/> all throughout your app, as seamlessly as you use any component.

sompylasar commented 7 years ago

And then before rendering my app on the server, I use the URL to determine

  1. Hardcoding URLs would be a mess to maintain; I wanted to avoid this approach.
  2. Async effects may go in a chain resulting in new components to be rendered which require more data and so on.

The next step from there is serving to the client only the used chunks in terms of the client bundle.

Yes, got your point. You want to inline references to the chunks that will be needed, before the client-side code would otherwise have to request them.

sompylasar commented 7 years ago

To solve that you need to use HashedModuleIdsPlugin or NamedModulesPlugin so both bundles have the same IDs.

This is a hard requirement for long-term caching, so I definitely use that (the former in the production build, the latter in development).

faceyspacey commented 7 years ago

cool

...by the way, Apollo handles this recursively:

Async effects may go in a chain resulting in new components to be rendered which require more data and so on

I'm not a fan, but I guess it depends on the requirements of your app. As far as hardcoding the URLs, perhaps that also isn't a publicly solved problem, but I'm using the following package I made, which I plan to release soon:

https://github.com/faceyspacey/pure-redux-router

Checkout how I do SSR: https://github.com/faceyspacey/pure-redux-router/blob/master/docs/server-rendering.md

...basically you have a routesMap defined once for both client and server, and you specify your data requirements per route once. Then there's an idiomatic way on the server to ensure you get the data into redux before rendering. I've put a very long time into pure-redux-router. I think you might like it. Very few have seen it, but I plan to release it after the React Loadable stuff I'm working on. It stops short of the recursive thing (point #2 you made), but I just haven't needed that in any of my apps yet. I do wonder if there truly is a need for that design, or at least how common it is, as opposed to only specifying route-specific data dependencies?
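
Roughly, the shape is this (a sketch of the idea above rather than exact docs, so option names may differ; see the linked server-rendering doc for the real API):

```js
// Rough sketch of a routesMap with per-route data dependencies, as described
// above. The exact option names may differ from the package's real API; the
// state shape and endpoints are illustrative. Assumes an isomorphic `fetch`.
const routesMap = {
  HOME: '/',
  USER: {
    path: '/user/:id',
    // data dependency for this route, dispatched on the server before rendering
    thunk: async (dispatch, getState) => {
      const { id } = getState().location.payload
      const user = await fetch(`/api/user/${id}`).then(res => res.json())
      dispatch({ type: 'USER_FOUND', payload: { user } })
    },
  },
}
```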

faceyspacey commented 7 years ago

...with the thunks in pure-redux-router, you can make multiple requests for data in your promise chain. So if you can determine from a 1st request what additional data you're going to need, you can get it in a second request. You'd duplicate some logic from your components, but probably not much. And since you're specifying it in the same routes used by redux on the client and server, it's very DRY.
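
e.g. something like this inside a route thunk (endpoints and state shape are illustrative):

```js
// Illustrative: a route thunk that chains a second request off the first
// response, so dependent data is still fetched before the single render.
// Assumes an isomorphic `fetch`; endpoints and state shape are made up.
const postThunk = async (dispatch, getState) => {
  const { id } = getState().location.payload
  const post = await fetch(`/api/post/${id}`).then(res => res.json())
  dispatch({ type: 'POST_FOUND', payload: { post } })

  // the second request is determined by the first response
  const author = await fetch(`/api/user/${post.authorId}`).then(res => res.json())
  dispatch({ type: 'AUTHOR_FOUND', payload: { author } })
}
```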

sompylasar commented 7 years ago

We're slightly off topic here, but here's what I wrote and use for "recursive" (in fact, loop-based) rendering: https://gist.github.com/sompylasar/5e7157e451f4b7268def9ae1ce01edd4

sompylasar commented 7 years ago

I'll look into pure-redux-router later. I'm still not convinced that routing has to be coupled with component data requirements; that breaks the isolated-components approach and so will eventually get to the point of spaghetti.

For most apps, I think, hacky ways will overrule future-proof designs just because they are easier for the general audience of web developers to understand.

faceyspacey commented 7 years ago

Of course, I'm just not sure what "hacky" ways you're referring to.

Lots of Redux apps have made it their goal to remove data loading from the component mounting lifecycle. If you have a centralized abstraction like pure-redux-router, I see no hacks. The overall benefit is that your components--without asynchronously loading data in componentWill/DidMount--become a lot easier to test, as they become pure synchronous functions. And data "dependencies" are not lost, as they are specified via Redux's connect HoC. Just the actual retrieval is moved elsewhere--again, hopefully to a centralized solution like pure-redux-router rather than ad hoc solutions.
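
i.e. something like this: the component still declares what data it needs via connect, but does no fetching itself (sketch; names and state shape are illustrative):

```js
// Sketch: the component declares its data dependency via `connect` but does no
// fetching; retrieval happens elsewhere (e.g. a route thunk) before render.
import React from 'react'
import { connect } from 'react-redux'

const UserProfile = ({ user }) =>
  user ? <h1>{user.name}</h1> : <p>Loading...</p>

const mapStateToProps = state => ({ user: state.user })  // assumed state shape

export default connect(mapStateToProps)(UserProfile)
```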

sompylasar commented 7 years ago

By hacky ways I mean hardcoding component data dependencies next to the route map just to "simplify" (easify) things, rather than pulling them from the otherwise self-contained components, because the latter is harder to implement.

As a user of https://github.com/reactjs/react-router-redux I definitely like pure-redux-router's approach. I even tried using/making something similar way before Redux and React became mainstream, with https://github.com/AlexGalays/abyssa-js (it has a notion of States between which the app transitions, entering and exiting), but I had so much trouble because of the need for two-way binding with the browser History API, which unfortunately doesn't expose its entire internal state and doesn't report events corresponding to what happened, i.e. the dispatched actions (e.g. back button pressed, URL entered). And you cannot prevent its internal state from changing; you can't stick a reducer (or a saga) into the browser that would reject (or not dispatch) a URL change when the current app state requires it.

I'm glad that/if you found a way to work around that (probably it's the history module that does the heavy lifting; back then there were only thin wrappers around the browser API).