ember-fastboot / ember-cli-fastboot

Server-side rendering for Ember.js apps
http://ember-fastboot.com/
MIT License

Create Production App Server #58

Closed tomdale closed 8 years ago

tomdale commented 9 years ago

Right now, serving an Ember app via FastBoot requires running ember fastboot from within an Ember CLI app. This means you have to install Ember CLI and its many, many dependencies to a production server.

Instead, we should create a slimmed down server that doesn't have an Ember CLI dependency but just the bare minimum to load and run the compiled Ember file and serve HTTP requests. The FastBoot Ember CLI addon can then just require that server as a dependency and start it up when you run ember fastboot.

stefanpenner commented 9 years ago

@tomdale this is something that would basically serve the dist of ember-cli correct?

tomdale commented 9 years ago

@stefanpenner fastboot-dist now but yeah

stefanpenner commented 9 years ago

> @stefanpenner fastboot-dist now but yeah

Thanks, makes sense.

listepo commented 9 years ago

@tomdale & @stefanpenner fastboot-dist :+1:

irae commented 8 years ago

It's been a while since I've used Ember, and I am refreshing my knowledge and catching up on the current state (the last version I used was 1.0 RC). That said, I've been building a large production application with React with server-side rendering enabled, so I would like to contribute a bit of knowledge here.

One interesting aspect of server-side rendering is that it is non-essential, provided the server sends HTML that your Single Page Application (SPA) is able to recover from, re-hydrating the server-rendered DOM nodes into its own instances. This means that if you hit a production issue (CPU load, memory use, etc.), you can have flags to disable server-side rendering manually, or routines that detect problems and disable it temporarily.
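The kill switch described here can be as simple as a flag checked per request. A minimal sketch (hypothetical names: `renderApp` stands in for your actual renderer, `clientShellHtml` for the static HTML shell the SPA can boot from on its own):

```javascript
// Hypothetical sketch: a per-request kill switch for server-side rendering.
// Any rendering failure, or a manually flipped flag, degrades gracefully
// to plain client-side rendering.
let ssrEnabled = true;

function disableSSR() { ssrEnabled = false; }
function enableSSR() { ssrEnabled = true; }

function handleRequest(renderApp, clientShellHtml) {
  if (!ssrEnabled) {
    // SSR disabled: serve the client-side shell.
    return clientShellHtml;
  }
  try {
    return renderApp();
  } catch (err) {
    // Rendering blew up: fall back instead of failing the request.
    return clientShellHtml;
  }
}
```

The monitoring routine the comment mentions would simply call `disableSSR()` when it detects trouble.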

In order to achieve that, there are two ways you can separate the rendering process from the essential HTML your application needs to serve regardless (<head>, <body>, and the minimal JavaScript and CSS tags).

  1. Have two server processes (either two Node.js processes, or one Node.js process and another in the language of your choice).
  2. Have a single process and "fence" your code with async blocks and try/catch to sandbox the rendering part from the rest of the server.
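The "fencing" in the second option amounts to isolating the render call so its failures never escape into the rest of the server. A minimal sketch (hypothetical names, assuming an async render function):

```javascript
// Hypothetical sketch of option 2: the render call is fenced behind an
// async boundary and a try/catch, so a crash inside the UI code only
// downgrades this one response to the client shell.
async function fencedRender(renderApp, clientShellHtml) {
  try {
    // Awaiting inside the fence catches both sync throws and rejections.
    return await renderApp();
  } catch (err) {
    return clientShellHtml;
  }
}
```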

In the first approach, the problem is serialization and de-serialization of the data. For instance, your data is probably being serialized as JSON to serve to your SPA, and you need to server-render the HTML with the exact same JSON you will eventually send to the client. If you have a single process, you can just pass a POJO to the rendering function before serializing it for the client, but a multi-process setup will have performance problems, which you might tackle with Thrift, Protocol Buffers, UBJSON, etc.
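The single-process pass-through looks roughly like this (a sketch with hypothetical names; the point is that the render function and the client payload consume the same object, serialized exactly once):

```javascript
// Hypothetical sketch: in a single process, one POJO feeds both the
// server render and the JSON payload embedded for the client, so the
// data crosses no process boundary and is serialized only once.
function renderPage(model, renderApp) {
  const appHtml = renderApp(model);       // render from the live object
  const payload = JSON.stringify(model);  // serialize once, for the client
  return `<body>${appHtml}` +
         `<script>window.__PRELOADED_STATE__ = ${payload};</script></body>`;
}
```

With two processes, `model` would instead have to be serialized on one side and parsed on the other before rendering could even begin.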

The other approach faces the issue that you are in a single process: measuring efficiency, creating timeout rules to stop rendering under server load, or even implementing caching can be more challenging in this scenario. The reason the second approach is hard to manage is counter-intuitive, but I faced this problem in production: JIT compilation can take a while to warm up. After we deploy UI code to the server, until all code paths have been executed multiple times, V8 will not have JIT-compiled all the code, and we experienced huge differences in execution time. During this implicit warmup we saw 8-second renders of pages that render in 20ms once warmup is done. Of course this warmup takes place in option 1 too, but a timeout over HTTP or another communication protocol is trivial there, as opposed to not so easy to implement on a single Node.js thread.

I first maintained a multi-process application, but the application we have in production now uses the second method. Every time we deploy, no user gets server-side rendering until rendering times drop below our acceptable threshold (meaning the code is now JIT-compiled); eventually everyone gets the server-side rendered pages.
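The gate described above can be sketched as a small class (hypothetical: track the last N render durations and report SSR as ready only once their average falls below a threshold):

```javascript
// Hypothetical sketch: after a deploy, keep SSR off until observed render
// times drop below a threshold, indicating the JIT has warmed up.
class WarmupGate {
  constructor(thresholdMs, sampleSize) {
    this.thresholdMs = thresholdMs;
    this.sampleSize = sampleSize;
    this.samples = [];
  }
  record(durationMs) {
    this.samples.push(durationMs);
    // Keep only the most recent window of samples.
    if (this.samples.length > this.sampleSize) this.samples.shift();
  }
  ssrReady() {
    if (this.samples.length < this.sampleSize) return false;
    const avg = this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
    return avg < this.thresholdMs;
  }
}
```

A server using this would keep rendering in the background (to drive warmup) while serving the client shell until `ssrReady()` flips to true.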

All that is to say that a simple "FastBoot production server" might be a good first step, but is probably too simplistic for all production scenarios. I would suggest also providing an easy programmatic way of requiring FastBoot as a module, and ways to have a popular application server, such as Express, wrap FastBoot. Ideally, in the future, more features like handling timeouts, setting up thresholds, and hooks to measure and log performance would be nice quality-of-life additions to the project.

tomdale commented 8 years ago

Done here: https://github.com/ember-fastboot/ember-fastboot-server

tomdale commented 8 years ago

@irae Thanks for the comments. I agree that being able to "fail back" to client-side rendering is a huge advantage of this architecture. The FastBoot server is just an Express middleware; I'd love to see an open source implementation that uses it that does what you describe.
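Since the FastBoot server exposes an Express middleware, the fallback behavior described above could be layered on top with a wrapper. A hypothetical sketch (`ssrMiddleware` stands in for the real FastBoot middleware; any error it reports falls through to serving the client shell):

```javascript
// Hypothetical sketch: wrap an Express-style SSR middleware with the
// standard (req, res, next) signature so that any rendering error falls
// back to serving the static client-side shell.
function withClientFallback(ssrMiddleware, clientShellHtml) {
  return function (req, res, next) {
    try {
      ssrMiddleware(req, res, function (err) {
        if (err) {
          // SSR failed for this request: degrade to the shell.
          res.send(clientShellHtml);
        } else {
          next();
        }
      });
    } catch (err) {
      // Synchronous failure inside the middleware: same fallback.
      res.send(clientShellHtml);
    }
  };
}
```

An app would mount it like any other middleware, e.g. `app.get('/*', withClientFallback(ssrMiddleware, shellHtml))`.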

sebyx07 commented 8 years ago

@tomdale pretty nice, now we can add cache headers for nginx