yuchi / frontend-ng-loader-workspace

Liferay Workspace for experiments in bringing SystemJS and packages support to Liferay Portal

Preemptive module dependencies graph #8

Open yuchi opened 7 years ago

yuchi commented 7 years ago

Even if we already had combo-loading we would still need something along the lines of liferay-module-config-generator to have a static graph of module-to-module dependencies in order for requests to be correctly queued.
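The static graph idea above can be sketched roughly as follows. This is an illustration only, not the actual liferay-module-config-generator (which parses an AST rather than matching with a regex), and the function names are hypothetical:

```javascript
// Build-time sketch: scan each module's source for require('...') calls
// and record them in a static module-to-module graph, so the loader can
// queue requests in dependency order.
// Illustrative only: a real tool would walk an AST, not run a regex.

function collectRequires(source) {
  const requires = [];
  const pattern = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  let match;
  while ((match = pattern.exec(source)) !== null) {
    requires.push(match[1]);
  }
  return requires;
}

function buildGraph(modules) {
  // modules: map of module name -> source code
  const graph = {};
  for (const [name, source] of Object.entries(modules)) {
    graph[name] = collectRequires(source);
  }
  return graph;
}
```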

If this was done at build time then we would have a hard time changing the contract in the loader, since that would also require rebuilding all bundles. Also, since we are more or less sharing resources and packages, an older bundle could poison other bundles with an incorrectly configured setup.

If we do it at runtime we would be forced to write it in Java. The main problem is that there are tons of libraries written in JS that do this perfectly. We could wrap them with Nashorn, but I have not forgotten the lesson we learned with JRuby and SCSS. (It could be totally fine, actually. JavaScript/Nashorn is way faster and lighter than JRuby AFAIK. A benchmark is needed here.)

jbalsas commented 7 years ago

What is your perceived downside of the build-time approach? Wouldn't you need to rebuild and redeploy the changed bundles anyway?

What would be the scenario in which something should change on the dependencies graph without any module being changed?

yuchi commented 7 years ago

Having the configuration process at build time (as we have now) means that

  1. bundles built with older build processes could have bugs or a different (older) contract;
  2. bundles could have been misconfigured.

Also doing it at build time means that it must work on Maven (officially), Gradle (officially), Ant (unofficially) and Gulp (officially, for themes!).

jbalsas commented 7 years ago

As far as I can tell, the current build-time process works across all of our supported build systems (Maven, Gradle and gulp for themes).

I'm still unsure as to how this would work without the build step... when is the graph created/regenerated? When are the URLs for the requests generated, and what information will they contain? (http://.../?dep1.js/module1.js or just http://.../?module1.js)

yuchi commented 7 years ago

This is more or less FUD from my side.

The generated config would be similar to the one currently generated but package aware.

Every occurrence of require will be detected for every module in every package, but will actually be resolved to a specific file in a specific dependency package version at runtime.

jbalsas commented 7 years ago

What does FUD stand for exactly in this case? 😂

Could you draw a rough idea of what this runtime resolution looks like? For instance, right now we have something like:

require('foo') ➡️ Loader::resolveDependencies ➡️ Loader::buildURL ➡️ Loader::loadScripts

This is all done in runtime inside the loader using the metadata provided in deploy time (generated at build time).
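That pipeline can be sketched roughly like this. The class and method names mirror the pseudo-flow in the comment but are illustrative, not the actual Liferay AMD Loader API, and the URL shape is an assumption:

```javascript
// Runtime sketch of require -> resolveDependencies -> buildURL, using
// the module -> dependencies metadata generated at build time.
// Illustrative names only; not the real Loader implementation.

class Loader {
  constructor(moduleMap, comboBase) {
    this.moduleMap = moduleMap; // e.g. { module1: ['dep1'], dep1: [] }
    this.comboBase = comboBase; // e.g. 'http://host/combo?'
  }

  resolveDependencies(name, seen = new Set()) {
    // Depth-first walk so dependencies are listed before dependents.
    for (const dep of this.moduleMap[name] || []) {
      this.resolveDependencies(dep, seen);
    }
    seen.add(name);
    return [...seen];
  }

  buildURL(modules) {
    return this.comboBase + modules.map(m => `/${m}.js`).join('&');
  }

  require(name) {
    const url = this.buildURL(this.resolveDependencies(name));
    return url; // loadScripts would then fetch and evaluate this URL
  }
}
```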

yuchi commented 7 years ago

In blogs-admin-web/src/main/resources/META-INF/resources/js/main.js

const Component = require('metal-component');
const SomeUtil = require('metal-component/lib/some-util');
module.exports = class BlogsAdmin extends Component {};

In blogs-admin-web/package.json

{
  "name": "blogs-admin-web-resources",
  "dependencies": {
    "metal": "^2.0.0"
  }
}

Given the resulting config.json of blogs-admin-web (skipping stuff and notes on #5)

{
  "packages": [
    {
      "name": "blogs-admin-web-resources",
      "version": "1.0.0",
      "dependencies": {
        "metal-component": "^2.0.0"
      },
      "modulesRequires": {
        "js/main.js": [
          "metal-component",
          "metal-component/lib/some-util"
        ]
      }
    },
    {
      "name": "metal-component",
      "version": "2.1.4",
      "...": "..."
    }
  ]
}

If we need to resolve and execute blogs-admin-web-resources@1.0.0/js/main.js we need to
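A hedged sketch of what that resolution step could look like, given the config.json above. The semver check is a deliberate toy (it matches the major version only), the index fallback for bare package requires is an assumption, and a real loader would use a proper semver library:

```javascript
// Runtime sketch: resolve a bare require specifier recorded at build time
// (e.g. 'metal-component/lib/some-util') to a concrete package version
// listed in config.json.

function satisfiesCaret(version, range) {
  // Toy check: '^2.0.0' matches any 2.x.x; a real implementation would
  // also honor the minor/patch floor of the range.
  if (!range.startsWith('^')) return version === range;
  const wantMajor = Number(range.slice(1).split('.')[0]);
  const major = Number(version.split('.')[0]);
  return major === wantMajor;
}

function resolveRequire(config, fromPackage, specifier) {
  // Split 'metal-component/lib/some-util' into package name + module path.
  const [pkgName, ...rest] = specifier.split('/');
  const range = fromPackage.dependencies[pkgName];
  const target = config.packages.find(
    p => p.name === pkgName && satisfiesCaret(p.version, range)
  );
  if (!target) throw new Error(`Unresolved: ${specifier}`);
  // Assumed convention: a bare package require maps to its 'index' module.
  const modulePath = rest.length ? rest.join('/') : 'index';
  return `${target.name}@${target.version}/${modulePath}`;
}
```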

yuchi commented 7 years ago

Notes:

  1. After a quick review probably most of the algorithm should be part of ensureLoaded itself.
  2. This can be done on the server too, so the concept of buildURL is useless here, since we know beforehand what the server will push to us “HTTP2-style” and we don’t need to specify the whole list of modules, just the entry point (see #9 for a cooler usage of this).

yuchi commented 7 years ago

By the way…

What does FUD stand for exactly in this case? 😂

…means I’m not so sure about the approach 😢 With a build-time process we are splitting the contract between the registry in the portal, the interpreters (at service tracking time), the client-side loader, the build phase… The next step would be to add SMC’s coffee machine to the mix.

jbalsas commented 7 years ago

I think I had a question about this and I'm not sure if I ever voiced it. It is about [...] server will push to us “HTTP2-style” [...], which in my mind raises some flags regarding URLs. If I understand it correctly, a URL such as http://.../blogs-admin-web…main.js may return something like (simplified):

class Component {};
class BlogsAdmin extends Component {};

This will probably make URLs non-idempotent and risk serving stale cached data...

yuchi commented 7 years ago

Yeah, more or less this is the challenge.

When using AMD you usually point to a single prepackaged module per package, which is why you didn’t see issues up to now. Have a look at lodash: you now usually access this file. With the new loader you could call lodash/forEach, which would in turn ask for 10 more modules.

With explicit modules in the combo URL we would saturate the available URL length very, very quickly.

In Webpack or SystemJS you would code-split your app. For example, the main logic would be a base bundle, and every main route an additional one. When the user navigates to a route you would load it lazily, and only modules not already loaded by the base bundle would be downloaded.

We can’t do that because we don’t have a global entry point at build time to split the graph correctly.

Also, we don’t know beforehand which modules will be loaded (directly or lazily) by the page. We could do that at runtime (similarly to header-portlet-javascript in liferay-portlet.xml) by injecting this info into the request, processing it, splitting the graph accordingly, and producing a bundle with a persistent UUID, and therefore one that is cacheable. Hairy.
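The “persistent UUID” idea at the end could be sketched like this: derive a stable identifier from the set of modules a split produces, so the same split always maps to the same cacheable URL. The hash here is a toy and the function name is hypothetical; a real implementation would use something like SHA-256 server-side:

```javascript
// Sketch: a cacheable bundle id derived from a module *set*, independent
// of the order in which the modules were discovered.
// Toy 32-bit hash for illustration; use a real digest in practice.

function bundleId(modules) {
  // Sort so the id depends only on the set, not the traversal order.
  const canonical = [...modules].sort().join('\n');
  let hash = 0;
  for (let i = 0; i < canonical.length; i++) {
    hash = (hash * 31 + canonical.charCodeAt(i)) >>> 0;
  }
  return hash.toString(16);
}
```

Because the id is stable, the resulting bundle URL can carry far-future cache headers without the staleness risk raised earlier in the thread.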