What is your perceived downside of the build-time approach? Wouldn't you need to rebuild and redeploy the changed bundles anyway?
What would be the scenario in which something changes in the dependency graph without any module being changed?
Having the configuration process at build time (as we have now) means that the loader's contract gets baked into every built bundle, so changing it later requires rebuilding and redeploying all of them.
Also doing it at build time means that it must work on Maven (officially), Gradle (officially), Ant (unofficially) and Gulp (officially, for themes!).
As far as I can tell, the current build-time process works across all of our supported build systems (Maven, Gradle and gulp for themes).
I'm still unsure as to how this would work without the build step... when is the graph created/regenerated? When are the URLs for the requests generated, and what information will they contain? (http://.../?dep1.js/module1.js or just http://.../?module1.js?)
This is more or less FUD from my side.
The generated config would be similar to the one currently generated, but package aware.
Every occurrence of require will be detected for every module in every package, but will actually be resolved to a specific file in a specific dependency package version at runtime.
What does FUD stand for exactly in this case? 😄
Could you draw a rough idea of what this runtime resolution looks like? For instance, right now we have something like:
require('foo')
➡️ Loader::resolveDependencies
➡️ Loader::buildURL
➡️ Loader::loadScripts
This is all done at runtime inside the loader using the metadata provided at deploy time (generated at build time).
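For reference, here is a minimal sketch of that current flow. All bodies are illustrative (the real Loader is more involved), and the config shape { dependencies: { moduleName: [...] } } is an assumption made for the example:

```js
// Illustrative sketch of the current flow; these three functions mirror
// the Loader methods named above, with made-up bodies.

function resolveDependencies(modules, config) {
  // Depth-first walk over the deploy-time metadata; returns the
  // transitive closure in dependency-first order.
  const seen = new Set();
  const out = [];
  const visit = (name) => {
    if (seen.has(name)) return;
    seen.add(name);
    (config.dependencies[name] || []).forEach(visit);
    out.push(name);
  };
  modules.forEach(visit);
  return out;
}

function buildURL(modules) {
  // One combo URL listing every module explicitly.
  return '/combo?' + modules.map((m) => '/js/' + m + '.js').join('&');
}

function loadScripts(url) {
  // Inject a <script> tag and resolve once it has loaded.
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = url;
    script.onload = resolve;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

// require('foo') then becomes, roughly:
// loadScripts(buildURL(resolveDependencies(['foo'], config)))
```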
In blogs-admin-web/src/main/resources/META-INF/resources/js/main.js
const Component = require('metal-component');
const SomeUtil = require('metal-component/lib/some-util');
module.exports = class BlogsAdmin extends Component {};
In blogs-admin-web/package.json
{
  "name": "blogs-admin-web-resources",
  "dependencies": {
    "metal-component": "^2.0.0"
  }
}
Given the resulting config.json of blogs-admin-web (skipping stuff and notes on #5)
{
  "packages": [
    {
      "name": "blogs-admin-web-resources",
      "version": "1.0.0",
      "dependencies": {
        "metal-component": "^2.0.0"
      },
      "modulesRequires": {
        "js/main.js": [
          "metal-component",
          "metal-component/lib/some-util"
        ]
      }
    },
    {
      "name": "metal-component",
      "version": "2.1.4",
      "...": "..."
    }
  ]
}
If we need to resolve and execute blogs-admin-web-resources@1.0.0/js/main.js we need to:

1. ensureLoaded([ "blogs-admin-web…main.js" ]) to queue the load of the module if required
2. findModuleRequires(package, module) that returns metal-component, metal-component/lib/some-util
3. resolveModuleRequire(package, module, request) which resolves ./ or ../ requests inside the same package as a relative path from module
4. resolvePackage(package, request) to resolve the right requested package version based on the package's dependencies
5. ensureLoaded(modules) to load the fully resolved modules if required, recursing just like the initial ensureLoaded([ "blogs-admin-web…main.js" ]) call
6. ensureExecuted([ "blogs-admin-web…main.js" ]) which (if the module hasn't been executed already):
   - executes blogs-admin-web-resources@1.0.0/js/main.js with a require which can execute modules by using the unresolved form and using ensureExecuted, a module object and a module.exports object, and __dirname and __filename
   - saves module.exports in the execution cache
   - returns module.exports
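A rough sketch of steps 2–4 against the config.json shape above (the registry object and its findInstalledPackage lookup are hypothetical stand-ins for the deployed package registry; semver matching is elided):

```js
// Illustrative only; shapes follow the config.json example above.

// Step 2: the requires recorded at build time for one module.
function findModuleRequires(pkg, moduleName) {
  return pkg.modulesRequires[moduleName] || [];
}

// Step 4: map a bare request ("metal-component") to the concrete
// deployed version satisfying this package's semver range.
function resolvePackage(pkg, request, registry) {
  const packageName = request.split('/')[0];
  const range = pkg.dependencies[packageName];
  return registry.findInstalledPackage(packageName, range); // hypothetical
}

// Step 3: fully resolve one require() request made by a given module.
function resolveModuleRequire(pkg, moduleName, request, registry) {
  if (request.startsWith('./') || request.startsWith('../')) {
    // Relative requests stay inside the same package, resolved
    // against the requiring module's directory.
    const dir = moduleName.split('/').slice(0, -1).join('/');
    const base = 'file:///' + (dir ? dir + '/' : '');
    const path = new URL(request, base).pathname.slice(1);
    return pkg.name + '@' + pkg.version + '/' + path;
  }
  const target = resolvePackage(pkg, request, registry);
  // "metal-component/lib/some-util" -> "lib/some-util"; a bare
  // "metal-component" would fall back to the package's main module.
  const modulePath = request.split('/').slice(1).join('/') || 'index';
  return target.name + '@' + target.version + '/' + modulePath;
}

// With the config above:
// resolveModuleRequire(blogsAdminPkg, 'js/main.js',
//   'metal-component/lib/some-util', registry)
//   -> 'metal-component@2.1.4/lib/some-util'
```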
Notes:

- The resolution steps (2 to 5) happen inside ensureLoaded itself.
- buildURL is useless here, since we know beforehand what the server will push to us "HTTP2-style" and we don't need to specify the whole list of modules, just the entry point (see #9 for a cooler usage of this).

By the way…
What does FUD stand for exactly in this case? 😄
…means I'm not so sure about the approach 😢. With a build-time process we are splitting the contract between the registry in the portal, the interpreters (at service tracking time), the client-side loader, the build phase… The next step would be to add SMC's coffee machine into the mix.
I think I had a question about this and I'm not sure if I ever voiced it. It is about [...] server will push to us "HTTP2-style" [...], which in my mind raises some flags regarding URLs. If I understand it correctly, a URL such as http://.../blogs-admin-web…main.js may return something like (simplified):
class Component {};
class BlogsAdmin extends Component {};
This will probably make URLs non-idempotent and risk serving stale cached data...
Yeah, more or less this is the challenge.
When using AMD you usually point to a single prepackaged module per package; this is why you didn't see issues up to now. Have a look at lodash: right now you usually access the single prepackaged lodash.js file. With the new loader you could call lodash/forEach, which would ask for another 10 modules.
With explicit modules in the combo URL we will saturate the available space very, very quickly.
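To make that concrete, compare the two URL shapes (everything here is hypothetical, including the version tag and the internal module names):

```js
// AMD today: one prepackaged file per package.
const todayURL = '/combo?/js/lodash/lodash.js';

// Per-module resolution: requiring lodash/forEach alone would have to
// list every transitive internal module explicitly...
const tomorrowURL =
  '/combo?/js/lodash@4.x.x/forEach.js' +
  '&/js/lodash@4.x.x/_arrayEach.js' +
  '&/js/lodash@4.x.x/_baseEach.js' +
  '&/js/lodash@4.x.x/_castFunction.js'; // ...and so on
// A real page pulls in many packages, so the URL quickly exceeds
// practical length limits.
```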
In Webpack or SystemJS, you would code-split your app. For example, the main logic would be a base bundle, and every main route an additional one. When the user navigates to a route you would load it lazily, and only modules not already in the base bundle would be downloaded (see the sketch below).
We can't do that because we don't have a global, entry-point-aware build phase to split the graph correctly.
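For reference, this is the kind of split meant here, in standard dynamic-import form (the route and module names are made up):

```js
// Webpack/SystemJS-style code splitting sketch; './routes/blogs' is a
// made-up module. The bundler sees the whole graph at build time, puts
// the main logic in a base bundle and emits one extra chunk per route.
async function navigateTo(route) {
  if (route === '/blogs') {
    // Loaded lazily on first navigation; modules already present in
    // the base bundle are not downloaded again.
    const { default: BlogsPage } = await import('./routes/blogs');
    BlogsPage.render();
  }
}
```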
Also, we don't know beforehand what modules will be loaded (directly or lazily) by the page. We could do that at runtime (similarly to header-portlet-javascript in liferay-portlet.xml): injecting this info in the request, processing it, splitting the graph accordingly, and producing a bundle with a persistent UUID (and therefore cacheable). Hairy.
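A very rough sketch of what that runtime variant could look like (everything below is hypothetical, including the hash-as-UUID trick, the resolveSubgraph step and the URL shape):

```js
// Hypothetical runtime graph splitting, to show the moving parts.
const crypto = require('crypto');

// resolveSubgraph stands in for the runtime splitting step: it would
// return every module id reachable from the declared entry points.
function bundleURLFor(declaredModules, resolveSubgraph) {
  const moduleIds = resolveSubgraph(declaredModules).sort();
  // A stable hash of the subgraph gives the bundle a persistent id,
  // making the resulting URL cacheable.
  const uuid = crypto
    .createHash('sha256')
    .update(JSON.stringify(moduleIds))
    .digest('hex')
    .slice(0, 32);
  return '/o/js-bundles/' + uuid + '.js';
}
```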
Even if we already had combo loading, we would still need something along the lines of liferay-module-config-generator to have a static graph of module-to-module dependencies in order for requests to be correctly queued.

If this was done at build time, we would have a hard time changing the contract in the loader, since that would also require rebuilding all bundles. Also, since we are more or less sharing resources and packages, an older bundle could poison other bundles with an incorrectly configured setup.
If we do it at runtime, we would be forced to write it in Java. The main problem is that there are tons of libraries written in JS that do this perfectly. We could wrap them with Nashorn, but I have not forgotten the lesson we learned with JRuby and SCSS. (It could be totally fine, actually. JavaScript/Nashorn is way faster and lighter than JRuby AFAIK. Benchmark needed here.)