tbranyen / amd-next

If we were to have a v2 of the AMD specification what would it look like?

General discussion #1

Open tbranyen opened 9 years ago

tbranyen commented 9 years ago

Let's say someone was working on improving AMD including a new loader and optimizer. What would you hope was different?

As developers interested in AMD, we need to break down the current status quo of how we think about and use this specification. It's a powerful concept that has unfortunately aged very poorly. Think about it: do we really need this many different ways to define and require a module? https://gist.github.com/tbranyen/8480792

Can you help me make a wishlist of possible changes? Let's not see AMD die because we didn't care enough to try to improve it.

My wishlist:

Note: there are specs and implementations; they are tightly coupled, so I want feedback on both.

- Limit options to fulfill and consolidate where possible: no more packages, map, contexts, exclude, include, etc. options. Hopefully only paths will need to be implemented.
- Requiring a module that hasn't been loaded won't throw an error; it returns undefined and loads under the hood, and a subsequent call will return the cached module. Super useful for browser dev tools.
- Support loading index.js from folder paths: src/ would load src/index.js.
- No more global local requires, ex: require('a') vs require('./a').
- All unresolved global modules must end up at a resolver that is sophisticated enough to allow node or bower lookups.
- You can require a module under more than one name.
- Limit anonymous define syntax to AMD-style: define(Array, fn) or SCJS-style: define(fn), with no ambiguity in between.
- Named modules are allowed, but are discarded if loaded from a path.
- Change plugin syntax to be extension based:

require.config({
  paths: { html: 'lodash-template-loader' }
});

require('./some/template.html');

// Instead of
require('html!./some/template');
treasonx commented 9 years ago

Requiring a module that hasn't been loaded won't throw an error, it returns undefined and loads under the hood, a subsequent call will return the cached module

This will result in unexpected behavior later when the module is used. It might be difficult to track it back to the fact that the module wasn't loaded. The error we get now isn't very helpful in RequireJS, but it blows up early!

No more packages, map, contexts, exclude, include, etc. options

What about compatibility with modules that were authored with packages in mind? Excludes and includes are powerful optimization knobs.

No more global local requires, ex: require('a') vs require('./a')

Local requires are useful when authoring packages.

Wish List:

I would love a loader which can figure out dependencies based on package.json or bower.json. RaveJS was a good step in that direction. I hate having to configure paths and shims for non-AMD dependencies.

Things I cannot live without:

RequireJS has a complicated build config, but it allows me to package my applications according to real-world usage. For example, I have a script which will inspect the page and dynamically load other bundles at runtime. This can be complicated to configure, but it's a very powerful production optimization!

tbranyen commented 9 years ago

Hey @treasonx thanks for commenting! Gonna address in order:

Since this rejects a promise (or throws an error), if it happens in one of your modules, it will not execute with undefined, it simply won't execute at all.
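A rough sketch of the failure mode I have in mind (the loader API name here is a placeholder, nothing is final):

// Hypothetical promise-based loader: a dependency that fails to load
// rejects, so the dependent factory never runs with undefined.
loader.load('awesome')
  .then(function (awesome) {
    awesome.beGreat(); // only reachable if 'awesome' actually resolved
  })
  .catch(function (err) {
    console.error('Failed to load "awesome":', err); // loud and early
  });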

As for real-world usage, that's exactly where I'm coming from as well, and I want to ensure that this doesn't ruin the flexibility.

tbranyen commented 9 years ago

Oh right, I want to add that I'm not going to remove the include/exclude features. I want to wrap them into the paths configuration. If you want to exclude a module in a build, you'd simply set the path to null.
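A sketch of what that could look like (module name invented for illustration):

require.config({
  paths: {
    // hypothetical build setting: a null path would exclude the module
    // from the built bundle, instead of needing a separate exclude option
    analytics: null
  }
});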

treasonx commented 9 years ago

A crazy example of the behavior when an unloaded module's require returns undefined:

define(function (require) {
  var awesomeSauce = require('awesome');

  return function () {
    console.log('This is gonna be great!');
    setTimeout(function () {
      awesomeSauce.beGreat();
    }, 5000);
  };
});

I am trying to illustrate that the dev might not use the module until much later in the application's life. This would result in an error. An experienced dev will be able to quickly figure this out, but a novice will lose a lot of time trying to figure out how their module became undefined! Maybe at least a warning in some form of debug mode would help.

tbranyen commented 9 years ago

Right, but since that's the Simplified CommonJS wrapper, all dependencies scanned via require must be successfully resolved before the function body will execute.
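For context, this is roughly how loaders already discover those dependencies: the factory's source is scanned for require('...') calls before it executes (a simplified sketch; real loaders like RequireJS use a more careful regex and strip comments first):

// Collect static require('...') ids from a factory so they can all be
// loaded before the factory body ever runs.
function scanDependencies(factory) {
  var re = /require\s*\(\s*["']([^"']+)["']\s*\)/g;
  var source = factory.toString();
  var deps = [];
  var match;
  while ((match = re.exec(source)) !== null) {
    deps.push(match[1]);
  }
  return deps;
}

// scanDependencies(function (require) { var a = require('awesome'); })
// -> ['awesome']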

csnover commented 9 years ago

Can you help me make a wishlist of possible changes? Let's not see AMD die because we didn't care enough to try to improve it.

AMD conflates module format and loader, so it’s important to clarify which parts are important and which parts go away. I personally have no qualms about AMD dying to be replaced by a high-quality standardised module format and/or loader that covers all known use cases.

No more packages, map, contexts, exclude, include, etc. options

Hopefully only paths will need to be implemented

IIRC contexts are some RequireJS-specific thing, and exclude/include are part of the build system, which are mandatory for shaping built layers. paths is not enough API to do everything that needs to be done.

paths is a physical module ID to path mapping. map is a logical module ID to module ID mapping, and include and exclude operate on the module ID level as well. After you have done a build, paths is never used, but map can continue to be used to shim/replace modules at runtime.
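To make that distinction concrete, a hypothetical config (module names invented for illustration):

require.config({
  // paths: physical — tells the loader where a file lives (unbuilt only)
  paths: { lodash: 'vendor/lodash/lodash' },
  // map: logical — any module ('*') asking for 'lodash' is handed
  // 'lodash-compat' instead; this still applies after a build
  map: { '*': { lodash: 'lodash-compat' } }
});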

Requiring a module that hasn't been loaded won't throw an error, it returns undefined and loads under the hood, a subsequent call will return the cached module

Super useful for browser dev tools

As you’ve described it, this is a bad API design as @treasonx already points out. Silent failure is the worst operation mode and should never be included. If this were done then this would also need to introduce a new different failure signaling mechanism. Like, say, Promises.

Support loading index.js from folder paths: src/ would load src/index.js

This is what packages is for.

No more global local requires, ex: require('a') vs require('./a')

Exactly, I want to make it identical to Node's system. Right now if you require('a') in an AMD loader, it has to resolve that as ./a, which makes no sense in my mind. That's a global lookup, not a local one. Let's enforce the Node rules.

I don’t understand this. require('a') goes to look up the location of a in packages and then falls back to baseUrl if there is no registered package.

All unresolved global modules must end up at a resolver that is sophisticated enough to allow node or bower lookups

This seems like a good enhancement, and one that I believe exists in an incomplete form in the WHATWG loader.

You can require a module under more than one name

I don’t understand what this means. Are you describing map?

Limit anonymous define syntax to AMD-style: define(Array, fn) or SCJS-style: define(fn) no ambiguity in between

Are you talking about removing define(value)?

Named modules are allowed, but are discarded if loaded from a path

This would cause an inconsistency when switching between unbuilt and built code. I don’t think named modules should be a thing at all, and would instead say that AMD loaders need to have a standard for introducing modules to a loader cache. I think the WHATWG loader has some API for doing this but I am not sure which one it is. In Dojo loader and Node.js loader this involves adding keys to require.cache.
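For reference, the Node.js flavor of this looks roughly as follows (a sketch: the real cache holds Module instances, but a plain stub illustrates the mechanism):

// Seed Node's module cache so a later require() returns a stub.
var stubPath = require.resolve('./logger'); // assumes ./logger.js exists
require.cache[stubPath] = {
  id: stubPath,
  filename: stubPath,
  loaded: true,
  exports: { log: function () {} } // what require('./logger') will return
};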

Change plugin syntax to be extension based:

Plugins are not just used for loading other files! Look at dojo/has (or, heck, your own use.js). Plugins do need to be changed to provide failure signaling though.

We are working on a new loader proposal right now for Dojo 2 and we're going to bring this up for public feedback soon. Some of the contributors have already been talking to other platform-agnostic loader vendors (like James Burke), so we would be happy to subsume this wishlist under that initiative and help bring it to completion. That said, it’s not clear what is going to happen with the WHATWG loader at this point, but I think that it doesn’t make sense to start an “AMD 2” without knowing what is going to happen upstream.

tbranyen commented 9 years ago

AMD conflates module format and loader, so it’s important to clarify which parts are important and which parts go away. I personally have no qualms about AMD dying to be replaced by a high-quality standardised module format and/or loader that covers all known use cases.

Awesome, I like to think of AMD v2 as a shift in that direction. I'm not saying this is going to be that spec, but I hope it can at least spark some discussion.

IIRC contexts are some RequireJS-specific thing, and exclude/include are part of the build system, which are mandatory for shaping built layers. paths is not enough API to do everything that needs to be done.

Right, as you have probably guessed, I meant...

Note: there are specs and implementations; they are tightly coupled, so I want feedback on both.

...very literally. I'm intentionally conflating to get use cases and implementations on the table. Currently no loaders operate in harmony, and that's really sad. I've authored numerous plugins that support Require, Curl, and Dojo, and they have all been super complicated and inconsistent.

paths is a physical module ID to path mapping. map is a logical module ID to module ID mapping, and include and exclude operate on the module ID level as well. After you have done a build, paths is never used, but map can continue to be used to shim/replace modules at runtime.

Maybe you and a handful of others don't find this confusing, but everyone else does.

As you’ve described it, this is a bad API design as @treasonx already points out. Silent failure is the worst operation mode and should never be included. If this were done then this would also need to introduce a new different failure signaling mechanism. Like, say, Promises.

I'm thinking Promises or throwing, so errors are loud and clear. A rejection will fail the dependency requirement for a module and not execute the callback. You can see an example of the implementation I'm thinking of in the above screenshot to @treasonx.

This is what packages is for.

require.config({
  packages: [
    { name: 'my-component', main: 'index.js', location: '../components/my-component' },
    { name: 'their-component', main: 'index.js', location: '../components/their-component' },
    { name: 'our-component', main: 'index.js', location: '../components/our-component' },
    { name: 'your-component', main: 'index.js', location: '../components/your-component' }
  ]
});

require('my-component');
require('their-component');
require('our-component');
require('your-component');

Compared to:

require('../components/my-component/');
require('../components/their-component/');
require('../components/our-component/');
require('../components/your-component/');

This would cause an inconsistency when switching between unbuilt and built code. I don’t think named modules should be a thing at all, and would instead say that AMD loaders need to have a standard for introducing modules to a loader cache. I think the WHATWG loader has some API for doing this but I am not sure which one it is. In Dojo loader and Node.js loader this involves adding keys to require.cache.

Ha, I wasn't brave enough to volunteer discarding them entirely, even though I completely agree.

Plugins are not just used for loading other files! Look at dojo/has (or, heck, your own use.js). Plugins do need to be changed to provide failure signaling though.

Plugins with any kind of configuration in the identifier should be abolished imo. They are awful to look at and configure. Use the shared loader configuration; that's what it's there for! use.js is still totally usable with this approach.

require.config({
  paths: { shim: 'use-js' }
});

require.load('some-module.shim');

It's the same semantics as !; the extension doesn't really mean anything. It's just swapping where the plugin identifier is.
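Side by side, using the hypothetical use-js mapping above:

// Today: the plugin prefix travels inside the identifier
require('use!some-module');

// Proposed: the extension is resolved to the plugin via shared config
require.load('some-module.shim');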

csnover commented 9 years ago

paths is a physical module ID to path mapping. map is a logical module ID to module ID mapping, and include and exclude operate on the module ID level as well. After you have done a build, paths is never used, but map can continue to be used to shim/replace modules at runtime.

Maybe you and a handful of others don't find this confusing, but everyone else does.

I’d like to try to avoid hyperbole during this discussion (“handful of others”, “everyone else”) to keep it focused on facts for the time being.

If people don’t understand map, then better education is the answer, not discarding a critical feature because it’s misunderstood. Several of our customers use this functionality to layer new functionality onto existing modules, to alias modules at runtime, and to introduce hot fixes, even in the presence of built layers. It’s not something you can do with a purely physical filesystem mapping like paths.

If one can understand that a URL is not the same as a file path, one can also understand that a module ID is not the same as a file path. To make an nginx analogy, paths is to try_files as map is to rewrite. It would be hard to argue nginx could work well without both.

Compared to:

require('../components/my-component/');

I think there are two unresolvable problems with this.

  1. Node.js allows you to change index.js to whatever you want by setting the main key of package.json so you can’t just naïvely assume to load index.js. IIRC Bower also has this main key. There are quite a lot of libraries out there that put themselves at foo/foo.js instead of foo/index.js (off the top of my head, jQuery, FormatJS, Esprima, and Intl.js all do this). The main library of r.js itself is placed somewhere else. In the absence of strong convention we need to fall back on configuration.
  2. Using relative mids to refer to modules that are coming from other packages improperly conflates physical paths with module IDs again. Any time you’re doing something where you are loading another package you need to do it with an absolute identifier in order to be compatible with different filesystem layouts (for example the node_modules pathing), as sketched below. The only filesystem mapping you can really rely on is the one that is inside your own package.
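In other words (module IDs invented for illustration):

// Inside my-package/lib/widget.js:
require('./helpers');         // relative: stays inside my-package
require('other-package/api'); // absolute: the loader resolves it,
                              // wherever other-package lives on disk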

Plugins with any kind of configuration in the identifier should be abolished imo. They are awful to look at and configure. Use the shared loader configuration that's what it's there for!

If you have a module that you want to distribute that is usable across multiple platforms, and in the browser case it has a dependency on a different module that assumes the DOM API is available (say it accesses document from its factory), what is your proposed solution? Right now define([ 'dojo/has!host-browser?domModule' ]) works great.

Thanks,

tbranyen commented 9 years ago

If people don’t understand map, then better education is the answer, not discarding a critical feature because it’s misunderstood. Several of our customers use this functionality to layer new functionality onto existing modules, to alias modules at runtime, and to introduce hot fixes, even in the presence of built layers. It’s not something you can do with a purely physical filesystem mapping like paths.

My point was that you were trivializing complexity. I want to limit the number of abstract concepts that surround a module being loaded: thinking only about local and global paths, absolute module names that get loaded via a registered resolver, and plugins. I do think map is important, and I'm easily swayed in that regard, since I've used it quite a bit for DI during testing.

Answering the unresolvable problems:

  1. The lookup for index.js does not respect package.json or bower.json. I figured simple was better here. It always looks for index.js if you point to a directory. If you want to load a node module it has to go through the node resolver.
  2. Sorry for not actually showing what I would expect the folder to look like:
components/
  my-component/
    template.html
    index.js

// index.js
define(function(require, exports, module) {
  'use strict';

  exports.template = require('./template.html');
});

If you have a module that you want to distribute that is usable across multiple platforms, and in the browser case it has a dependency on a different module that assumes the DOM API is available (say it accesses document from its factory), what is your proposed solution? Right now define([ 'dojo/has!host-browser?domModule' ]) works great.

I wrote: https://gist.github.com/tbranyen/9667269 to solve that problem slightly differently via configuration. I suppose this will be one place of contention. I like simple and it seems most people like simplicity. Why complicate a basic premise of requiring and sometimes transforming?

csnover commented 9 years ago

My point was that you were trivializing complexity.

OK, sorry if it seemed like that is what I was doing! I thought all I was saying was that paths is not nearly robust enough to meet the needs of authors so trying to eliminate everything else is not a workable solution.

absolute module names that get loaded via a registered resolver,

Why would you only use a resolver for absolute module ids? Are you thinking that resolver and transformer are separate steps? IIRC that is how the WHATWG loader was designed, it had a few different separate steps for these things.

The lookup for index.js does not respect package.json or bower.json. I figured simple was better here. It always looks for index.js if you point to a directory. If you want to load a node module it has to go through the node resolver.

OK, so I guess I am not sure why the loader should have this index thing at all? How is require('./foo/') superior to require('./foo/index')? You introduce magic and a new incompatibility to save 5 characters.

I wrote: https://gist.github.com/tbranyen/9667269 to solve that problem slightly differently via configuration. I suppose this will be one place of contention. I like simple and it seems most people like simplicity. Why complicate a basic premise of requiring and sometimes transforming?

I think I did not explain myself clearly.

Let’s say you are writing some hypothetical module that generates output. It wants to write output to the DOM in the browser, or write output to a stream in not-a-browser. For the DOM part you want to use jQuery. jQuery will throw an error if you load it into an environment without a DOM API. How do you propose this module will work? With an AMD plugin today it is trivial:

define([ "dojo/has!host-browser?jquery" ], function (jquery) {
  return {
    log() {
      if (jquery) {
        // ...
      }
    }
  };
});

Best,

jrburke commented 9 years ago

In general, I think what you are looking for are changes in loader implementation and config API for that loader.

That is a fine thing to push for, but I would avoid creating a new declarative module API (the define part of AMD). People are distracted by the promise of an ES standard, and even though I feel that is further off and people today are much better served just sticking with an existing module format, I do not think people want to spend time trying to parse out new authoring format/rules.

However, AMD users adopting a new loader seems a perfectly fine thing to do, and the sort of freshness people like to entertain.

So, I would position this effort as "a modern AMD loader and toolchain for a modern time", and not call it AMD v2.

For me, I am interested in a loader that had this sort of API:

https://github.com/jrburke/module/blob/master/docs/loader-config.md

but one that loaded AMD modules, so it would not have module as the API, but require. That is the general direction I have been thinking for a requirejs 3.0, although likely under a different name. I have not seen what dojo has been thinking, though.

Some notes on your items given that context:

Limit options to fulfill and consolidate where possible. Hopefully only paths and map will need to be implemented

The locations config in the above link speaks to this concern. As others on this thread have said though, there are distinct config differences for things that deal with paths vs things that deal with module IDs, and that distinction should be kept. Hopefully with locations config mapping to a locate hook would help with clarifying the point in the loader lifecycle that it applies to.

Requiring a module that hasn't been loaded won't throw an error, it returns undefined and loads under the hood, a subsequent call will return the cached module

Super useful for browser dev tools

As others have pointed out, this is hazardous. ES modules won't work that way, and if the concern is needing a callback for a require() done in a browser web console, that will also be a concern with ES modules. Maybe ES7 await/async stuff will help with that long term.

For now, in the browser console type require(['a']);, tap return, then type require('a'); hopefully the load for the first call happens fast enough that the second thing typed returns the export fine.
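That is:

// In the web console, today:
require(['a']); // async: kicks off the load of 'a'
// ...press return, wait a beat, then:
require('a');   // sync lookup: returns the export if the load finished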

Support loading index.js from folder paths: src/ would load src/index.js

This sounds like switching the default for packages config from main.js to index.js. That seems like a reasonable thing to do, but as others have mentioned, a way to indicate what the package.json metadata specifies for "main" is still needed.

No more global local requires, ex: require('a') vs require('./a')

I didn't follow this one. This is maybe influenced by how node goes about module resolution? But node's resolution mixes up paths and module IDs too much, which does not make sense in particular once modules can be bundled. Module IDs need to be separate entities from paths.

All unresolved global modules must end up at a resolver that is sophisticated enough to allow node or bower lookups

If "resolve" means use node's resolve to find a module path, then that seems like a possibility. Although not very portable for loaders loaded in the browser that do dynamic loads. Anything that requires a file system scan, would be bad in the browser. Dojo did this way back for i18n locales, and there was a constant issue of developers thinking something was wrong when they looked in the web console and saw a bunch of 404 errors.

If "resolve" means use node's module system to create the module, then r.js does this today when run in node. There are limits to its effectiveness because a mixed case, where a path-found module loaded by r.js -> node-instantiated module -> define'd module that should be found via r.js config. This is because of the node module system though.

For any case of "resolve" though it seems like it would only be for a specific environment, and specifically hard to support for dynamically loaded code in the browser.

In general, node's file layout and package.json scanning is not friendly for web-based module loaders. Same with bower's default layout. If those package managers would do the equivalent of adapt-pkg-main, that would help a lot and avoid config. Then map/alias config for any nested dependencies that cannot be flattened.

You can require a module under more than one name

That sounds like map config.

Limit anonymous define syntax to AMD-style: define(Array, fn) or SCJS-style: define(fn) no ambiguity in between

This can be encouraged through user practice, or even by a loader that just wanted to be stricter, but as mentioned above, I do not think it is worth baking this into a new AMD declarative module API due to developer fatigue about module formats.

Named modules are allowed, but are discarded if loaded from a path

This sounds like "if a named module is loaded but the name expected by the loader is different, the loader should use the name it expects". That may be possible in modern browsers and if limited to IE 10+. It seems a bit early to cut out IE 9, but again, something a new loader could decide to do. It might complicate things a bit, as related to files loaded that have multiple named modules, so I would probably place this one low on the list, would need some good tests and experiments before promising this feature.

Change plugin syntax to be extension based:

@csnover already spoke to this, but to add a bit more:

What about jquery plugins that are like 'jquery.scrollbar'? Does that qualify?

The main issue is treating a module ID like a path. It is a lot cleaner to treat it as just a module ID. Plus, different plugins can operate on multiple file extensions, and some can operate on the same extension. A 'text' plugin can load .txt and .html, and a 'template' plugin could also handle .html files. If both plugins are used in the same project, I do not see how to resolve that conflict without a lot more config.

It is best to treat loader plugins as something that can be explicitly parsed out. What could give you something like this would be a loader that had those explicit function hooks for the module lifecycle as mentioned in the loader-config.md link, and if you wanted to create a shortcut for that sort of behavior on a per-project basis, overriding the normalize hook to convert select file extensions to prefixed 'plugin!' module IDs would be possible.
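For instance, something along these lines (hook names assumed from loader-config.md; treat this as a sketch of the idea rather than that loader's actual API):

// Hypothetical per-project override: rewrite selected file extensions
// into plugin-prefixed module IDs during ID normalization.
module.config({
  normalize: function (id, parentId, defaultNormalize) {
    if (/\.html$/.test(id)) {
      return 'template!' + defaultNormalize(id, parentId);
    }
    return defaultNormalize(id, parentId);
  }
});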

tbranyen commented 9 years ago

@jrburke awesome, thanks for the feedback; definitely a lot to think through.

So, I would position this effort as "a modern AMD loader and toolchain for a modern time", and not call it AMD v2.

This is how I initially approached it, but I've been convinced that we need more. I want to implement a future loader and optimizer to a reproducible standard. I'd be happy to change the name from AMD v2, but I chose that right now so that everyone knows where I'm coming from.

The locations config in the above link speaks to this concern. As others on this thread have said though, there are distinct config differences for things that deal with paths vs things that deal with module IDs, and that distinction should be kept. Hopefully with locations config mapping to a locate hook would help with clarifying the point in the loader lifecycle that it applies to.

Yes, I like locations. I need to read more into how it works, but it is precisely the kind of consolidation I'm trying for.

For now, in the browser console type require(['a']);, tap return, then type require('a'); hopefully the load for the first call happens fast enough that the second thing typed returns the export fine.

That's exactly how this works, except without needing []. I'm still not convinced it's a problem. I've answered all the hesitations from @treasonx and @csnover. Any errors that occur fail the Promise/throw an exception which aborts any module requiring it.

This sounds like switching the default for packages config from main.js to index.js. That seems like a reasonable thing to do, but as others have mentioned, a way to indicate what the package.json metadata specifies for "main" is still needed.

Yeah, that would actually be perfect. Makes it more explicit than /.

I didn't follow this one. This is maybe influenced by how node goes about module resolution? But node's resolution mixes up paths and module IDs too much, which does not make sense in particular once modules can be bundled. Module IDs need to be separate entities from paths.

This comes as inspiration from Node. If you do not use a relative path, it's a module identifier and is passed on to a resolver (like node_modules lookup). I've had so many fewer problems with AMD and configuring paths when I stick to relative. baseUrls are awful, and looking up modules from them only causes problems when it comes to testing and optimizing, unless you're careful and experienced. I still think a baseUrl is necessary, but I do not think it should prefix absolute identifiers.

In general, node's file layout and package.json scanning is not friendly for web-based module loaders. Same with bower's default layout. If those package managers would do the equivalent of adapt-pkg-main, that would help a lot and avoid config. Then map/alias config for any nested dependencies that cannot be flattened.

I disagree very much here. I think we overlook the fact that node_modules and bower_components are laid out ideally for this very thing. It blows my mind that nobody has implemented this before. If you do it correctly you will only ever see a single 404 if there is a legitimate error. All the information you need to find a package in node_modules/bower_components exists in the package.json and bower.json files.

That sounds like map config.

Yeah, I was specifically alluding to paths.

This can be encouraged through user practice, or even by a loader that just wanted to be stricter, but as mentioned above, I do not think it is worth baking this into a new AMD declarative module API due to developer fatigue about module formats.

It just drives me nuts that there are sooooo many ways to define a module, and some of them are wicked confusing. For instance, lodash uses:

define(function() {
  return _;
});

Is this AMD? Maybe Simplified CommonJS? Is it mixed? The fact that mixed even exists speaks to a very large problem with the specification. It gets even more confusing when you see an example like this and try and suss out what the module value is:

define(function(require, exports) {
  exports.value = 'am i a property on the module value?';
  return { value: 'or am i?' };
});

What about jquery plugins that are like 'jquery.scrollbar'? Does that qualify?

It would if you had a "scrollbar" plugin registered. Although maybe this is better served under something new called extensions. We can register extensions formally (including the .js extension like Node does).

Thanks again!

fskreuz commented 9 years ago

Some things I'd like to pitch in:

Generic - Module loading should not depend on the file layout of Node modules or Bower libs. It should be somewhat generic.

Just my two cents (and wish list). Been using AMD ever since and still prefer it over Browserify.

jrburke commented 9 years ago

@tbranyen:

I wonder if mostly what you are looking for is more uniformity across amd-based projects. If projects are laid out to convention, it avoids a lot of configuration. I can see the case for just better project setup tooling and evangelizing it. This is really what node has too: all projects are laid out the same, so it avoids some configuration issues. They can do directory scanning too, but we can avoid that with some tools that know about the AMD conventions.

The tooling can also be focused on creation/install-time actions, with no need to run tooling on every app file change. So one of the great benefits of AMD loading in the browser, no need for build tools to start, is still maintained. I think the tooling could work out to be the following. I will focus on npm-based tooling, but a similar set could be made for bower. The names are just placeholders, to illustrate the concept:


1) amd-create-npm: creates a new project that sets up the baseUrl to be node_modules, and all the app modules go in a sibling app directory. Then only one paths config is needed for app. create-template (https://github.com/volojs/create-template) is a project template along those lines, where app.js is where the config lives. Adapting that style of project setup for node_modules and having this tool automate that setup seems very doable.

For all files in the 'app' directory, './relativeId' require() calls are used for modules in the app dir. For third party code, like 'jquery', those are still top level ID references, and would be loaded from the node_modules directory.

2) amd-npm: uses npm underneath, but after an npm install or uninstall, runs adapt-pkg-main (https://github.com/jrburke/adapt-pkg-main) to create node_modules/pkg.js files next to the node_modules/pkg/ directory, which just require the "main" module that was specified in the package.json. This completely avoids the need for packages config in the app.js.

If the node_modules ended up having some nested node_modules because npm dedupe did not completely work, then it can insert a map config in the app.js for that case.

This assumes the packages installed are AMD compatible, but there could be a flag in the app's package.json to convert cjs modules on amd-npm install.


With those tools, I believe that will homogenize projects, and the bonus is that the user does not deal with config manually any more, unless they want to get fancy for things like waitSeconds.

Some feedback from your previous reply to me:

This comes as inspiration from Node. If you do not use a relative path, it's a module identifier and is passed on to a resolver (like node_modules lookup). I've had so many fewer problems with AMD and configuring paths when I stick to relative. baseUrls are awful, and looking up modules from them only causes problems when it comes to testing and optimizing, unless you're careful and experienced. I still think a baseUrl is necessary, but I do not think it should prefix absolute identifiers.

I feel like this is perhaps a lack of better messaging around how to refer to modules. If you want something that is relative to another module in a directory (package), then './' is the thing to use.

Maybe the confusing part is that AMD allows you to just lay all your modules flat under baseUrl, and those modules could be a mix of local and third party code. If that is the case, then I believe the project layout above, that sets baseUrl to the node_modules and then just one paths config for app, with all the local modules in app, would address this concern.

I disagree very much here. I think we overlook the fact that node_modules and bower_components are laid out ideally for this very thing. It blows my mind that nobody has implemented this before. If you do it correctly you will only ever see a single 404 if there is a legitimate error. All the information you need to find a package in node_modules/bower_components exists in the package.json and bower.json files.

Maybe what you mean to suggest here is that the loader wants to resolve 'pkgName', it should ask for pkgName/package.json via an XHR call, JSON.parse it, then find the "main" module and then create an internal mapping so that when a module asks for 'pkgName', the 'pkgName/mainValue' is used for that dependency?
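Roughly this, as a sketch (paths and names illustrative):

// Runtime "main" resolution: fetch the package.json over XHR,
// read its "main" field, and record the package -> module mapping.
function resolveMain(pkgName, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'node_modules/' + pkgName + '/package.json');
  xhr.onload = function () {
    var main = JSON.parse(xhr.responseText).main || 'index.js';
    callback(pkgName + '/' + main.replace(/\.js$/, ''));
  };
  xhr.send();
}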

If so, then that is what adapt-pkg-main does, but just writes out a module at 'pkgName.js' that points to the main module, and does it at install-time. This avoids the need for CORS for when the JS is on a different domain than the web page. It also optimizes uniformly, does not require any special adapters in build tools or almond-like simpler AMD API shims.
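That is, a generated adapter of roughly this shape (assumed from the description above, not adapt-pkg-main's literal output):

// node_modules/pkgName.js, written at install time: forwards to
// whatever module the package.json "main" field pointed at.
define(['pkgName/main'], function (main) {
  return main;
});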

If the concern is perhaps a tool modifying contents in node_modules, npm does this today with package.json files, and to compile binary pieces. So an amd-npm tool should be fine in doing this.

Is this AMD? Maybe Simplified CommonJS? Is it mixed? The fact that mixed even exists speaks to a very large problem with the specification. It gets even more confusing when you see an example like this and try and suss out what the module value is:

AMD was made that way to try to reduce the amount of boilerplate that a developer had to type, since that has been a criticism of module systems that use a wrapping. While it would be nice to reduce the combinations now, I think there would be some pushback from some developers on the added verbosity.

Also see my comments about the social lack of interest in entertaining more module systems. If we wanted to really make a pass at a new declarative form, then I would favor going further than just massaging the define signature, and going more with something like jrburke/module (https://github.com/jrburke/module), but with module.define(function(){}) as the define(function(){}) replacement. That module system fixes other issues, like inline nested module defines and clearer top-level loading APIs.

I just do not see any appetite from developers for it given the ES cloud of uncertainty, and node's unwillingness to consider changes in their module system, even though it is getting dated as more people want things to work well for dynamic loading in the browser. In my fantasy world, we could work out something with the node folks that was better, and just skip waiting for ES. I just don't see it with the node/io.js split and the illusion that ES modules are "just around the corner".

But focusing on tooling to create homogenized AMD setups could go very far, as it is directly useful to developers today. They get immediate benefit vs. having to learn and understand new specs for unclear benefits. If the tooling for project setups is very successful, then that tooling can choose stricter forms for define() calls and config and allow us then to later make loaders that just target only those smaller stricter forms.

KidkArolis commented 9 years ago

npm@3 will have much more predictable install/dedupe behaviour. It might be possible to scan node_modules and autoconfigure everything without any 404s by including the npm install/dedupe algorithm in the loader. The entry point would be package.json. The problem with npm@2 is that you would get 404s depending on how node_modules happens to be structured; it's not always the same, depending on whether you ran npm install, npm update, or npm dedupe, etc.

Also, for npm to work well, the loader must support CJS+xhr out of the box.

This is the direction Rave.js was going in. It's a pity John no longer has time to work on it. What about picking up where he left off? There are some really good ideas there and the code is very nice.


ca0v commented 8 years ago

Will there continue to be a need for loaders or will ES6 modules eliminate that need? Does ES6 solve the performance problems that bundling solves?

aluanhaddad commented 8 years ago

@csnover

AMD conflates module format and loader, so it’s important to clarify which parts are important and which parts go away. I personally have no qualms about AMD dying to be replaced by a high-quality standardised module format and/or loader that covers all known use cases.

I absolutely agree. We become overly attached to the identity of our tools.

paths is a physical module ID to path mapping. map is a logical module ID to module ID mapping, and include and exclude operate on the module ID level as well. After you have done a build, paths is never used, but map can continue to be used to shim/replace modules at runtime.

This is surprising behavior, to say the least. It makes it hard to reason about behavior and defeats attempts at tooling and code analysis. I would also argue it is well in the vein of conflating module formats with module loaders in its own right. One of the unpleasant things is that more and more people are using CommonJS as a transpilation target. The static nature of ES modules is a major step in the right direction.

We are working on a new loader proposal right now for Dojo 2 and we're going to bring this up for public feedback soon. Some of the contributors have already been talking to other platform-agnostic loader vendors (like James Burke), so we would be happy to subsume this wishlist under that initiative and help bring it to completion. That said, it’s not clear what is going to happen with the WHATWG loader at this point, but I think that it doesn’t make sense to start an “AMD 2” without knowing what is going to happen upstream.

I agree that it makes no sense to create an "AMD 2", fragmenting an already fragmented community of web developers trying to make sense of the simultaneous trend towards ES6 module syntax and CommonJS transpilation. However, if Dojo 2 uses a first-party loader, I will avoid it like the plague. It is absolutely terrible trying to use a toolkit that integrates its own loader.

I don’t think named modules should be a thing at all

💯

If you have a module that you want to distribute that is usable across multiple platforms, and in the browser case it has a dependency on a different module that assumes the DOM API is available (say it accesses document from its factory), what is your proposed solution? Right now define([ 'dojo/has!host-browser?domModule' ]) works great.

That's all well and good until you consider the loader itself as a platform. Once something is written like that and shipped, every external tool has to parse all variants of the syntax. Plugin syntax hurts interop between module formats.

Several of our customers use this functionality to layer new functionality onto existing modules, to alias modules at runtime, and to introduce hot fixes, even in the presence of built layers.

And every external consumer of those customers' packages hates working with them.

@tbranyen

...very literally. I'm intentionally conflating to get use cases and implementations on the table. Currently no loaders operate in harmony and that's really sad.

Yeah it is awful. And another module format will make things worse.

I disagree very much here. I think we overlook the fact that node_modules and bower_components are laid out ideally for this very thing. It blows my mind that nobody has implemented this before. If you do it correctly you will only ever see a single 404 if there is a legitimate error. All the information you need to find a package in node_modules/bower_components exists in the package.json and bower.json files.

As @fskreuz said,

Generic - Module loading should not depend on the file layout of Node modules or Bower libs. It should be somewhat generic.

Both layouts are problematic and incredibly far from ideal. npm's node_modules schema still has to resort to nesting and duplication to support multiple transitive versions. Bower's bower_components structure doesn't even try. In my opinion, jspm gets this right: its flat, versioned folder structure and denormalized configuration work incredibly well. Regardless, why bake an assumption about package directory structure into the loader? TypeScript did this, for loading design-time declaration files, and it is problematic to say the least. This kind of thing needs to be configurable at installation time to allow transitive mapping to work correctly without assuming a layout.

Is this AMD? Maybe Simplified CommonJS? Is it mixed? The fact that mixed even exists speaks to a very large problem with the specification. It gets even more confusing when you see an example like this and try and suss out what the module value is:

define(function(require, exports) {
  exports.value = 'am i a property on the module value?';
  return { value: 'or am i?' };
});

That's a good point. That is undeniably confusing.

I realize I arrived at this discussion pretty late, but I wanted to make these remarks regardless.

One thing that I am frustrated by is that I feel there is a general and unhelpful attitude of AMD vs. Webpack/Browserify, and that people are forgetting that there are projects like SystemJS + jspm that are also browser-oriented. We need better browser-oriented tools that are not built on Node/CommonJS assumptions but are still able to consume modules written for them. I think that is a shared goal of the AMD and SystemJS + jspm communities and of the WHATWG loader specification.