tc39 / proposal-built-in-modules


Module specifier: npm-style "@prefix/module" or something else? #12

Open zkat opened 5 years ago

zkat commented 5 years ago

Several others at npm (a voting member) and I feel that we should be using the existing cowpath for scoped module syntax (@std/foo) instead of std:foo or other such alternatives.

While this would usually come down to a simple matter of taste, in this case we have 254,091 scoped packages in the registry already using this syntax. That's more scoped packages than most other languages' registries have in total, and it's a significant % of the ~950k packages on the main npm registry.

This syntax would also be beneficial because it gives polyfills an existing mechanism that works with all current and past versions of Node, and it can easily be made to work with the browser module system through this proposal and others that specify how non-./ specifiers are resolved.
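
For illustration, a minimal sketch of that polyfill path, assuming a hypothetical @std/foo package:

npm i @std/foo

// Resolves to node_modules/@std/foo on runtimes that don't ship the module;
// a runtime that provides @std/foo natively could prefer its built-in copy.
import foo from '@std/foo';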

isaacs commented 5 years ago

@devsnek

(@std/ especially, since it overlaps with npm)

I don't understand how this "overlaps with npm". There is nothing npm-specific about this approach. npm landed on this convention by process of elimination, since it was the only thing that could work for namespaced modules without changing the module system, filesystem, or existing conventions for specifying and fetching modules.

The one thing that stands out to me most in all this is that by using a string prefix in the specifier, we trample over the current behaviour which is that every string is a valid specifier that the implementer can do anything with.

It sounds like you just don't want a string literal to be the argument to import at all.

glen-84 commented 5 years ago

@isaacs,

For new developers, after installing a package from npm:

npm i @some/thing

Isn't it reasonable to expect that they would also try to install @std/thing with npm?

What makes it different, other than documentation?

What would happen?

Built-in modules are not the same as user-land packages:

Having them coexist within the same namespace is just looking for trouble. There have already been issues trying to reclaim names in npm. I've read people suggesting that npm just "forcefully" take currently-used names when they think it makes sense. That's outrageous.

And which of these is correct? (answer quickly)

import fs from "@nod3js/fs";
import fs from "@modejs/fs";
import fs from "@nodejs/fs"; 
import fs from "@node/fs";

Are you going to start registering all scopes that might look similar, for every runtime (std, nodejs, and all future technologies)?

What is wrong with just differentiating them?

import mod from nodejs "fs";
import pkg from "@some/thing";

I'm all for consistency, but this just isn't the same thing.

demurgos commented 5 years ago

@glen-84 I don't believe people will expect to install "@std/lib" with npm any more than they do current Node core modules ("fs", "path"), etc. I don't have any stats, but I believe people are able to tell the difference and don't try to install those from npm just because they are module identifiers.

After reading the arguments, I tend to agree with @isaacs: using the existing notation is a good thing. It will avoid increasing the complexity of the language / edge cases and leave module resolution to the loader. I'd be OK with going a step further and using a custom protocol ("std://thing") but having a special syntax (not string literals) seems to go too far.

glen-84 commented 5 years ago

@demurgos,

There's probably a reason for the 280k weekly downloads of https://www.npmjs.com/package/fs?

And an example of the confusion @ https://github.com/npm/security-holder/issues/10.

demurgos commented 5 years ago

Thanks for the link: I wasn't aware that there was so much confusion around this: a few thousand published packages depend on fs!

I still think that string literals should be used, but it makes a point in favor of a custom URL scheme/protocol.

ljharb commented 5 years ago

Since any scope we chose would need to be reserved in advance anyways, there’d be nothing for them to install - in other words, it would already error out, and everybody would already be prevented from installing it. Future non-npm registries would have the context to pre-reserve any needed scopes.

domenic commented 5 years ago

@ljharb while that may be true for environments that get in "on the ground floor", it doesn't scale very well, as I outlined earlier: https://github.com/tc39/proposal-javascript-standard-library/issues/12#issuecomment-447171228

ljharb commented 5 years ago

That seems to be the same problem as with any other syntax, except that npm wouldn't exist as a well-understood mechanism for it. Hosts can’t completely control anything if import maps land - the ecosystem can always establish an overridden meaning for a protocol prefix. E.g., I can create blinker: in the ecosystem the same way I could with @blinker/ - either by publishing a loader that uses it, or an import map tool, etc. - and cause the company to need to adjust their name or risk the collision if they moved too slowly (the same risk as not claiming a trademark or a domain name quickly enough).
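
For illustration, an import-map sketch of that kind of ecosystem override (the blinker: prefix, @blinker/ scope, and URL are hypothetical):

<script type="importmap">
{
  "imports": {
    "blinker:gpio": "/vendor/blinker-gpio.js",
    "@blinker/gpio": "/vendor/blinker-gpio.js"
  }
}
</script>

Both keys point at the same userland file, giving the protocol-style and scope-style prefixes an overridden meaning within that application.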

“Scope-squatting” is already the norm, since a scope is just a user account or org - every username on npm is technically scope squatting until they publish a scoped package, and worrying about the proliferation of these seems like both npm’s concern, and something they are unconcerned about.

domenic commented 5 years ago

Yes, it's precisely the fact that they're unconcerned about it that makes me think perhaps scopes/npm usernames are not the best mechanism to build standard library prefixes on top of.

littledan commented 5 years ago

@domenic

it doesn't scale very well,

Is the potential difficulty of allocating namespaces partly mitigated if we go with putting the standard modules in a single, shared namespace?

I can see how it can be worrying to align with a privately managed namespace. But right now, I imagine that if you're making a device that runs with JavaScript, you might want to make desktop development tools available for it on npm anyway. Using a separate syntax for the namespace might not end up getting around the listed problems in practice; they may be better resolved by working with npm.

If we use scheme:module, we might want to register each prefix with IANA as a scheme, to make sure URLs don't reuse the same thing.

Since module specifiers aren't URLs in most environments, I'm not sure how important this is. (Some URLs are module specifiers, but that's the extent of the relationship.) Similar to the npm scopes issue, assuming that a registry designed for one thing should also be used for built-in module prefixes is bound to lead to confusion and weirdness.

I don't quite understand this: If the definition of URLs might expand in the future to include yet-to-be defined schemes, and we might want to be able to load modules from these URLs on the web, then isn't there a risk of an ambiguous interpretation or blocked evolution path? Maybe the concern is theoretical if we're probably not going to expand the definition of URLs on the web in this direction.

domenic commented 5 years ago

On the web we fetch modules only from fetch schemes, so there is no risk there.

isaacs commented 5 years ago

Apologies for the length. There's a tl;dr at the end.

So, let's dig into userland collisions with core module names. This is an issue that we care about at npm, and is one reason why I'm eager to move node towards @nodejs/<name> instead of the current "fully unscoped" approach, and I can share some of what I think has gone well, and what hasn't. I think that it's instructive to this discussion to see how this has played out in practice in a large JavaScript community relying on an ecosystem of core modules and userland packages.

fs was always a "squatted" package on npm. Before npm took it over, it was just an index.js that logged "I'm `fs` modules". If I remember correctly (big if, but I can check with our support logs when I'm back from vacation), the module would be fetched if someone used early versions of browserify on code that had require('fs'). This was fixed by the host platform (ie, browserify, not npm). We took it over so that it could not be used maliciously, and because that is generally how we handle squatting in cases where it could likely become problematic. There are also cases of people depending on fs because they believe they need to.

Most of the packages that depend on fs are relatively low traffic, and in any event, it's harmless to do so. They get an extra file in node_modules, that's all. Since fs was always a core part of node, it hasn't caused any particular upset, except for the 1 or 2 confused people wondering why this "security holding package" still seems to allow them to mess with files.

The meatier issues, if you want to point to them, are: stream, crypto, zlib, and domain. All of these were later additions to the node core module set, and 3 were in use prior to their inclusion in node core. In the case of zlib, the API surface changed somewhat in the process, but was primarily additive. In the case of domain, it was published shortly after the name was "reserved" for core usage, and then renamed to cqrs instead.

The case of stream is interesting as well. It is specifically a polyfill to provide node-style streams in the web browser. In this case, the ability to collide was a feature, not a bug, because you can (in many cases) run the exact same code on the browser and server by bundling this module. (It doesn't look very active any more, so maybe there'd be some interest in bringing it up to date with node's current stream impl? I don't know, you'd have to ask Julian.)

npm's goal is to reduce friction for JavaScript developers, and unexpected collisions are indeed a source of friction. It's not a huge issue, but it's not nothing. If Node.js were to use @node/fs instead of simply fs, then it would be much easier for us to be aware of the existence of collisions, especially if Node.js expands its scope of core functionality. Additionally, this would still leave the door open for future-feature-matching polyfills and trying out experimental modules prior to their inclusion in core.

Furthermore, if the npm client knew that @node is a namespace of core modules, it could check whether the current node version supports that module at that version, and avoid installing it while still tracking that the user depends on it. It would allow the node core team to easily see who is depending on which bits of the platform when they consider making changes or upgrades. It could even help remove dependency hell in the platform itself, since one module and another might depend on different versions of @node/stream or @std/promise, and not be limited to sharing the one provided by the core platform.
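
For illustration, a rough sketch of that client-side check. None of this is real npm code: the @node/ scope handling is hypothetical, Node's module.builtinModules list is used only as a stand-in, and the semver check is left as a comment.

const { builtinModules } = require('module');

// Hypothetical: decide whether a declared dependency needs to be downloaded,
// or is already provided by the running platform under the @node/ scope.
function needsInstall(specifier) {
  if (!specifier.startsWith('@node/')) return true;   // ordinary userland package
  const bare = specifier.slice('@node/'.length);       // '@node/fs' -> 'fs'
  // If the platform ships it, record the dependency but skip the install.
  // (A real client would also check that the running Node version
  // satisfies the requested semver range.)
  return !builtinModules.includes(bare);
}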

Regarding "scope squatting", it's a much smaller problem (but again, not zero) compared with unscoped package name squatting. The problem is much smaller because (a) multiple users and multiple packages can use one scope, so you're not likely to need many of them, and (b) since a scope communicates identity rather than functionality, it's not such a big deal if you have to choose an alternative name. For example, I'm "isaacs" here and on npm, but "izs" on twitter, and enough people assume I'm izs here that I went ahead and registered @izs and had it redirect.

Also, if you're creating a new platform, you only have to register a scope once, instead of figuring out an unused name for each module you create.

So, it's not that we don't care about this serious issue or anything. We know exactly how serious it is, and that's why we're not super concerned. In the rare cases where a namespace collision is actually a problem, it's relatively straightforward to address it.

Regarding the friction created in situations where a module exists both on npm and in node core, there are some things we could do to make this better. (These all exist in some product backlogs, but they're not super high priority, because again, the issues are minor and don't affect very many people.)


tl;dr - Name collisions are an issue, but a pretty small one in practice, and occasionally a good thing for polyfilling and user intent broadcasting. Scope-squatting is a much smaller issue, and avoids the bad parts of module name collisions while preserving the good parts. There are ways that npm could surface this better to users, and we plan to, but since the issues are minor, it hasn't been a high priority.

littledan commented 5 years ago

@domenic Right, the question would be, is the Web OK with closing off potentially adding more fetch schemes over time?

domenic commented 5 years ago

Although I would probably want to consult more widely before being definitive, I think the answer is probably "yes". This is based on the pain of the http: -> https: transition not being something anyone wants to repeat, and subsequent work to ensure that upgrades to the transport protocol all continue to use the https: scheme.

devsnek commented 5 years ago

@littledan

they may be better resolved by working with npm.

if we create issues with an abstract language specification that involve working with specific companies for the foreseeable future to overcome, I think we should rule out those ideas. npm could go out of business or be bought by oracle tomorrow.

zenparsing commented 5 years ago

It seems like there is a fundamental disagreement over whether overloading the @-namespace is a good thing or a bad thing.

The argument for the overload being a good thing seems to rest on the idea that whether a module is built-in or userland is an implementation detail that users shouldn't have to worry about. I agree that this general principle worked to Node.js's great advantage in its early days. With require, userland modules are on equal footing with Node built-ins, and that encourages userland solutions. That's a good thing!

I think it's important to note that whether we go with std: or @std/, userland solutions are still on equal footing, if not slightly advantaged (by virtue of not requiring a scope or scheme at all).

littledan commented 5 years ago

@domenic Your explanation in https://github.com/tc39/proposal-javascript-standard-library/issues/12#issuecomment-449657456 makes sense for not adding more network schemes, but do you think fetch may add more schemes like blob or data in the future? (I don't have any particular ideas for something to add here.) cc @annevk

Mouvedia commented 5 years ago

@littledan ws: and wss:?

isaacs commented 5 years ago

@zenparsing

Well put, but there is one aspect that I think you overlooked.

I think it's important to note that whether we go with std: or @std/, userland solutions are still on equal footing, if not slightly advantaged (by virtue of not requiring a scope or scheme at all).

std: is only on an "equal footing" if users have module name maps to resolve foo:bar to a userland module. In that case, it's just Yet Another way to do it, which doesn't ultimately communicate anything anyway, since it will be used by users, and @foo/bar is entrenched. We already have semicolons and indentation, let's not add more pointless differences to quibble over.

@devsnek

if we create issues with an abstract language specifiation that involve working with specific companies for the foreseeable future to overcome I think we should rule out those ideas. npm could go out of business or be bought by oracle tomorrow.

That is unhelpful snark, and I find it frustrating that you keep returning to this sort of fud. There are 3 "specific companies" making popular web browsers right now. Should we not take their inputs seriously, simply because they could "go out of business or be bought by oracle"?

As I've said earlier, @std/foo isn't "the npm way", and there's nothing preventing anyone else from using it. In fact, some of our competitors in the enterprise package repository space do use it, and several other package manager clients support it as well. It's a convention for module namespaces that the JavaScript community at large has settled on.

Criticizing a suggestion because of its origins rather than its merits is ad hominem noise. Please take it elsewhere.

littledan commented 5 years ago

This thread seems to be getting a bit heated. I think @isaacs and @devsnek have both made their points here.

Pauan commented 5 years ago

@isaacs That is unhelpful snark, and I find it frustrating that you keep returning to this sort of fud.

To be clear, they were not referring to the syntax, they were referring to a requirement to cooperate with npm with regards to name squatting and name disputes (which I agree should not be a requirement for a JavaScript standard).

jdalton commented 5 years ago

No programming language is an island. Cooperation, from multiple parties, is how things get done.

FWIW npm is also an active member of the TC39.

Mouvedia commented 5 years ago

Again,

Are we talking about only one std scope or will this be the foundation of many more to come?

This will determine what will eventually be chosen. More precisely, will the standard library be enough for all target platforms? We already have a hard time filling it with hypothetical modules; won't std just end up as an example implementation of a scope?

tl;dr: is this a skeleton for soon-to-come niche scopes or will it be released with tons of undeniably useful modules? Can we really aim for both?

isaacs commented 5 years ago

@Pauan

a requirement to cooperate with npm with regards to name squatting and name disputes (which I agree should not be a requirement for a JavaScript standard).

I'm not sure what situation would require any JS standards body to be dependent on npm for naming collisions.

If you want to take over a name within npm, sure, you'll have to talk to us. Similarly if you wanted to take over a GitHub namespace or Twitter account.

If the language specification says import { strftime } from '@date/format', then yes, that will collide with the existing @date organization on npm. That just means that adding that to the standard will be disruptive, and is probably not a great idea. You don't need to talk to npm, you need to talk to the owners of that scope, or figure out if maybe there's another namespace that would be better (like, eg, @std/date-formats or something).

But, at the end of the day, if TC-39 decided to add that to the language specification, then anyone doing import { strftime } from '@date/format' would (I presume) get the stdlib version, not the userland version, since the language builtin would trump the userland implementation. (Just as things like fs do today in Node.js.)

Note: this collision issue is exactly the same level of problem no matter what string syntax is chosen, because (a) module maps make it possible for users to send the resolution anywhere, and (b) even without module maps, most JavaScripters transpile their code anyway, and (c) the strong observed desire for consistency will (I predict) result in people using the standard style in their userland code. (For example, see the proliferation of import in Node.js code, where the userland implementation is subtly different from the specification, and the bugs that have resulted.)

There is no standard library module naming convention that will fully reduce the risk of colliding with popular userland programs. Qv the flatten-smoosh debacle from last spring. That's just part of the work.

The real question is not how to get out of talking to npm (or more accurately, talking to npm's users; and this is not advisable anyway, since the overwhelming majority of JavaScripters are npm users), but rather which stdlib module naming convention will be most straightforward and useful for people writing JavaScript programs.

Mouvedia commented 5 years ago

You don't need to talk to npm, you need to talk to the owners of that scope

I have first-hand experience in that matter and I can tell you, without a doubt, that bypassing the owner does happen.

isaacs commented 5 years ago

@Mouvedia

I have first-hand experience in that matter and I can tell you, without a doubt, that bypassing the owner does happen.

I'm not sure what you mean by that.

My point was that the language specification doesn't need the namespace on npm. TC-39 can just clobber it, and nothing in any of the proposals above suggests otherwise, or implies that the language specification would be beholden to npm or any other specific company.

It would of course be nice to be cognizant of userland impacts of specification choices, and thankfully, we have some experience in this regard that can be helpful, but implicit in the fact that it's a "standard library" in the language specification is the fact that it might trump userland implementations. (And, that being the case, polyfilling is much more straightforward if it uses the same naming conventions as userland modules.)

obedm503 commented 5 years ago

going back to suggestions, I would like to propose using some other word instead of import for standard modules. Something like use (inspired by Rust) might work very well and provides the clear distinction from userland modules that is needed. With this proposal string identifiers won't matter. Also, use works better for builtin modules because there is nothing to import over the network or from disk compared to userland.

use { Instant } from 'temporal';

use('temporal').then(({ Instant }) => {
   // ...
});

I imagine dynamic use doesn't need to be treated as syntax like dynamic import is since there isn't a security risk involved in using builtins. BUT, making it syntax would make it a lot easier for transpilers and bundlers to do static analysis and automatic polyfilling.

dynamic use doesn't need to return a promise, since there isn't the network or file system involved. But for the sake of consistency returning a promise makes more sense.

Then there's the question of backwards compatibility for dynamic use since it's not a reserved keyword. This is where I'm not sure what the solution would be. Perhaps using native instead of use would allow for this, but since native is not a verb it doesn't seem to fit as well as use does.

native { Instant } from 'temporal';

native('temporal').then(({ Instant }) => {
   // ...
});

Mouvedia commented 5 years ago

@obedm503 package is also reserved, but since that's not a verb either, it wouldn't be appropriate either.

ljharb commented 5 years ago

at least, not a verb that’s helpful here :-)

glen-84 commented 5 years ago

Note: this collision issue is exactly the same level of problem no matter what string syntax is chosen, because (a) module maps make it possible for users to send the resolution anywhere ...

I don't follow this. Are you suggesting that developers would:

In both cases, that's a developer decision, it's not the default.

There is no standard library module naming convention that will fully reduce the risk of colliding with popular userland programs

How would "std date" or "std:date" collide with userland packages, by default?

My point was that the language specification doesn't need the namespace on npm. TC-39 can just clobber it

Wow. That's the solution? So a new runtime "blinker" (Domenic's example) simply shadows a userland scope that may have been used in thousands of dependent packages for multiple years? So a user of an npm package/scope really has no guarantee that they'll get to keep it?

littledan commented 5 years ago

Note, it's a little hard to introduce new keywords, since they can be used as variables. Recently, TC39 has been a little chilly to the introduction of the sorts of grammar complexity and edge cases that it takes to make contextual keywords work.

obedm503 commented 5 years ago

it's a little hard to introduce new keywords, since they can be used as variables

I am aware of this, and the main reason I'm not sure how to work this out. But it's an idea worth discussing.

A possible solution to the new keyword problem might be a new directive. "use standard"; could be used to make use a reserved keyword. This would also work outside of modules in the case of dynamic use(). But I would understand if TC39 is apprehensive about new directives too.


I also thought of import native (double keyword), native being the thing that differentiates. But I have even less of an idea of how this would work with dynamic imports.

import native { Instant } from 'temporal';

annevk commented 5 years ago

@littledan it doesn't seem out of the question that we would introduce more "local" schemes of some kind (and browsers support more than Fetch lists, though mostly for internal use). (I could see networking happening too if something like the decentralized web ever became feasible.)

isaacs commented 5 years ago

@glen-84

My point was that the language specification doesn't need the namespace on npm. TC-39 can just clobber it

Wow. That's the solution? So a new runtime "blinker" (Domenic's example) simply shadows a userland scope that may have been used in thousands of dependent packages for multiple years? So a user of an npm package/scope really has no guarantee that they'll get to keep it?

I'm not suggesting that a runtime clobbering userland modules is a wise thing to do, or should ever be the first resort. I pointed that out to show that no runtime is, ultimately, beholden to npm, regardless of the naming convention used. The runtime has all the power here, as always.

If a new runtime called "blinker" wanted to use the @blinker scope for their builtin modules, then yes, this would collide with existing userland packages in that scope. It's probably a bad idea to do that, if for no other reason than it would likely reduce adoption of their platform. No one who isn't using blinker would be affected, of course, so browsers and node.js would just keep going on as normal.

However, consider if a runtime for embedded devices or something came out, and called itself "fooblx". They could register the (currently unused) scope on npm, and provide polyfills for their device APIs so that people could test their programs in Node.js or web browsers really easily. That's not a bad thing, it is a good thing. That's not the only benefit, it's just the most obvious counter example to the "oh no collisions!" fud. There's a benefit to runtime designers in this kind of design, because this sort of thing works today.

Using the conventions in practice in a community is a smart and kind thing to do. It opens the door for more innovation.

devsnek commented 5 years ago

@isaacs I think my big overarching point throughout this thread is that if an implementer feels pressured to support the whims of an entity external to the specification in order to have a successful implementation, I think the specification has failed. Like you yourself said, if blinker doesn't deal with the existence of npm (and others) and avoid these issues, they face problems down the road. I don't think a language specification should even begin to go into the territory of having these issues. We should directly avoid designs where these issues come up. To your point earlier about "three large companies making browsers", if google suggested using ajax: because it's a cowpath, I would bring up these same points. This isn't a problem with npm specifically, it's a problem with the larger design.

marcthe12 commented 5 years ago

We have four locations to resolve an import from: URL, local file, native/builtin, and import maps. These four locations need a way to be specified explicitly at times. The location should also be optional, falling back to a standard resolution logic.

Something like this:

import { app } from file './App';
import('./App', file);

import { sin } from builtin '@std/math';

If the location is omitted, resolution would follow some order like: import maps, npm modules, file, builtin.

isaacs commented 5 years ago

@devsnek

if an implementer feels pressured to support the whims of an entity external to the specification in order to have a successful implementation, I think the specification has failed.

Who's talking about the "whims of an external entity"? If an implementer does not recognize the code currently in use by their potential users, then yes, they will not be as successful. No specification choice will change that; it's basic developer product design.

Like you yourself said, if blinker doesn't deal with the existence of npm (and others) and avoid these issues, they face problems down the road.

I did not say that. This is a mischaracterization of my position. I said that if blinker doesn't deal with the existence of userland code in the blinker namespace, then they'll face challenges gaining adoption.

Are you actually suggesting that a successful specification would mean that platform implementors don't have to consider what code their potential users are already writing?

Pauan commented 5 years ago

If an implementer does not recognize the code currently in use by their potential users, then yes, they will not be as successful. No specification choice will change that; it's basic developer product design.

(Note: I am not @devsnek, I'm not speaking on his behalf)

If the specification is designed in such a way that existing user code cannot conflict with the new features, then it works just fine. This has been successfully done many times in the past.

And there have been multiple suggestions on how to implement the JS stdlib in such a way that it doesn't conflict with user code.

Are you actually suggesting that a successful specification would mean that platform implementors don't have to consider what code their potential users are already writing?

When adding new features? Yes.

When new features are added to JavaScript, it's done in such a way that it cannot impact existing code. This is a core part of 1JS and avoiding breaking the web.

And when it hasn't been designed that way, it's caused problems, which then required workarounds like @@unscopables.

I understand Node does things differently, and that's fine. But the way that browser JS has done things is also fine. It's done that way for good reasons.

Any solution must work in all environments, both the browser and Node (and others).

Let's try to look at all the perspectives and options, and not dismiss legitimate concerns by calling them FUD.

isaacs commented 5 years ago

@Pauan

Let's try to look at all the perspectives and options, and not dismiss legitimate concerns by calling them FUD.

When I say "fud", I am not casually dismissing legitimate concerns. I'm using that term to refer specifically to the fear, uncertainty, and doubt around using an "npm-style" namespace simply because npm is a company. Nothing in the proposal is npm-specific. It's the way that people identify and recognize namespaces today, and there are numerous benefits to following that existing cowpath.

I guess, what I'm saying is, I'm dismissing (some of) the concerns, because they are not legitimate.

When new features are added to JavaScript, it's done in such a way that it cannot impact existing code.

There is no approach which guarantees that userland code will never conflict with a runtime's chosen namespace. Qv: Babel's implementation of import. I guarantee you that people would be writing import {readFile} from native 'fs' in no time, because transpilation. We are already in that place.

That being said, of course import {foo} from '@bar/baz' is easier to polyfill than import {foo} from \polywog '::bar::baz', but my argument is that that's a good thing, not a bad thing. The costs are minor, and the benefits are high.

littledan commented 5 years ago

if blinker doesn't deal with the existence of userland code in the blinker namespace, then they'll face challenges gaining adoption.

Isn't this a potential issue if we allow polyfilling of built-in modules at all, regardless of the specifier syntax?

Pauan commented 5 years ago

When I say "fud", I am not casually dismissing legitimate concerns. I'm using that term to specifically refer to the fear, uncertainty, and doubt around using a "npm-style" namespace, simply because npm is a company. Nothing in the proposal is npm-specific.

Fair enough. My concern has nothing to do with npm as a company, it has to do with cleanly separating user and language features, matching the user's expectations.

There is no approach which guarantees that userland code will never conflict with a runtime's chosen namespace. Qv: Babel's implementation of import. I guarantee you that people would be writing import {readFile} from native 'fs' in no time, because transpilation. We are already in that place.

There's a big difference: with your example, that's a polyfill, which is expected to behave according to the spec (and if it doesn't, that's a bug which gets fixed).

On the other hand, using npm-style namespaces conflicts with completely arbitrary user code, which might not behave according to the spec at all. That has significant practical ramifications.

That being said, of course import {foo} from '@bar/baz' is easier to polyfill than import {foo} from \polywog '::bar::baz', but my argument is that that's a good thing, not a bad thing. The costs are minor, and the benefits are high.

I don't really buy that argument. With npm-style namespaces, you have to do npm install @bar/baz, and manage versions, which is a pretty weird way to do polyfills (people don't do npm install fs to install an fs polyfill).

Instead, polyfills would be provided by some sort of plugin (whether that be Babel, Webpack, import maps, whatever). In other words, a mechanism completely outside of the npm package manager. That's how things are done today, and I see no reason why that would change.
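
For example, a minimal webpack-style sketch of that kind of plugin-provided polyfill (the std:temporal specifier and the polyfill path are hypothetical):

// webpack.config.js
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      // Redirect the built-in specifier to a local polyfill module.
      'std:temporal': path.resolve(__dirname, 'polyfills/temporal.js'),
    },
  },
};

The same redirection could just as well be expressed with a Babel plugin or an import map; the point is that it lives outside the package manager.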

Pauan commented 5 years ago

@littledan Isn't this a potential issue if we allow polyfilling of built-in modules at all, regardless of the specifier syntax?

My assumption is that it shouldn't be possible to create an npm package name which contains std:, so there's no possibility of conflict.

In other words, importing std:blinker isn't the same as importing blinker or @blinker, so there's no confusion or conflict, and thus adoption shouldn't be hindered either.

Obviously an application can use import maps (or a Webpack plugin, or whatever) to arbitrarily assign std modules to anything they want, but that's controlled by the application author, not the library author, so there's still no conflict.

littledan commented 5 years ago

It sounds like this actually comes down to how polyfills for built-in modules are distributed and deployed, and how much we want this to be by the library author vs importer. I think this is a somewhat different issue--I can imagine many possibilities here. It just seems like a separate question. For one, we probably want a way to distribute/use polyfills which are not just the ones approved by the owner of the scope.

ljharb commented 5 years ago

You already can add "@standard/somethingNew": "github URL to alternative repo", for example, to package.json, so "control of the scope" doesn't block anyone from being installed under that name (except via the npm registry itself, of course).
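
For illustration, that package.json override might look something like this (the scope, package name, and repository are hypothetical):

{
  "dependencies": {
    "@standard/somethingNew": "github:some-user/somethingNew-alternative"
  }
}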

mcollina commented 5 years ago

Before thinking about the syntax, I think it's best to agree on some of the requirements/open questions that need to be discussed first:

  1. Can the runtime augment the standard library? Or is the standard library only developed/standardized at TC39?
  2. Will there be a "web" library as well, with web specific modules standardized at WHATWG?
  3. Will there be a runtime specific ("chrome", "firefox" or "node") namespace?

I think that answering those questions would help clarify and decide one way or another.

glen-84 commented 5 years ago

@mcollina It's probably best to create separate issues for those questions (if they don't already exist).

littledan commented 5 years ago

These issues are already widely under discussion:

  1. Many people are treating this capability as a requirement. See https://github.com/tc39/proposal-javascript-standard-library/issues/2
  2. Yes, this is under discussion, see https://github.com/domenic/async-local-storage and https://github.com/valdrinkoshi/virtual-scroller
  3. I haven't heard ideas about browser-specific features. I think everyone came away from the previous prefixed approach with the conclusion that it was a bad idea, and that browsers should ship features unprefixed and headed towards an eventual standards track. But I imagine Node.js may continue to add its own built-in modules.

mcollina commented 5 years ago

Many people are treating this capability as a requirement. See #2

I think we are talking about something different here. For example, who owns the standard namespace and the API definitions in it? Node.js could add its own stream implementation under that namespace. WHATWG could recommend that browser vendors do the same for WHATWG streams. Then we have a name clash.

I highly recommend keeping the standard library namespace under TC39, and enabling WHATWG to have its own namespace that it governs. If every platform and runtime can change and extend that standard library, then it's not a standard library anymore.

Shipping features unprefixed is going to create possible name-clash situations. I think it would be best for the JS language to provide a way for runtimes/platforms to avoid such clashes. Or we could disregard the name-clash issue completely and push it to the community.

https://github.com/domenic/async-local-storage is 100% a web specific feature that makes sense only in browser-like environments. Should it go in the standard library? I don't think so. I think the standard library should focus on the language primitives and low-level data structures.

The standard library proposal should enable runtimes to maintain their own namespace, that they can use at will. Something like:

import { Instant } from std 'temporal';
// or
import { Instant } from '@std/temporal';
import { storage } from web "async-local-storage";
// or
import { storage } from "@web/async-local-storage";

Note that the @std/ and @web/ prefixes work the same way.

I haven't heard ideas about browser-specific features--i think everyone came away from the previous prefixed approach with the conclusion that it was a bad idea, and browsers should ship features unprefixed and headed towards an eventual standards track. But I imagine Node.js may continue to add its own built-in modules.

Who owns the namespace? If browsers or Node.js start overriding parts of it, we fracture the community and the ecosystem, and create friction where there should be none. I think runtimes should put their native modules under different namespaces.

annevk commented 5 years ago

Note that if you divide the namespace that way it'll be much harder for the next ArrayBuffer, streams, TextEncoder, or URL API to be used across host languages. Or perhaps it's considered acceptable that there's eventual duplication?

glen-84 commented 5 years ago

I prefer std and web to the actual names of standards organizations. Userland developers should not need to know what tc39 and whatwg refer to (and those might even change?).