Closed SMotaal closed 3 years ago
I can dig it. It makes sense to me :)
Is this issue saying that all builtin modules should become `node:` internally? If so, that's already something we do.
@devsnek — this largely aligns with that (I noticed this a while back but have not checked lately)
I think it takes a little more than the internal reality though, moving forward, to align with ecosystem-wide shifts/proposals.
So this is more of a call to explore and more concretely define such aspects.
@devsnek What do you think about a more complete spec for this scheme/protocol as a way to move forward on open issues elsewhere, maybe even vendor modules?
Such a protocol would concretely specify possible behaviours like the example above:
```js
import 'process' // in ~/app.js
resolve('process', 'node:[no idea yet]/app.js')
```
to extending the usual resolution behaviours for a specifier and its referrer. There are no firm opinions on what that spec needs to say at this point, but imho, having thought about this on many occasions over the span of two years now, how platforms handle this critical but often-punted detail will set the stage for the interoperability story of modules for years to come.
The goal here cannot exclude portability; otherwise we end up actually excluding portability altogether!
I’m still really confused by this issue. All protocols are in the same bucket - the owner of the protocol decides if it makes a network request or hits the filesystem or both or neither. You can use a protocol as a namespace since namespacing is just a concept, and I’m not sure I’ve seen the term “scheme” used in this context except also as a generic concept.
Can you restate the purpose of the OP?
@ljharb I certainly agree with you. Say I decide `node:` behaves like a namespace, and I want that to still resolve relative paths, like `const internalProcess = await import(new URL('./internal/process', import.meta.url))`. For that simplicity to translate, you want the native URL to recognize this scheme as a standard scheme, ie one that follows similar mechanisms to `file:` or `http:`.
The purpose of this OP, is that I think that the time is right now to say that we need to align across venues where such scheme-as-a-namespace or @scope-as-a-namespace will forever affect portability of ECMAScript modules.
I am not here to say I have the answers, more like with everyone (others all being far more qualified than me on many fronts) — I'd like us to write a cohesive story with a little less emoting and more dialogue.
Maybe we should look at loader requirements/patterns for ESM, and that might sway things. While CJS does not parse URLs, ESM does. The current ESM hooks require loaders to return valid URLs. As pointed out above, we already have a `node:` scheme in place internally, so we can expand builtins to valid URLs. If we used `@node/` as a prefix, it would still be converted to a valid URL. A custom scheme would be used because it does not align with any other one (not a file, not http, not email, etc.). I don't think this issue is really just about specifiers inside import/require: a loader wishing to return `fs` would still need to use the custom URL if we want loaders to have an API with a single return type for the resulting specifier. Making loaders expose the already-internal representation for userland usage seems sane, and avoids creating `"node:@node/fs"` as a double encoding of sorts to get to the real `fs` from a loader.
> A custom scheme would be used because of not being aligned with any other ones (not a file not http not email etc.)
@bmeck can we take this a little slower (for my benefit and maybe others)? When we say custom scheme here, we are talking from the perspective of the environment's `URL` constructor, right?
> specifiers inside import/require as a loader wishing to return fs would need to still use the custom URL
I am not sure I follow exactly (maybe a bit), so I think we need to work through a few examples (maybe gists) to understand the possibilities more closely.
> Making loaders expose their already internal representation
I don't think that is necessarily the outcome here; they could be separate. I'm thinking about compartmentalized module keys having potentially more than one mapped identifier in nested realms.
@bmeck… Can you propose a way to structure some efforts around what you've stated please? I am sure I'd want to do some work here (others too as well).
@smotaal
A variety of things happen for the non-special schemes, including that same behavior for non-relative URLs, such as with `data:`, `blob:`, `std:`, etc.
See:

```js
// Both bases below have opaque paths, so neither of these
// constructions resolves './a'; both throw instead:
console.log(new URL('./a', 'data:text/html;'));
const blob_url = URL.createObjectURL(new Blob([]));
console.log(new URL('./a', blob_url));
```
The behavior does not mean that browsers cannot handle the scheme, just it isn't special cased like http and file.
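To make the contrast runnable (assuming only the WHATWG `URL` implementation that Node ships; the `node:/fs/` base is contrived purely for illustration): special schemes resolve relative references, opaque-path bases reject them, and a non-special scheme with a hierarchical path resolves them again.

```javascript
// file: is a special scheme, so relative resolution works:
console.log(new URL('./a', 'file:///tmp/app.js').href); // file:///tmp/a

// node:fs has an opaque path (no slash), so a relative
// reference against it throws, just like the data: case above:
try {
  new URL('./a', 'node:fs');
} catch (err) {
  console.log('opaque base rejects relative references');
}

// Give the non-special scheme a hierarchical path (contrived base)
// and relative resolution works again:
console.log(new URL('./a', 'node:/fs/').href); // node:/fs/a
```

So "non-special" only means the scheme gets no host/path special-casing; it does not by itself rule out relative resolution.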
I'm not sure I understand the request for structuring efforts. I'm mostly just looking at how the current loaders already use a scheme, as @devsnek points out, and how if we do something else we will likely still be using a custom scheme internally, so why not just expose it rather than having an internal vs user-facing representation. There isn't really much effort to be had here.
An example of why loaders care is to ensure we can have user provided loaders always point to the builtins. Doing so requires a well known string for the built-in, and loaders as they exist currently work on URLs so they want a valid URL.
@bmeck I am thinking a little less emphasis on what node needs to do so that node-specific code works for cases of builtins only… There is more to explore in considering this direction.
We have package exports, we also have the legacy resolution protocols and experimental ones (regardless of if they were meant to be considered in this more formal capacity) — all of which are things that may or may not be addressed here.
Buy-in from all the players comes in the form of equal opportunity relative to unique complexities and requirements, because if we have a really ideal `node:`, `js:`, or `std:` in isolation, any JS developer will be at the mercy of rewriting unless `importmaps` are supported (but that is not true innate isomorphism), but if not that then… etc. All those novel ways are cool, but without a decent way to stay portable without knowing which one applies, they are all potentially useless.
Few things are truly portable tho, because every environment provides different privileges around things like fs/network/timing/threading/etc.
@ljharb… true. So the example I think of, if I get this right: I use an `importmap` (or similar) in a contrived future where I map `fs` to `node:fs|std:fs|undefined` …
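In that contrived future, the mapping might look something like the fallback-array form from the early import-maps/built-in-modules drafts (illustrative only; as noted later in the thread, shipped import maps dropped fallback arrays, and the `std:` scheme never shipped):

```json
{
  "imports": {
    "fs": ["node:fs", "std:fs"]
  }
}
```

The array expresses ordered fallbacks: resolve `fs` to `node:fs` where the platform supports it, otherwise to `std:fs`, otherwise to whatever the agreed-upon last-resort behaviour is.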
What I am trying to get at is to distinguish specifier portability from portable code — a portable specifier is one that does not unintentionally point to the wrong module.
In the first two scenarios, we assume only that a platform-specific module identifier must only ever lead to the outcome we expect. Ie, if a different platform, not a browser and not bound to the browsers' `std:` specs, decides to make `std:fs` mean something with completely different semantics, then ECMAScript imho is failing us here, not the browsers nor the implementer who obviously pushed things just to make a point.
So the point here is that while each prefix and platform can design their own schemes and protocols, ECMAScript specifier behaviours that dictate a `node:` prefix or a `std:` prefix must either:

And with this guarantee, you know that if a platform does not support `node:` or `std:`, it will need to support `undefined`. There are many views on what that could be; imho the actual value `undefined` is the only one that would be a really bad idea. My thinking is that the undefined module is a specifier-less module which, in very specific cases, is the fallback where any imported name binds to `undefined` and `*` binds to the single empty namespace instance.
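Not real module semantics, but the described shape can be sketched as a plain-object stand-in (entirely hypothetical; actual module namespaces are exotic objects that userland code cannot construct):

```javascript
// Hypothetical stand-in for the "undefined module" described above:
// any named import reads as undefined, and the namespace exposes
// no export names at all.
const undefinedNamespace = new Proxy(Object.create(null), {
  get: () => undefined,                    // every binding reads as undefined
  has: () => false,
  ownKeys: () => [],                       // `import * as ns` would see an empty namespace
  getOwnPropertyDescriptor: () => undefined,
});

console.log(undefinedNamespace.readFile);      // undefined
console.log(Object.keys(undefinedNamespace));  // []
```

A real version of this would need to live in the spec's module-record machinery, since live bindings and namespace objects cannot be faked faithfully from userland.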
I'm a bit confused by this thread, to be completely honest. Something like `nodejs:@nodejs/fs` is extremely verbose and seems like the worst of both worlds. I don't see why our decision to use `node:` internally should have any effect on the discussion. This is not an external API and we can freely change it internally without any issues... if we choose to expose it externally, that is great; if we choose to have a different mechanism externally, great... it seems like considering it in the discussion is the carriage driving the horse.
I don't see this as schemes vs namespaces but rather schemes as namespaces. This is the direction the ecosystem is moving... which means that at some point in the future we are going to need to support `js:builtin`... and if node has a different mechanism, I think that will be quite odd.
I'm strongly -1 for `@nodejs/` at this point.
What is the desired outcome of this thread?
I am not doing a good job addressing the same questions immediately here… I need to step away and get back to this most likely for next meeting.
Do what the web is doing... use `std:` for native core node module stuff, so that if/whenever you implement similar stuff like kv-storage it will be seamless to use without any platform-specific code.

Support importing from URLs too; that's how the web & Deno work as well.

For npm stuff... I think it should start serving as a CDN; node should cache the file locally (like Deno) and use it from there on. Importing npm modules from the web does not work...

So I would not want to see any `@nodejs` or `node:@nodejs`, or anything starting with npm either, for that matter. Stick to the web specification.

`std:x` would be nice to see.
The web is no longer doing that as part of import maps. Separately, “std” is not a good name from a bikeshedding perspective, and it’s not what TC39 would be going with either.
@jimmywarting I wanted to ask if you've rolled out code with `std:kv-storage`, not necessarily in production, but at least with fallback behaviours for SF/FF (evergreen).
> @jimmywarting I wanted to ask if you've rolled out code with std:kv-storage not necessarily production but at least with fallback behaviours for SF/FF (evergreen).
No, I haven't; I have not even tried kv-storage.

I didn't know TC39 stopped using it either. I just want stuff to be backed by some specification, instead of inventing something that later becomes node-specific and not cross deno/web/node compatible.
Fwiw I'm working closely with the champions of the built-ins proposal at TC39 and will be working to ensure what we do in node aligns with the larger web platform.
I'm not 100% sure but if this thread is talking about node namespacing imports, discussion should probably go here: https://github.com/nodejs/node/pull/21551
Just to clarify, those two threads are related, this one was revived from older issues opened in the Modules repo.
Closing as we have shipped the `node:` scheme.
A while back in #222 and #169 we talked about the idea of having some prefix.
Recent discussions about prefixes have led to a polarizing debate between "prefix" and "namespace" notations — for things that are not necessarily the same thing.
- `file://…`: if the implementation supports the scheme with the respective resource location and access protocols of that spec (or people file bugs)
- `https://…`: if the implementation supports the scheme with the respective resource location and access protocols of that spec (or people file bugs, and potentially lawsuits)
- `mailto:`: if the implementation supports the scheme; I don't think a good implementation wants this to lead to a network request by the browser context itself

If you have a scheme, it can have zero network transport and yet an implementation can make it very, very meaningful to one loader.
You see this with blobs today — they are memory things — they cannot extend beyond the lifespan of the context, and they are even subject to CSP, so that other browser contexts don't see yours.
The Point
Can we not say the above is implicit? If in node the default scheme is `node:`, then specifiers are all of that scheme unless they are not.

What about this:
Is it not just like:
Would that not lead to a balance between all the things we care about ecosystem-wise, while still balancing with the newer notions others are exploring a decade later?
Thoughts?