Closed: arendjr closed this issue 2 years ago
We don't rewrite target-legal JavaScript code to be different JavaScript code from what it started with, no matter the configuration.
See also https://github.com/microsoft/TypeScript/issues/16577#issuecomment-754941937
I’m sorry, but I cannot understand why you would label this issue as Out of scope, or why you would dismiss it based on a comment from almost one and a half years ago.
You argue “we don’t rewrite target-legal JavaScript code”. But this argument does not apply. If someone wrote import { myFunction, type MyType } from "./myFile.ts", which I believe to be a very reasonable expectation from a user’s point of view, and you would emit that with the ".ts" extension intact, the resulting code would be invalid because that target doesn’t exist. You are building the compiler; you know that target doesn’t exist. So you wouldn’t be rewriting valid code, you would be rewriting code you know would be invalid if you didn’t rewrite it. I hope you’re not arguing it’s better to let the user shoot themselves in the foot.
Furthermore, the comment you linked depends on the arguments presented in yet another comment: https://github.com/microsoft/TypeScript/issues/16577#issuecomment-703190339
If you read the arguments there, this is clearly a different situation than what we are facing today. My issue is with the change in the new module resolution for ESM introduced in TypeScript 4.7. This change forces users to specify the .js extension. Compare this to the “facts” presented in the comment I linked, which clearly apply to a situation in which the user is free to choose whether they want to include the extension or not. This is not the situation with the ESM resolution in TypeScript 4.7.
Back then, users could improve compatibility by adding the extension, but the way they are now forced to use it compromises compatibility, as I attempted to explain. None of this did you address, and none of it is addressed by any of the linked comments.
As such I would kindly ask you to reconsider the implications of the new module resolution, including the impact on developers and the wider ecosystem.
I don't think there's anything new to discuss here; the strong line we've drawn in the sand is that we do not rewrite import paths, ever. Our design principle, consistently, is that you write the import path that you want to appear in your JS file, and configure TS to tell it how to turn that path into a build-time path. Doing it differently is out of scope for our project.
This is a hard constraint per design goal number 7, "Preserve runtime behavior of all JavaScript code." import { foo } from "./x" is JavaScript code. Granted, import { foo, type bar } ... is not, but I think it's clearly way too confusing to have one runtime path appear if you put the type keyword on a single named import and a different path if you don't.
Nothing about node16 resolution changes this. Our principle would even be the same if some new module resolution system appeared where you wrote the sha256 hash of the file you wanted -- we would have you write import foo from "ba932c1fd3..." and then have a system to map that to foo.ts, not have you write import foo from "foo.ts" and then emit some JS other than what you wrote.
If you have some environment where you sometimes need to have JS that says import foo from "blah.js" and sometimes need to have JS that says import foo from "blah", that is also out of scope for our project, the same as if you needed the same input line of TypeScript to sometimes emit 3 * 4 and sometimes emit 3 + 4. There are tools to handle these problems and it's out of scope for us to re-implement solutions to those problems.
There are surely difficulties in navigating the module ecosystem, no one can deny that. What I'm saying is, rewriting import paths is not a solution we consider to be in-scope. We're always interested in hearing ideas that can simplify the module resolution process and improve the user experience, as long as they fit within our design criterion of not rewriting import paths.
Thanks for your reply. I understand I’m getting at some pretty fundamental issues here, so I try to navigate this as delicately as I can.
First, the new thing I mentioned is the node16 module resolution mechanism that comes with TypeScript 4.7. Unlike previous module resolutions supported by TypeScript, this forces users to use the .js extension in their paths. While this may be a consequence of decisions that were made earlier, it does exacerbate the issues that result from those decisions. If nothing else, I believe this may present a valuable moment to reflect on those decisions, to see if they still achieve the goals you set out to achieve.
I also believe the timing is important here. So far, and as far as I’m aware, TypeScript users have mostly avoided including extensions in their paths, and for good reason: including them makes their code less compatible with existing bundlers used in the ecosystem. One notable exception here are Deno users, who include the .ts extension, in a resolution mechanism that follows the ESM resolution in spirit.
If TypeScript 4.7 is released with the node16 setting as it’s implemented today, I’m afraid this may introduce a further schism between the various user groups. This is unfortunate, because node16 is presented as the way to do ESM with TypeScript, but to use it users will be forced to move away from their current practices in a manner that is incompatible with their current tools. It will become harder for a single codebase to support Node.js and bundlers, while there is no technical reason for this to be so.
So while there’s no technical reason for this outcome you admit to be unfortunate, you do present one based on principle: You say you “do not rewrite import paths, ever” and back this up with the design goal that says to "Preserve runtime behavior of all JavaScript code."
So let’s reflect on this. Because if these principles/design goals will inevitably lead us to face such an unfortunate outcome, is it possible the design goal itself is in need of reconsideration? I would not necessarily go that far, but I do think your proverbial line in the sand to “not rewrite import paths, ever” is an unnecessarily strict interpretation of the design goal.
Let’s revisit my original request, and let’s simplify it to focus on the core issue:
import { myFunction } from "./myFile.ts"
Is this valid JavaScript code? It’s certainly syntactically valid. But if "./myFile.ts" does not exist or contains syntax that is not valid JS, it’s certainly not functional JavaScript code. Does it make sense to wish to preserve the behavior of non-functional code?
But let’s take it a step further: How is this code supposed to behave? Again I will refer to Deno, because it defines actual runtime behavior for this that is in line with the ESM specification in spirit. But here’s the rub: Deno provides a TypeScript runtime, not merely a JavaScript one. So this code is valid TypeScript code, even if it is not functional JavaScript code.
Let’s look at that design goal one more time: "Preserve runtime behavior of all JavaScript code."
The behavior is clear. Just run it in Deno and see. But interestingly, it is the TypeScript code emit that breaks this behavior! Because after the code emit, the file it referred to is no longer there.
I know you will defend this by saying the path is intended to be an output path, but how does that follow from your design goal to not break behavior? If anything, this intention feels more like a post hoc rationalization to defend the “do not rewrite import paths, ever” position than a useful part of the design. It’s certainly not a design goal.
Now, I realize there are no easy answers here. But from my perspective, the design goal to "Preserve runtime behavior of all JavaScript code" would be served by rewriting import paths if the behavior would otherwise be broken by your own emit process.
The statement import { myFunction } from "./myFile.ts" is valid TypeScript (even functional JavaScript under very strict circumstances) whose behavior would be broken if you do not rewrite the path during its emit phase.
While it is your prerogative to define the scope of your project, in my opinion it is a stretch to claim an issue introduced by your own process (it’s inevitable that paths break when you write files to a new location with new names) as out of scope.
Finally, I would like to come back to the hypothetical example you raised:
Our principle would even be the same if some new module resolution system appeared where you wrote the sha256 hash of the file you wanted -- we would have you write import foo from "ba932c1fd3..." and then have a system to map that to foo.ts, not have you write import foo from "foo.ts" and then emit some JS other than what you wrote.
What I find interesting here is that you say you’re willing to build a system to do the mapping, but you’re not willing to help the user writing the code. It seems you hold stronger to the notion of not rewriting import paths than to preservation of behavior or the notion of creating value for your users. We both have the same design goal in mind (to maintain the behavior of the code), but to me this shows your “do not rewrite import paths, ever” position is untenable and it feels not in line with what I believe would be the best interests of the project.
I hope I have not offended with this post. I raised this issue because I believe it would be harmful to the TypeScript community if segments of the community have to use incompatible module resolution schemes, which the node16 resolution as it stands further forces users towards. Instead, I hope that through a relatively simple change we can make people’s code more easily interoperable. Even if that change means a more fundamental reinterpretation of the original design goals.
As I fear this conversation might otherwise go entirely stale, I would like to offer one more argument as to why I think the “do not rewrite import paths, ever” position is not just undesirable, but actually runs counter to your own design goals.
Let’s look at another variation of the example we discussed so far:
import { myFunction, MyType } from "./myFile.ts"
Let’s assume MyType is not a class, but a TS type that gets erased during emit. What would the output be? It would become:
import { myFunction } from "./myFile.ts"
That’s interesting, isn’t it? The original snippet was 100% syntactically correct JavaScript. And yet it was rewritten as part of the emit process.
Of course rewriting it was the correct thing to do. Without rewriting, the emit process would have yielded output the JavaScript runtime would have trouble resolving. So we must rewrite JavaScript in order to preserve behavior.
Please explain to me how the import path is different. Sticking to “do not rewrite import paths, ever” is leading to output the JavaScript runtime will have trouble resolving — the exact same behavioral issues that you are willing to solve elsewhere. Issues you must solve in order to adhere to your own design goal of preserving behavior.
Of course you could argue that MyType refers to a TypeScript type, so you actually rewrote TypeScript rather than JavaScript. But in that case, please explain to me how the import path is different. "./myFile.ts" refers to a TypeScript file, so you would rewrite TypeScript rather than JavaScript.
Rewriting paths to TypeScript files in order to preserve behavior is compatible with your original design goal. To “not rewrite import paths, ever” is not.
It feels like many of these problems would be solved if TS enforced the use of file extensions in import/export paths. Not .js, but the actual extension, e.g. .ts or .tsx.
Then it's up to the transpilation process to output code compliant with the specified module and target. If that means changing .ts to .js, then so be it.
(What if tsc eventually gets support for a --out-file-extension=.mjs flag? Then those .js in the paths won't work anymore.)
Technical aside: this issue makes the DX worse for everyone. It doesn't make any sense for a relative import to refer to a non-existent .js file. Off the top of my head, to circumvent this problem, the developer needs to rely on other tooling (Deno, ts-loader, ...).
I think some of the issue stems from this:
is that you write the import path that you want to appear in your JS file, and configure TS to tell it how to turn that path into a build-time path.
As a user, I see the TypeScript compiler taking valid TS code from the “src” realm and converting it to valid JS code in the “build” realm.
When I first heard about the “.js” extension, it broke what I understood to be the separation between “src” and “build” - I wondered if I’d need to supply the full path to the output .js in the source file, or otherwise adjust it based on my knowledge of where the .js file would end up.
If only there was a way to put this to vote!
@RyanCavanaugh’s argument, which I believe represents that of the TypeScript team, doesn’t seem sustainable in the long run. If a community of other engineers says this is what they think is best, then it had better be given strong consideration, rather than being labeled “out of scope” just for the sake of being conservative. Unless TypeScript was made only for the TypeScript team.
The system is changing! If some principles need to be reviewed as the system changes, to better support and adapt to those changes, then so be it. Otherwise the system shouldn’t change and the fundamental principles can be kept.
Moreover, @arendjr’s arguments, in summary, have not even changed the principles; they have only broadened the short-sighted interpretations and undue strictness of these principles.
Oh, and I think it is being put to a vote already, seeing how many reactions are already in favor of @arendjr’s opinion.
I hope the TS team reevaluates their stance, the points made in the OP definitely make a lot of sense
Technical aside: this issue makes the DX worse for everyone. It doesn't make any sense for a relative import to refer to a non-existent .js file.
As @s123121 said, the DX currently is horrible. If you're using anything other than TSC for transpiling *.ts files, node16 is unusable, because other tools (e.g. Parcel, Jest, etc.) don't understand why you are trying to import a non-existent file.
Another argument about file extension:
The purpose of the file extension is to explicitly state which file to resolve to.
Under type: module, CommonJS files should have the extension .cjs, while ESM files have the extension .mjs.
Which means, if I am writing the code in TypeScript, and have two tsconfigs to create those two outputs, then by definition tsc must rewrite the file path during compile time, regardless of whether I specify the file extension in TypeScript, and whether I specify it as .ts, .cts, .mts, .js, .cjs, or .mjs.
Currently, the following is still not possible for TypeScript packages:
"exports": {
"import": "./esm/index.mjs",
"require": "./cjs/index.cjs"
}
For anyone who stumbled upon this: if you want .cjs output files, use .cts input files; likewise, if you want .mjs output files, use .mts input files. We've landed in a very nice place where there is a 1:1 mapping between input file extension and output file extension (excepting tsx), which is very very helpful for a number of aspects in tooling (like actually being able to find declaration files for arbitrary code without extra configuration).
Which means, if I am writing the code in TypeScript, and have two tsconfig to create those two outputs, then by definition tsc must rewrite the file path during compile time,
Keep your two tsconfigs with two outDirs, use .js extensions everywhere, in addition to a root package.json with type: module, put a sub-package.json in your esm output folder specifying type: module, and one in your cjs output folder specifying type: commonjs. Run your cjs build with module: commonjs, and your esm build with module: nodenext. No extension rewriting required, .js everywhere. The two separate builds are important because they're highly liable to typecheck differently - the esm and cjs modules will import different things (this is a feature in node, not our behavior), and writing a single source codebase that typechecks in both is actually incredibly difficult in practice, so I wouldn't recommend it. It's much easier to write a single source (mostly cjs but using module: nodenext) module and hand-write specific cjs and esm entrypoints as needed (or only write esm).
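To make this recipe concrete, here is a minimal end-to-end sketch of the layout being described. The package name, file names, and file contents are invented for illustration; the two index.js files stand in for the outputs of the two tsc builds:

```shell
# Sketch of the nested-package.json layout described above (names invented).
# The root package.json declares the package and its conditional exports;
# the sub-package.json files only tell Node how to parse .js files per folder.
mkdir -p demo-pkg/esm demo-pkg/cjs

cat > demo-pkg/package.json <<'EOF'
{
  "name": "demo-pkg",
  "type": "module",
  "exports": {
    "import": "./esm/index.js",
    "require": "./cjs/index.js"
  }
}
EOF

# Sub-manifests containing nothing but the "type" field, as suggested above.
echo '{"type":"module"}'   > demo-pkg/esm/package.json
echo '{"type":"commonjs"}' > demo-pkg/cjs/package.json

# Stand-ins for the two tsc outputs (both plain .js, no extension rewriting).
echo 'export const flavor = "esm";' > demo-pkg/esm/index.js
echo 'exports.flavor = "cjs";'      > demo-pkg/cjs/index.js

# Node parses each file according to the nearest "type" field.
node --input-type=module -e 'import("./demo-pkg/esm/index.js").then(m => console.log(m.flavor))'
node -e 'console.log(require("./demo-pkg/cjs/index.js").flavor)'
```

Note that the same .js extension works in both trees; only the folder-level "type" metadata differs.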
Keep your two tsconfigs with two outDirs, use .js extensions everywhere, in addition to a root package.json with type: module, put a sub-package.json in your esm output folder specifying type: module, and one in your cjs output folder specifying type: commonjs
Um... that doesn't work in practice, AFAIK.
I was planning to migrate my packages to type: module.
Originally, I was planning to have a clean cut of migrating from CommonJS to ESM.
i.e., from:
{
"main": "./lib/index.js"
}
to:
{
"exports": {
"import": "./lib/index.js"
}
}
I completely subscribe to that idea, and understand that is the way to go as:
the esm and cjs modules will import different things
But what I have found is that when I publish the package like that, it breaks everywhere, because of the interconnected dependencies and the lovely Node module resolution algorithms.
So the migration needs to be done in two phases:
First, migrate to:
{
"exports": {
"import": "./esm/index.js",
"require": "./cjs/index.js"
},
"main": "./cjs/index.js"
}
And make sure everything works (I have 100+ open source packages).
Then, they can be migrated to ESM only packages.
Having two package.json files doesn't really work when you are talking about publishing to NPM, unless you abandon the original package, i.e. package-a -> package-a-cjs and package-a-esm.
Having two package.json doesn't really work, when you are talking about publishing to NPM, unless you abandon the original package.
Nested package.json files within a package work fine? The sub-ones don't even have to contain anything other than the type field to provide resolution metadata for node itself. They're not for declaring two separate packages - they're for declaring the format of the .js files within those folders. There's still a single top-level package.json for the package itself.
And make sure everything works (I have 100+ open source packages).
Then, they can be migrated to ESM only packages.
Dropping cjs support is still definitely going to be a major-version-bump change, even if adding dedicated esm entrypoints can be done in a minor.
There's still a single top level package.json for the package itself.
That's interesting. Maybe something I need to look into.
Dropping cjs support is still definitely going to be a major-version-bump change, even if adding dedicated esm entrypoints can be done in a minor.
Yes, definitely.
@weswigham Could you please elaborate on the Duplicate label you added? I see no reference to any issue this would be a duplicate of, nor am I aware of any such issue. Of course the generic topic of rewriting import paths has been discussed before, but certainly not in this context.
As an aside, I sorted your issues by number of upvotes and it appears that in three weeks time, this issue has become (with a wide margin) the most upvoted issue of the past year. If this were truly a duplicate, it seems the original was not resolved well.
Of course the generic topic of rewriting import paths has been discussed before, but certainly not in this context.
Duplicate of #16577 - "this context" doesn't really meaningfully change anything - if anything it makes rewriting extensions even less appealing by the logic we've stated before, since now we have multiple input extensions that already mean multiple output extensions.
We're not trying to somehow make node's new behaviors more palatable for people - that's node's fault, not ours. Our philosophy has been, and will be, that we don't semantically change your code during emit, and rewriting imports most certainly is a semantic change (a semantic change that, as I've stated elsewhere, we could never get right 100% of the time thanks to dynamic import operations).
If you really want omitting extensions to be valid node esm, (as it is in cjs) I suggest a complaint about the new node behavior to require extensions over at nodejs/modules#323 or on the node repo proper, rather than here, which is pretty much invisible to the node maintainers - as whatever node supports, we will follow suit.
@weswigham I'm sorry, but it appears you do not entirely understand what we're asking for here.
Our philosophy has been, and will be, that we don't semantically change your code during emit, and rewriting imports most certainly is a semantic change
If that is truly your philosophy, you've surely been very selective in following it. Rewriting ESM to CommonJS is by itself a semantic change, yet you have no philosophical issues implementing it. The esModuleInterop configuration option semantically alters how the code should be generated, yet you have no qualms with that.
(a semantic change that, as I've stated elsewhere, we could never get right 100% of the time thanks to dynamic import operations)
We're not asking for dynamic imports to be supported here. Raising that as an issue is a strawman argument, because dynamic imports themselves are fundamentally different from static imports, and we can understand those to be out of scope. Hence why that's not what's being asked for here.
If you really want omitting extensions to be valid node esm
Once more, I'm not asking for the omission of extensions, I'm asking for the rewrite of extensions in the exact way that is compatible with how your build process already rewrites extensions. This is not a duplicate of the issue you reference, it's a different request.
If you really want omitting extensions to be valid node esm
I'm not sure anyone is asking for this. We're talking TypeScript, and at least I don't consider my TS code to be valid esm/cjs until it's transpiled. Sorry deno land folks (although I would expect .ts to work).
// utils.ts
export const sum = (a: number, b: number) => {
return a + b;
};
// index.cts
import { sum } from './utils';
console.log(`CJS 1+2=${sum(1, 2)}`);
I'm not considering index.cts to be valid cjs until it's transpiled, and obviously it's not, since it doesn't use require. But still it works and transpiles properly.
// index.ts
import { sum } from './utils';
console.log(`CJS 1+2=${sum(1, 2)}`);
Just the same, I would expect
tsc --module CommonJS --outDir dist/cjs src/index.ts
to output valid commonjs, and
tsc --module NodeNext --outDir dist/esm src/index.ts
to output valid esm.
And I would gladly add .ts extensions if that made things less ambiguous.
Also it's quite unintuitive, or confusing, that package.json#type overrides/changes the behavior of tsconfig.json#compilerOptions.module. I thought there was a bug in tsc until I realized I had to use type = module to get anything other than commonjs (4.7.x). But type = module breaks lots of other stuff in my projects, so I'm not really ready to change that just yet.
I want to share with you why asking TypeScript to emit the right file extension is critical:
Error [ERR_REQUIRE_ESM]: require() of ES Module D:\code\unional\standard-log\node_modules\type-plus\cjs\index.js from D:\code\unional\standard-log\packages\log\lib\captureLogs.js not supported.
index.js is treated as an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which declares all .js files in that package scope as ES modules.
Instead rename index.js to end in .cjs, change the requiring code to use dynamic import() which is available in all CommonJS modules, or change "type": "module" to "type": "commonjs" in D:\code\unional\standard-log\node_modules\type-plus\package.json to treat all .js files as CommonJS (using .mjs for all ES modules instead).
I have updated type-plus to type: module with dual support, i.e.:
{
"type": "module",
"exports": {
"import": "./esm/index.js",
"require": "./cjs/index.js"
},
"main": "./cjs/index.js"
}
Just like how it is mentioned in the TypeScript 4.7 Beta Announcement.
As you can see, when it is referenced from CJS code two levels deep, it doesn't work.
What happens here is this:
1. Something imports standard-log (doesn't matter if it is CJS or ESM)
2. standard-log is CJS
3. It calls require('type-plus')
4. Node.js resolves that to node_modules\type-plus\cjs\index.js
5. The file extension is .js, so Node.js does not know whether it is CJS or ESM
6. Node.js looks up the nearest package.json (type-plus's package.json)
7. That package.json has type: module, so the require() call fails
Cross reference: https://github.com/nodejs/node/issues/34515
(which in turn references the quoted #16577)
Which mentions @weswigham's idea:
The sub-ones don't even have to contain anything other than the type field to provide resolution metadata for node itself.
$ mkdir ./cjs ./esm
$ echo '{"type":"commonjs"}' > cjs/package.json
$ echo '{"type":"module"}' > esm/package.json
$ git mv index-cjs.js cjs/index.js
$ git mv index-esm.js esm/index.js
But that means your build step now has to be something like this:
// pseudo package.json
{
"scripts": {
"build": "npm-run-all build:cjs build:esm",
"build:cjs": "tsc -p tsconfig.cjs.json && echo '{\"type\":\"commonjs\"}' > cjs/package.json",
"build:esm": "tsc -p tsconfig.esm.json && echo '{\"type\":\"module\"}' > esm/package.json"
}
}
where
// tsconfig.cjs.json
{
"compilerOptions": {
"outDir": "cjs"
}
}
// tsconfig.esm.json
{
"compilerOptions": {
"outDir": "esm"
}
}
I feel like less than 0.001% of people writing TypeScript know that they have to do this.
Also, I just noticed one thing.
If the "two folders, two sub-package.json files" approach is the recommended and only working solution, what do we actually gain from Node16 and NodeNext?
Couldn't we just use module: ESNext and moduleResolution: Node and achieve the same thing?
With that, we don't need to add the phantom (or to-be-true-after-compile) .js extension and things will just work.
But that means your build step now has to be something like this
I mean, it's easier to add the package files to the outdir once (and add them to version control - you can still gitignore the outdirs after committing the package files so actual outputs aren't added by git add .) rather than recreate them on every build.
While NodeJS supports ./cjs/package.json: { "type": "commonjs" }, it is a workaround with a significant runtime performance hit.
Now, instead of Node.js knowing immediately whether the file should be parsed as CommonJS or Module based on whether the file extension is .cjs or .mjs, it has to do a find-up('package.json'), open that file, and read and parse it, just to figure out how to parse the original file.
IMO it is critical for TypeScript to be able to output files with the right extension based on the tsconfig.json.
I mean, it's easier to add the package files to the outdir once
btw, this still doesn't work well, because you should clean your outdir from time to time, e.g. when you delete, rename, or move source files. So you still need custom code to recreate them.
Here is what I have to do:
{
"scripts": {
"clean": "rimraf cjs esm coverage lib libm && mkdir cjs && echo '{\"type\":\"commonjs\"}' > cjs/package.json || true",
}
}
So, I might as well create the cjs/package.json after tsc -p tsconfig.cjs.json. At least that is cleaner.
This issue has been marked as a 'Duplicate' and has seen no recent activity. It has been automatically closed for house-keeping purposes.
Based on the number of votes and activity on this issue, I don't think it should have been closed.
@weswigham Would you mind reopening, please?
@unional wrote:
While NodeJS supports ./cjs/package.json: { "type": "commonjs" }, it is a workaround with a significant runtime performance hit.
This is actually a great argument, because it means the suggested work-around is in violation of design goal 3 which states that emitted code should not incur runtime overhead.
At a more fundamental level, there’s the issue that by forcing TypeScript users to use output paths, the language embeds the assumption that users will only ever use a single output target. We already realized this is undesirable if besides Node you also want to target Deno or bundlers, but now it seems it also prohibits compatibility between ESM and CommonJS outputs without performance overhead.
This is actually a great argument, because it means the suggested work-around is in violation of design goal 3 which states that emitted code should not incur runtime overhead.
It doesn't really create extra runtime overhead - the extra overhead was needing to probe the location for a package.json at all, which node has to do all the time anyway now, since it has to read the potential package for not just type, but policies, exports, and imports. You should feel free to use as many package.jsons as needed without worrying about the performance - node does a lot of caching on the lookup of them, as do we.
the language embeds the assumption that users will only ever use a single output target.
Multiple output targets mean multiple builds, simple as that. You gotta accept that if you want multiple outputs made from a single input, you're stuck using the common subset of features supported by each of those outputs, which, unfortunately, is a very small subset with node esm.
You gotta accept that if you want multiple outputs made from a single input, you're stuck using the common subset of features supported by each of those outputs, which, unfortunately, is a very small subset with node esm.
This is true, but the problem that everybody who attempts this keeps running into is that the extensions are incompatible. That is the main blocker that is making this subset so small. And all that is needed to fix it is letting TypeScript officially support the .ts extension, so that all builds, all targets, and all tooling have a common frame of reference. By refusing to support this, you are responsible for causing this friction between various tools, as you are the only ones who can fix it.
That’s what this issue is about. It isn’t about asking TypeScript to magically convert paths from one target to another, or about conjuring up extensions that were omitted. It’s about recognizing the .ts extension (same as #37582) and (within a very specific scope) rewriting the extension in import paths from .ts to exactly whatever extension your emit process writes the output file to, as you should in order to preserve behavior.
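For concreteness, the narrowly scoped rewrite being requested could be sketched as below. This is hypothetical (rewriteSpecifier is an invented name, and this is not TypeScript's behavior); the table mirrors the 1:1 input/output extension correspondence mentioned earlier in the thread:

```typescript
// Hypothetical sketch of the requested rewrite: map an explicit TS input
// extension in a relative specifier to the extension the emit process
// actually produces. Longer extensions are checked first so ".mts" is not
// mistaken for ".ts".
const outputExtension: ReadonlyArray<[string, string]> = [
  [".mts", ".mjs"],
  [".cts", ".cjs"],
  [".tsx", ".js"],
  [".ts", ".js"],
];

function rewriteSpecifier(specifier: string): string {
  // Only relative specifiers can point at project source files;
  // bare package imports are left untouched.
  if (!specifier.startsWith("./") && !specifier.startsWith("../")) {
    return specifier;
  }
  for (const [inExt, outExt] of outputExtension) {
    if (specifier.endsWith(inExt)) {
      return specifier.slice(0, -inExt.length) + outExt;
    }
  }
  return specifier; // no explicit TS extension: nothing to rewrite
}
```

Note how narrow the scope is: only relative specifiers with an explicit TS source extension are touched; everything else passes through unchanged.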
We got an issue on the Parcel repo about this and I was very surprised by the behavior here FWIW. Import specifiers always reference source files, not compiled ones. With any tool that doesn't compile files 1-1 like TSC does (like bundlers, runtimes, linters, editors, etc.), the current behavior doesn't really make sense, since the .js file you're forced to reference will never exist. In addition, it's possible that a source directory contains both a .ts and a .js file of the same name, and it should be possible to reference both of these explicitly.
I don't think it's really possible for us to implement TypeScript's behavior here, and it's not great that the TypeScript language is imposing semantics on import specifiers. Build tools that compile files 1-1 may have to rewrite import specifiers from .ts to .js to preserve source semantics (you know what the output filenames will be), but this should not affect other tools supporting TS where the mapping of source semantics to target files is different.
I don't think it's really possible for us to implement TypeScript's behavior here
TypeScript has always used output paths for specifier resolution, since it first added initial node
support many years ago; if input path resolution has worked for a given tool layered over TS, it's been an artifact of the common case being that both the input and output paths are usually the same. What's new is that with esm, they're not usually the same nearly as often thanks to constraints in node itself.
Assuming Parcel gets a `tsconfig` to forward to TS itself, you can totally calculate the output paths and map back to the input paths. (And this is particularly important because imports refer to the JS `outDir` in the tsconfig, while types will get spit out into the `declarationDir`, and you'll need to know both mappings if resolving types for project references.) There's a pretty straightforward static mapping of extensions, too.
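Assuming a simple `rootDir`/`outDir` pair read from such a tsconfig, the output-to-input remapping could be sketched as follows (the helper name and example paths are illustrative, not a TypeScript API):

```typescript
// Rough sketch of mapping an emitted output path back to its source file,
// assuming a flat rootDir/outDir layout. mapOutputToInput and the example
// paths are hypothetical and only illustrate the static extension mapping.
function mapOutputToInput(outputPath: string, outDir: string, rootDir: string): string {
  if (!outputPath.startsWith(outDir + "/")) {
    return outputPath; // not under the output directory; leave as-is
  }
  const relative = outputPath.slice(outDir.length);
  // Static extension mapping: an emitted .js corresponds to a .ts source.
  return rootDir + relative.replace(/\.js$/, ".ts");
}

console.log(mapOutputToInput("dist/foo/index.js", "dist", "src"));
// "src/foo/index.ts"
```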
TypeScript has always used output paths for specifier resolution, since it first added initial node support many years ago
This is clearly not true, or at least a twisting of what actually occurred. If I write `import './foo'` in my code, TypeScript resolves it to `foo.ts` for purposes of resolving imported types, jump-to-definition, and everything else it does. It only happened that `'./foo'` was also a valid specifier in the compiled code, which Node and other tools would resolve to a `.js` file (assuming there wasn't another file with the same name and a higher-priority extension in the output directory). This is very different from saying the author intended to refer to a non-existent `foo.js` file. If that were the case, then any types (e.g. interfaces) they imported wouldn't exist, jump-to-definition would go to a generated JS file rather than their source TS, etc.
The problem is, now you're saying that `import './foo.js'` should also resolve to `foo.ts` for all of these purposes, which just seems wrong. If I have a `foo.js` file in my source directory, now I cannot access it.
TypeScript has always used output paths for specifier resolution, since it first added initial node support many years ago
I hope you see now that this was a mistake. Just because the mistake was made years ago doesn't make it right.
if input path resolution has worked for a given tool layered over TS, it's been an artifact of the common case being that both the input and output paths are usually the same.
I’m sorry, but this is just wilfully ignorant. It's not just "a given tool"; it's almost every tool in the TypeScript ecosystem that runs into this problem, and countless users, even those using your own compiler, who fall into this trap.
What's new is that with esm, they're not usually the same nearly as often thanks to constraints in node itself.
What’s new is that you could weasel your way out of this before, because users were able (for most use cases) to avoid the problem by omitting extensions entirely. But now you have introduced `node16` in a way that makes this impossible, hence this issue.
Note this is not something you can blame on Node. Node only has to care about JavaScript sources, and for JavaScript users this problem does not exist, at least not to the same extent. But with TypeScript it becomes a problem, because TypeScript uses custom extensions, which is why it's your problem if those extensions are not properly supported.
Assuming parcel (…)
And this is where it keeps going wrong. Now you attempt to single out Parcel as if they are the ones doing something unreasonable, but it is almost every tool in your ecosystem that has to compensate for your shenanigans: whether it is Parcel, Vite, SWC, or Webpack's ts-loader plugin, all struggle with this issue. Your refusal to accept responsibility for this issue is having a toxic effect on the community, where every project that wishes to deal with TypeScript sources is forced to emulate how your compiler handles Node's resolution. This is not a feasible or acceptable solution for all of those projects.
I have attempted to explain this problem in many ways to you, @weswigham, @RyanCavanaugh and @andrewbranch, but it’s met with nothing but crickets. As this is the most upvoted issue of the past year, which you seem content to falsely close as a duplicate, I can only say that the TypeScript community is gravely disappointed in your failure to resolve this issue.
The problem is, now you're saying that import './foo.js' should also resolve to foo.ts for all of these purposes, which just seems wrong. If I have a foo.js file in my source directory, now I cannot access it.
We've looked up the `.ts` input for a `.js` extension for years - that's not new behavior. Likewise, if you're in a monorepo-like setting with many packages, if you said `require("other_package/foo")`, that resolves to the output location of `"other_package/foo"` (something like `../packages/other_package/dist/foo/index.js`), which we then map back to the input file (something like `../packages/other_package/src/foo/index.ts`). Again, we've done this remapping for a very long time; some of the tools built around us have evidently taken some simplifying shortcuts in their resolution behavior, unfortunately, but we've always done this. Not only would it be inconsistent and incomplete to do otherwise, it'd be a breaking change.
@weswigham wrote:
(…) it’d be a breaking change.
Oh, c’mon, that’s sheer bollocks. I’d be willing to take the effort of writing a concrete proposal to implement this without requiring any breaking changes. It might require a new config switch, but I think that’s worth it. But I’ll first need some confirmation you’d take such a proposal seriously, given your stance so far.
Absolutely not. We've said as much and rejected such proposals in the past already. Countless times. Before esm was even a thing, even.
I hope you see now, this was a mistake. Just because the mistake was made years ago, doesn’t make it right.
I don't want to explain it again because I did upthread and in other threads, but unlike extension rewriting, which is error-prone and unable to handle dynamic imports, our method of not touching the imports the user wrote and leaving them as-is has none of those problems. That's a real benefit. Just because some new runtime feature has made some style harder in combination with it doesn't negate those advantages or make the decision a mistake.
at least not to the same extent
It exists to exactly the same extent with any compiled js code. Even if the input and output extensions are both js, the compilation process can (and often does) change the output directory structure.
But now you introduced the node16 in a way that makes this impossible
I'd have loved Node's CJS to be forward compatible with ESM, and Node's ESM to be functionally similar to Node's CJS; I advocated for that for years. But neither is true, and we can't accurately type check or resolve specifiers in your code if we pretend they are. We will miss real errors, and that will cost real people real time and money in the real world. The differences are fundamentally irreconcilable outside of a whole-application processing pipeline like a bundler for a webpage (where you can statically link up all the ESM in your tree and remove the async-ness yourself, while also imposing your own specifier resolution scheme). And these modes aren't explicitly for bundlers, they're for Node. If you're using a bundler that's getting all wishy-washy with your resolution rules, it honestly needs custom checking and resolution (in the same way Deno and the browser honestly do nowadays; we just don't have the time to maintain more targets) - otherwise you'll have to be satisfied living with the overlap between what specific special stuff the bundler supports and what the runtime we're checking for actually supports.
met with nothing but crickets
I'm confused with how actively engaging in the issue and explaining our design constraints, rationale, and behavior, repeatedly, is "crickets". If you actually wanted crickets we could always lock the issue. ;)
A question for you @weswigham -
If CommonJS had always required file extensions from the very beginning, before TS was a thing, do you think TS would have been designed the same as it is now, to require importing `.js` files?
Probably. Considering cjs has always allowed extensions, and, correspondingly, so have we (the output one) for a long time.
Allowed and required are two very different things. I find it very hard to believe that TS would be designed from the ground up with the requirement that you have to import non-existent `.js` files. That would be a real head-scratcher (as is your principled stand to not fix this issue).
some of the tools built around us have evidently taken some simplifying shortcuts in their resolution behavior
No, tools implement the Node resolution algorithm, which is what TS claims to support as well. Nowhere in the Node resolution algorithm does it say to replace `.js` with `.ts`. If TypeScript wants to have its own non-standard resolution algorithm, I guess it can, but then it shouldn't claim to be equivalent to Node's.
In order to implement this behavior, every other tool will need to have TypeScript-specific behavior as well. We'd need to check if the file the import is contained within is a `.ts` or `.tsx` file, and if the specifier ends in `.js`, `.cjs`, or `.mjs`, then replace that with a different extension. We'd have to search through the filesystem to find out which extension to use, either `.ts` or `.tsx`. If both exist, we'd have to pick one.
This is a significant difference from the Node resolution algorithm. It is also less deterministic and explicit, because you can't specify the exact file you want; you have to rely on the resolver to guess which one you might have meant.
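Spelled out, the TypeScript-specific step being objected to would look something like this sketch (`resolveTsStyle`, `fileExists`, and the .ts-before-.tsx tie-break are assumptions made for illustration only):

```typescript
// Sketch of the TypeScript-specific lookup layered on top of plain Node
// resolution: when a .ts/.tsx importer references a .js/.cjs/.mjs specifier,
// probe the filesystem for a matching source file instead.
function resolveTsStyle(
  specifier: string,
  importerIsTypeScript: boolean,
  fileExists: (path: string) => boolean
): string {
  const jsExtension = /\.(c|m)?js$/.exec(specifier);
  if (importerIsTypeScript && jsExtension !== null) {
    const base = specifier.slice(0, jsExtension.index);
    // If both .ts and .tsx exist, one of them has to win arbitrarily.
    for (const ext of [".ts", ".tsx"]) {
      if (fileExists(base + ext)) {
        return base + ext;
      }
    }
  }
  return specifier; // fall back to plain Node-style resolution
}

const sources = new Set(["./foo.ts"]);
console.log(resolveTsStyle("./foo.js", true, (p) => sources.has(p)));  // "./foo.ts"
console.log(resolveTsStyle("./foo.js", false, (p) => sources.has(p))); // "./foo.js"
```

Note the filesystem probing and the tie-break: both are exactly the non-determinism the comment above complains about.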
Since I was @-mentioned here, I will make only one contribution to this, uh, discussion, if you can call it that.
Earlier in this thread there were arguments about how writing `.js` extensions when you’re targeting systems like Deno, Parcel, Webpack, etc. is both painful and nonsensical.
`node16` and `nodenext` were not made for those. They were made for Node. It is extremely likely that you should not use these modes if your module resolver is not the one built into Node v16+! We thought that putting "node" in the name would make that clear, but there seems to be a lot of confusion on that front. We are looking into a mode that allows `.ts` file extensions; it will likely require `noEmit` (or `emitDeclarationOnly`), and it will definitely not rewrite those extensions if emit is allowed. The idea is that these make sense for systems like Deno and bundlers which truly do take these module specifiers and resolve to other `.ts` source files on disk. In other words, in all cases, you write module specifiers that are valid for the end system that is going to consume them. You can track #37582 for this. Please understand that these are very different use cases from compiling code for Node; we are looking into making them better, and they are not arguments for why extension rewriting needs to exist.
How're they nonexistent? They're the name of the file you're importing at runtime. Say you wanted to query `import.meta.url` for the current module's path - it's going to be a `.js` file, not a `.ts` one, and it's going to be in the output dir, not the input one. It would be super strange and a massive disconnect for the dynamic resolution machinery to operate by one set of rules (all `.js` extensions in the output dir, all the time), while static resolution invented a different set (e.g., `.ts` extensions in the input dir). Now, again, in a bundler, they may shim these mechanisms so these statements aren't true, but, again, these Node modes are for modelling Node, not bundlers.
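The dynamic-resolution point can be illustrated with a small sketch (the `siblingAsset` helper and the paths are hypothetical; `metaUrl` stands in for `import.meta.url` so the logic can be shown in isolation):

```typescript
// At runtime, URL resolution against import.meta.url happens relative to
// the *emitted* .js module in the output dir, never the .ts source.
function siblingAsset(metaUrl: string, relativePath: string): string {
  return new URL(relativePath, metaUrl).href;
}

// In a module compiled from src/foo.ts to dist/foo.js and run by Node,
// import.meta.url is the output file's URL, so siblings resolve in dist/:
console.log(siblingAsset("file:///app/dist/foo.js", "./data.json"));
// "file:///app/dist/data.json"
```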
`node16` and `nodenext` were not made for those. They were made for Node.
My two cents here. While this is true, the current situation is that when Node is one of the target outputs, it requires the `.js` extension to be specified. Meaning you can have it one way, but not the others. That completely breaks multi-output support. That's the problem we are facing now.
Yeah, bundlers usually rewrite `import.meta.url` and other things like this to act as if they referred to the original source files as well, to preserve the same behavior as if they were run directly in Node or a browser. The individual compiled JS files will never exist on disk, only the final bundles, so it feels weird to reference them rather than the source files.
Personally, I'm fine with having another mode that allows `.ts`. I don't really mind whether the TSC compiler itself rewrites imports or not. I would probably prefer it not to do anything by default, and let the bundler handle that. In my mind, the problem is that TS the language currently requires either no extension or `.js` depending on the mode, and doesn't allow `.ts`. This means that even if a bundler allows this, users still can't do it, because TSC and IDEs error during type checking. If the error occurred only when compiling using TSC instead, that would be better, because it would allow other resolution behaviors that might not make sense for 1:1 compilation.
Perhaps this is also a documentation problem. If the `node16` mode is only meant to be used when you are compiling files 1:1 and targeting directly running in Node rather than going through any other tools, then that should be clearer. Right now, tools are getting a lot of issues asking them to support rewriting paths (in reverse of what is proposed here), perhaps because people are misusing or not understanding the modes.
Related, if you've stumbled into this issue and are ever having trouble with a bundler, the somewhat old strategy of running the TS compiler on the code first and then piping that output into the bundler generally works, if somewhat more build-time-intensive. The issue is usually with the various "ts compat" tools and plugins people have written to let people consume TS code directly in the bundler - TS just doesn't have a target that reflects the synthetic environment these compat layers usually present to people, since they often take shortcuts in how they build a TS project graph (ie, not using our compiler APIs to build said graph in the same way we do), and rely on being close enough that it usually doesn't matter. It just sometimes does 😅
Bug Report
TypeScript 4.7 RC introduces the `"module": "node16"` setting. When used, this requires users to use the `.js` extension in import paths. I strongly believe this to be a mistake and TypeScript should instead allow users to use the `.ts`/`.tsx` extension of the source file. It would then be the TS compiler’s responsibility to rewrite this to the output file as necessary.
I have several reasons for requesting this change:
- The `.js` extension most likely does not exist on disk. It may exist on disk if a build has been created and the output is placed right alongside the sources (not a common configuration), but otherwise the file is simply not there, or in a different place than one might expect. This is confusing to users, because they need to manually consider how output files are created.
- The `.js` extension makes the source code less portable for full-stack projects. If a user wants to use the same sources for a frontend project that uses a bundler, they will be out of luck. A bundler would be able to understand a reference to the `.ts` file (because that one actually exists on disk!), but would struggle with references to files that do not exist for its configuration.
- The `.js` extension makes the source incompatible with Deno projects, as they use the `.ts`/`.tsx` extensions, because those are the ones that actually exist.
- The `.js` extension is inconsistent with type imports. Does `import type { MyType } from "./myFile.js"` even work? It would not make sense, because the JS file contains no types. But if it doesn’t work, does that mean I still have to use the `.ts` extension only for the types? That would be highly annoying.
🔎 Search Terms
extension rewriting esm module
🕗 Version & Regression Information
TypeScript 4.7 RC1
🙁 Actual behavior
I am forceded to write `import { myFunction, type MyType } from "./myFile.js"` despite `./myFile.js` not being a valid path in my project.
🙂 Expected behavior
I should be able to write `import { myFunction, type MyType } from "./myFile.ts"` because `./myFile.ts` is the actual file I am referring to. Upon compilation I would expect TypeScript to rewrite the path so the output works correctly too.