Open rom1504 opened 1 year ago
An alternative way to solve problem 2:
// file.js
export function example() {
  console.log("Hello")
}
example()

// index.js
import './file.js'
What about this code?
An alternative to the require('name')() pattern
esm has many issues including poor support for the require("prismarine-item")("1.19.3") style used in many places in PrismarineJS
@rom1504 consider this:
ESM should be used in the source and CJS in the output (when the project is buildable). Why?
ESM has better syntax highlighting / better DX, so it's much easier to read ESM code.
Only ESM gives you this, since you can't do weird things like dynamically changing exports, e.g.:
module.exports.activate = () => {
  exports.deactivate = () => { /* ... */ }
}
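For contrast, here is a minimal sketch of why ESM doesn't allow this: the export list is fixed at parse time, so tools can see every export without running the code. (The module and function names below are made up for the demo.)

```javascript
// plugin.mjs: in ESM, the export list is fixed at parse time
export const activate = () => 'activated'
export const deactivate = () => 'deactivated'

// Exporting from inside a function is a SyntaxError in ESM,
// so tools always see the full, static export list without running the code:
// function setup() { export const reload = () => {} }  // <- SyntaxError

console.log(activate(), deactivate())
```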
esm has many issues including poor support for the require("prismarine-item")("1.19.3")
This pattern should not be used; it's better to have a structured file where all imports are at the top, so you can easily see what other modules the file uses. The ESM pattern also works better with bundlers. In frontend, where every byte matters, there is a technique called tree shaking, and in most cases it only works with ESM. The fact that require("prismarine-item")("1.19.3") works and is commonly used across the modules doesn't mean it's good.
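A rough sketch of what tree shaking relies on (module and helper names are invented for the demo):

```javascript
// utils.mjs (imagine this in its own file):
export function usedHelper() { return 'kept' }
export function unusedHelper() { return 'dropped' }

// A consumer that does: import { usedHelper } from './utils.mjs'
// lets a bundler (esbuild, rollup, webpack) statically prove that
// unusedHelper is never referenced and strip it from the bundle.
// A dynamic require('./utils')(...) escapes that analysis entirely.
console.log(usedHelper())
```

The point is that static import statements give the bundler a complete dependency graph at build time; call-site require() does not.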
So imo ESM should be used where possible, but without forbidding the classical require functions. I also prefer to use .mjs for scripts, since they are not meant to be used by other consumers (they don't have an API), and the code is much easier to read because of imports & top-level await.
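For illustration, a tiny sketch of what top-level await buys you in an .mjs script (the delay is just a stand-in for any async work like fetch or fs.readFile):

```javascript
// script.mjs: top-level await lets one-off scripts read top to bottom
import { setTimeout as sleep } from 'node:timers/promises'

const started = Date.now()
await sleep(10)  // no async-function wrapper, no .then() chains
const elapsed = Date.now() - started
console.log('waited about ' + elapsed + 'ms')
```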
Personally, I don't find much value in TypeScript beyond frontend/React-based projects (where you're going to need a bundler regardless, so you may as well use TypeScript). For backend code, I've not noticed any net productivity gain (after years of use) from TypeScript on tested code. Perhaps there can be one if the project is sufficiently big, but to me that would mean the project should be downsized and lots of code/complexity refactored out, rather than being a call to rewrite things in a typed language.
Pretty much everything that types can catch, unit tests can catch too, and you're going to need unit testing anyway to catch far more than just type errors. With 100% test coverage, types don't matter much at all. As for the autocomplete situation, I understand when it's external code. But for internal code on your current project, if you can't memorize the current project structure to a good degree, that's IMO indicative that the project is too complex. Also, you can use your own project's .d.ts file for internal lookups; it doesn't have to be just for external code.
As for downsides:
As for the doc/type desync issue, I like the idea of generating the docs from the .d.ts files. Of course it would require a custom parser, but it allows the docs to be both machine-readable and human-readable (with markdown formatting) at the same time, without duplication or desync.
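A toy sketch of the idea (not the actual implementation, and the .d.ts declarations below are invented): pull exported declarations plus their JSDoc out of a .d.ts string and emit markdown sections.

```javascript
// Toy sketch: turn a .d.ts snippet into markdown doc entries.
const dts = `
/** Loads registry data for a given minecraft version. */
export function createRegistry(version: string): Registry

/** Current protocol version. */
export const protocolVersion: number
`

// match a one-line JSDoc comment followed by an exported declaration
const entryRe = /\/\*\*\s*(.*?)\s*\*\/\s*export\s+(?:declare\s+)?(function|const|class)\s+(\w+)/g
const markdown = [...dts.matchAll(entryRe)]
  .map(([, doc, kind, name]) => '### ' + name + ' (' + kind + ')\n' + doc)
  .join('\n\n')

console.log(markdown)
```

A real parser would use the TypeScript compiler API instead of a regex, but the input/output shape is the same: one source of truth, two renderings.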
if you can't to a good degree memorize the current project structure, that's IMO indicative that the current project is too complex
Yes, I understand this is my personal issue; I'm bad at memorizing things (cspell fixes my typos from time to time). But autocomplete is super beneficial for first-time contributors or someone who just wants to explore the codebase (definitions, references), and I really care about that in my OSS projects. Yes, you can use .d.ts files, but that doesn't force other contributors to keep them in sync with the runtime code, and it also gives up the chance to avoid writing your code twice (e.g. via type inference on functions). It's better when types live inside your actual code.
less hackable, more strict syntax rules
I've heard this hundreds of times. If you know how to use it, it won't be less hackable for you. From my personal experience, I can write any code in TS freely (I still think it's okay to use anys or even // @ts-nocheck).
waste time to build: even a half second delay can be frustrating when you want a quick write-and-test workflow
Doesn't a 0.2s build give you a quick write-and-test workflow? And if you care about speed, there are different options...
make debugging more challenging because you now have to worry about .js/.ts mapping
Agree, but in most setups this is already handled. I don't see a problem here.
As for the doc/type desync issue, I like the idea of generating the doc from the .d.ts files. Of course it would require a custom parser, but it allows the doc to both be machine readable and human readable (with markdown formatting) at the same time and not be duplicative or desync.
This is what I've done in flying squid recently :)
Also, btw, I was talking about using ESM, not TS. TS requires ESM syntax, but otherwise ESM has nothing in common with TS. In a Node.js app where a build step is impossible, I try to use .mjs (if it's app code, not a lib). Yes, I also add // @ts-check at the start, so I can instantly catch stupid mistakes like referencing undefined variables (useful when prototyping).
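For anyone unfamiliar with the pragma, a minimal sketch (the script contents are made up; the commented-out line shows the kind of mistake that gets flagged):

```javascript
// @ts-check
// With the pragma above, the editor (or tsc with checkJs enabled) flags
// mistakes in plain .js/.mjs files before you run them: no build step needed.

let playerCount = 0

function addPlayer() {
  playerCount += 1
  // plyerCount += 1  // <- flagged: Cannot find name 'plyerCount'.
}

addPlayer()
console.log(playerCount)
```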
Anyway, thanks for sharing your personal experience! I totally understand your point, but in big OSS projects there should be stronger linting / code documentation patterns, which TS successfully provides. And when I say big projects, I don't mean projects that need to be split into smaller packages. For example, flying-squid would stay a big project: you either have a lot of code inside, or a lot of code outside and still a lot of packages inside the project. Also, I don't think we're going to create hundreds of small packages just to change worlds inside the flying-squid server, for example. There is a lot of plugin code that is flying-squid specific.
This pattern should not be used
Why?
Answering with just code examples:
// some require calls go here
// ... a lot of code ...
const doThing = (version) => {
  const registry = require('prismarine-registry')(version)
}
import RegistryLoader from 'prismarine-registry'
// ... a lot of code ...
const doThing = (version) => {
  const registry = RegistryLoader(version)
}
Which variant do you think is cleaner?
With the 2nd one it's much easier to figure out that the file uses (or may possibly use) the prismarine-registry module. You don't need to search for all the require calls in the file to see what dependencies it has.
The code that uses ESM is much easier to understand (at least for me). Also, ESM is the much newer format; I don't see any problems with it (if we don't take into account the poor Node.js implementation).
we don't have the // ... a lot of code ... part
But yeah, we could change to the second pattern even without moving to ESM. It doesn't change much.
commonjs is adopted by more packages
is still the most valid point.
ESM is still not used by many node.js packages
https://nodejs.org/api/net.html I mean, look: even the Node.js docs still use require.
we don't have the // ... a lot of code ... part
You can't say that objectively; it really depends...
ESM is still not used by many node.js packages
two important parts from my comments:
if we don't take into account poor node.js implementation
esm should be used in the source and cjs in the output (when the project is buildable)
Even I myself always use a build step with a CommonJS target output. I always try publishing CJS (unless it's front-end-only code). I like how bun and esbuild support both require and import in the same code, because sometimes you still need to require a thing only at a specific location in the code and you can't use await. Node.js and Vite are really annoying on this point. Btw, since ESM is the standard now, many new devs don't even know what CommonJS is :)
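For completeness, Node itself does offer an escape hatch for needing require() at a specific spot inside an ES module, though it's more ceremony than what bun/esbuild allow (node:os here is just a stand-in for any CJS-only dependency):

```javascript
// mixed.mjs: Node's stock workaround for require() inside an ES module
import { createRequire } from 'node:module'
const require = createRequire(import.meta.url)

// now a call-site require works, e.g. only when some branch is taken:
const os = require('node:os')
console.log(typeof os.platform())
```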
commonjs is standard for node.js
current consensus is NO
Why?