Closed: JoakimCh closed this issue 2 years ago.
You're right, I should be looking to provide an NPM package. I'll find out how to do that, but if you have any info that would surely help!
If you're new to the whole NPM and Node.js ecosystem then it can be a bit overwhelming to do this.
I guess the steps are something like this:
- Figure out a performant way for the Node.js module to communicate with the binary, maybe via Unix domain sockets. Or via stdin/stdout?
- Spawn the binary using the Node.js "Child process" API.
- Research how package.json and publishing to NPM work.
- Have an NPM install script download the correct pre-compiled binary for the platform?
- Maybe consider writing an "ECMAScript module" instead of using the old "CommonJS" module system. If doing so, here is how to require CommonJS modules inside those newer modules (just bring back the `require` function):

```js
import { createRequire } from 'module'
const require = createRequire(import.meta.url)
```
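The spawn-and-pipe idea from the steps above can be sketched roughly like this. This is only an illustration: `cat` (which echoes stdin unchanged) stands in for the real minify binary so the example runs anywhere, and `pipeThrough` is a hypothetical helper name, not part of any published API.

```javascript
import { spawn } from 'node:child_process'

// Pipe `input` through a child process and resolve with whatever it writes
// to stdout. `cat` stands in for the minify binary here, so this runs
// without the binary installed.
function pipeThrough(command, args, input) {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args)
    let out = ''
    child.stdout.on('data', chunk => { out += chunk })
    child.on('error', reject) // e.g. the binary is not installed
    child.on('close', code => {
      if (code === 0) resolve(out)
      else reject(new Error(`process exited with code ${code}`))
    })
    child.stdin.end(input) // send the content, then close stdin
  })
}

console.log(await pipeThrough('cat', [], 'let i=1; console.log(i)'))
```

With the real binary, `command` would be the path to the downloaded `minify` executable and `args` would select the MIME type.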
I started thinking about what I would like the API to look like and came up with this.
```js
// Example of a script using the module
import * as minify from '@tdewolff/minify' // since there is already a "minify" NPM package, it must be scoped
let result
result = await minify.file('/whatever.js')
result = await minify.content('js', 'let i=1; console.log(i)')
```
The module's main file could be something like this:
```js
// The code here runs only once, even if the module is imported in several
// places, so you can connect to the binary here; the exported functions can
// share any state defined at module scope.
export function file(path) {
  return new Promise((resolve, reject) => {
    // resolve(minifiedContentOfFile)
    // or reject(new Error('error message'))
  })
}

export function content(typeOfContent, content) {
  // same idea, but minify the given string instead of a file
}
```
Of course these things are up to you :)
That looks pretty good to me, note the interface that esbuild has https://esbuild.github.io/api/#js-specific-details. Also see https://github.com/evanw/esbuild/tree/master/lib for a reference of the NPM package.
FYI esbuild adopted a new installation strategy in v0.13.0
Any updates on when this can be added? Would like to add your library to https://github.com/privatenumber/minification-benchmarks
I'm actually almost done with an npm package with an API for minify and automatic download of the native binary. Will release soon.
Hi @JoakimCh, that sounds awesome! Thank you for the effort, I'd be happy to integrate it with this repository. Let me know if you need any help!
Sorry about the late release, I'm struggling with some health problems and had to delay it, but here you guys go: https://www.npmjs.com/package/tdewolff-minify
I didn't write any tests or documentation (other than JSDoc in the source code) and I can't really be bothered with that for now... Whether this is production ready or not, I don't have an opinion; that's up to everyone to decide for themselves.
Using this JS API might have some overhead compared to using Minify directly, hence performance tests shouldn't claim that they're testing the native performance of Minify.
```js
const minify = new Minify({maxConcurrency: 4})
```

Initializing it like this will try to always run 4 concurrent processes when lots of files are thrown at it.

```js
const minify = new Minify()
```

This, on the other hand, will check how many threads your CPU has available and use up to that many processes.
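Internally, such a cap can be implemented with a small promise queue. A sketch of the general technique (not the package's actual implementation):

```javascript
import { cpus } from 'node:os'

// Run async jobs with at most `max` in flight at once; extra jobs wait in a queue.
class ConcurrencyLimiter {
  constructor(max = cpus().length) { // default to the number of CPU threads
    this.max = max
    this.running = 0
    this.queue = []
  }
  run(job) { // job: () => Promise
    return new Promise((resolve, reject) => {
      this.queue.push({ job, resolve, reject })
      this.next()
    })
  }
  next() {
    if (this.running >= this.max || this.queue.length === 0) return
    const { job, resolve, reject } = this.queue.shift()
    this.running++
    job().then(resolve, reject).finally(() => {
      this.running--
      this.next() // a slot freed up, start the next queued job
    })
  }
}
```

Jobs submitted via `run()` start immediately while fewer than `max` are in flight; the rest wait their turn.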
All of these functions other than `minify.fileToFile` use stdin and stdout to have the content minified, so one minify process can only handle one file and has to be relaunched for the next one. That probably adds the most overhead (waiting for it to launch and be ready again)... It might be interesting if one minify process could keep receiving files via stdin, maybe separated by something like 0x1C (the ASCII "File Separator" code), and use the same scheme for stdout.
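On the JS side, framing and unframing such a 0x1C-separated stream could look like the sketch below. Note this is speculative: the minify binary does not currently understand this framing, and the scheme assumes 0x1C never appears in the content itself (safe for text, not for arbitrary binary data).

```javascript
const FS = 0x1C // ASCII "File Separator"

// Join several file contents into one payload, separated by 0x1C bytes.
function frameFiles(contents) {
  const parts = []
  contents.forEach((content, i) => {
    if (i > 0) parts.push(Buffer.from([FS]))
    parts.push(Buffer.from(content))
  })
  return Buffer.concat(parts)
}

// Split a collected stdout buffer back into the individual files.
function splitFiles(buffer) {
  const files = []
  let start = 0
  for (let i = 0; i < buffer.length; i++) {
    if (buffer[i] === FS) {
      files.push(buffer.subarray(start, i).toString())
      start = i + 1
    }
  }
  files.push(buffer.subarray(start).toString()) // trailing file after the last separator
  return files
}
```

A length-prefixed framing (sending each file's byte length ahead of its content) would avoid the reserved-byte problem entirely, at the cost of a slightly more complex protocol.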
@JoakimCh Excellent work! Thank you so much for taking the time, it is highly appreciated :-D
In terms of maintenance, do you wish to maintain the NPM port for the future, or would you like to incorporate it with this repository where you and I both can maintain it? Let me know what you think.
As for streams: adding an ASCII file-separator code could possibly work, but otherwise we can add a special function for NPM access that accepts, for example, JS arrays of files or some JSON or binary structure (protocol buffers?). The file-separator idea could work efficiently (though I want to avoid scanning the file twice: once to find the separators and once to parse), but there might be a better option (stop parsing at a file separator and start a new parser). Something like that, or pass the lengths of the concatenated files in a binary stream...
I'm thinking that I can maintain it, since I plan to use your minifier for a server I'm developing. I'll look into how it can be merged with this repository though.
When it comes to performance: how does it handle the command `minify -o out/ *`? Are the files minified concurrently using several threads?
What I should do is write a performance test which does the same, but using the JavaScript API, so we can analyze the performance difference and how to reduce it. Then, if we implement different ways for JS to communicate with minify, we will be able to measure them against each other.
Yes, the files are minified in parallel. There is no functionality yet to separate a single stream into different files, and simply concatenating is not possible. Am I correct that you're calling the `minify` binary and not the API of the Go library? I notice that you use streams to send data to the minifier binary; how slow is it to launch the binary? It would be nice to test how fast it is to minify file A twice versus file B once, where B is double the size of file A. That way we minify the same amount of data, but the difference should be the slowdown due to launching the binary. See the benchmarks directory for some example files ;-)
In any case, it might be quite useful if the binary (not the library) would accept streams that separate files using the 0x1C file separator character. I'm going to work out that idea and come back to you ;-)
Yes, it was easiest for me to use the precompiled minify binary, since I'm not experienced with binding Node.js to functions in libraries.
I'm writing some tests now (when more complete I will merge them with my GitHub repository). But I wanted to share some of my findings already:
My API `minify.fileToFile` just launches a minify process that does all the work (no stdin/stdout used). This didn't add much overhead when run on each file in the benchmark folder, compared to minify crunching all of them in one run. There was some overhead, but only if I forced a single concurrent minify process; with 4 concurrent ones there was no difference.
Part of my test code:
```js
import fs from 'node:fs' // assumed imports; the snippet is partial
import { extname } from 'node:path'

const jobDonePromises = []
const dirents = fs.readdirSync('test_data/', {withFileTypes: true})
for (const dirent of dirents) {
  if (dirent.isFile()) {
    switch (2) { // hard-coded selector for which strategy to benchmark
      case 0: { // node reads the file and feeds it to minify, then node feeds stdout to a file
        const readStream = fs.createReadStream('test_data/' + dirent.name)
        const writeStream = fs.createWriteStream('nodeAPI_out/' + dirent.name)
        jobDonePromises.push(minify.pipe(extname(dirent.name).slice(1), readStream, writeStream))
      } break
      case 1: { // minify reads the file, node reads stdout
        jobDonePromises.push(minify.file('test_data/' + dirent.name))
      } break
      case 2: { // minify does file to file
        jobDonePromises.push(minify.fileToFile('test_data/' + dirent.name, 'nodeAPI_out/' + dirent.name))
      } break
    }
  }
}
```
Case 0 was the slowest, roughly 7 times slower. Case 1 was around 4 times slower. Case 2 was the one I mentioned above.
JavaScript being single-threaded is of course the biggest bottleneck, I guess, so reading and writing files from a single thread is probably the problem...
I will try to learn some more though about what's going on.
Yes, a big part of the stream bottleneck will be JS performance. Case 2 has very little data sent between JS and `minify`, since `minify` does all the file reading itself. What might be a great idea: we could write one or more files to a temporary folder, run minify on that folder, then read the results back into JS. That would allow compressing multiple files and might avoid the bottleneck of stdin/stdout streams between JS and the binary. Of course, the overhead is writing and reading the files, so I'm not sure if it is worth it...
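The temp-folder round trip might be sketched like this. To keep the example self-contained and runnable, the actual minify invocation is replaced by a plain file copy (`copyStep`); real code would spawn something like `minify -o outDir inDir` at that point, and the helper names are hypothetical.

```javascript
import fs from 'node:fs'
import os from 'node:os'
import path from 'node:path'

// Write contents to a temp dir, run a batch step over it, read the results back.
function batchViaTempDir(contents, batchStep) {
  const inDir = fs.mkdtempSync(path.join(os.tmpdir(), 'minify-in-'))
  const outDir = fs.mkdtempSync(path.join(os.tmpdir(), 'minify-out-'))
  try {
    contents.forEach((content, i) =>
      fs.writeFileSync(path.join(inDir, `${i}.js`), content))
    batchStep(inDir, outDir) // real code would spawn the minify binary here
    return contents.map((_, i) =>
      fs.readFileSync(path.join(outDir, `${i}.js`), 'utf8'))
  } finally {
    fs.rmSync(inDir, { recursive: true, force: true })
    fs.rmSync(outDir, { recursive: true, force: true })
  }
}

// Stand-in batch step: copies files unchanged instead of minifying them.
const copyStep = (inDir, outDir) => {
  for (const name of fs.readdirSync(inDir))
    fs.copyFileSync(path.join(inDir, name), path.join(outDir, name))
}
```

Whether this beats the stdin/stdout path would depend on how much of the stream overhead is JS-side versus disk I/O, which is exactly what the benchmark would have to measure.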
I just uploaded the benchmark: https://github.com/JoakimCh/tdewolff-minify/tree/main/tests/benchmark_versus_native I can't be sure that using file separators will help performance, but if you want to test it then I can try writing an alternate API that will be optimized to use them.
Awesome work! I like how it automatically downloads the latest version from GitHub. I'm getting the following results:
```
Starting benchmarks...
Native minify: 555.978ms
minify.fileToFile: 709.665ms
minify.pipe: 4.091s
minify.file: 3.165s
minify.content: 3.325s
minify.content without file IO: 3.240s
All done.
```
The `minify.content` lines are strange. Are you sure that `fs.readFileSync` holds the content of the file, or is it just a file handle that will reread the file the next time?

I think we could add one additional test where the files are in-memory (read the file content before the test) and test `pipe` of the file content, as well as writing to a tmp file using `fileToFile` and then reading from the tmp file. I like the latter because we can easily scale it to minifying multiple files at once. I think the NPM port should provide two functions: `content`, which may accept an array of contents, and `file`, which minifies file to file on disk. What do you think?
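The `content`-accepts-an-array idea could normalize its input like this (a sketch; `minifyOne` is a hypothetical stand-in that real code would replace with a call to the binary):

```javascript
// Accept a single string or an array of strings and always return an array,
// preserving input order. The default `minifyOne` just returns the content
// unchanged, standing in for the real per-item minification.
async function content(type, input, minifyOne = async (t, c) => c) {
  const inputs = Array.isArray(input) ? input : [input]
  return Promise.all(inputs.map(c => minifyOne(type, c)))
}
```

Always returning an array keeps the call sites uniform whether one or many contents were passed.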
Forgot to update:
I ended up using bina to install the binary.
`tdewolff/minify` is benchmarked on https://github.com/privatenumber/minification-benchmarks!
Nice, thanks for the addition! Evidently our JS interface differs from `esbuild`'s, because when compared as binaries, `tdewolff/minify` is a little bit faster.
The benchmarks should be fair and unbiased as possible. If there's a specific improvement you can recommend I'd be happy to welcome a PR or issue.
With the `antd` artifact (the largest artifact), `tdewolff/minify` is slower than `esbuild` by ~2.4s. `tdewolff/minify` is undoubtedly very fast, but the JS interface cannot account for a 2.4s+ difference.
I think it does account for it. A quick test using `time` shows me that esbuild takes 0.587s while tdewolff/minify takes 0.428s (37% faster) on the antd artefact. I'll go ahead and see what is making the big difference for the JS port and come back to you ;-)
Appreciate the work a lot by the way!
I confirmed my data locally via binaries (no JS).
I `cd`'d into https://github.com/privatenumber/minification-benchmarks, made sure both minifiers are the latest, and ran them with `time` and the `hyperfine` benchmarking tool (average of 10 runs).
I'm still getting a ~2.7s difference (3.227 s - 500.1 ms).
```console
$ node_modules/.bin/minify --version
minify v2.9.24

$ time node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js
node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js  0.85s user 2.24s system 101% cpu 3.045 total

$ hyperfine "node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js"
Benchmark 1: node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js
  Time (mean ± σ):      3.227 s ±  0.374 s    [User: 0.858 s, System: 2.469 s]
  Range (min … max):    2.824 s …  4.028 s    10 runs

$ node_modules/.bin/esbuild --version
0.14.8

$ time node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify > antd.esbuild.js
node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify  0.35s user 0.08s system 98% cpu 0.433 total

$ hyperfine "time node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify > antd.esbuild.js"
Benchmark 1: time node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify > antd.esbuild.js
  Time (mean ± σ):     500.1 ms ±  71.0 ms    [User: 401.7 ms, System: 94.4 ms]
  Range (min … max):   433.2 ms … 629.2 ms    10 runs
```
Curious what specific commands you're running?
How strange. I run the same commands as you do, but I get 0.205s for tdewolff/minify and 0.407s for esbuild (I plugged in the power cable, which makes it faster than last time). I'm really struggling to understand why yours seems so extremely slow... what computer are you running on? I'm on an Intel i5-6300U @ 2.4GHz...
Local environment info:
```console
$ npx envinfo --system
  System:
    OS: macOS 12.0.1
    CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
    Memory: 7.11 GB / 64.00 GB
    Shell: 5.8 - /bin/zsh
```
But for minification-benchmarks, benchmarking runs on GitHub Actions (which yields the same results as my local env): https://github.com/privatenumber/minification-benchmarks/runs/4654820187?check_suite_focus=true
According to docs, it has:
Ubuntu 20.04.3 LTS: https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md
And what do you mean by the power cable that makes it faster?
(BTW, happy to take this to convo to a different Issue since this is a different topic)
Well, the power cable on my laptop enables higher performance ;-)
Could you please try using the `--sequential` and `-o=out` options for the binary (try the latest binary, v2.9.25)? Perhaps this is a trivial problem of concurrency that doesn't work well for one file, or perhaps some buffering/streaming problem for stdout that we prevent by writing to a file. For esbuild you can use `--outfile=out` for the same thing. It is simply impossible that `tdewolff/minify` is not in the same ballpark; there must be something off.
Seems using `-o` to output the file made all the difference! Thank you.
```console
$ hyperfine 'node_modules/.bin/minify -o=antd.tdewolff.js node_modules/antd/dist/antd.js' 'node_modules/.bin/esbuild node_modules/antd/dist/antd.js --outfile=antd.esbuild.js'
Benchmark 1: node_modules/.bin/minify -o=antd.tdewolff.js node_modules/antd/dist/antd.js
  Time (mean ± σ):     188.3 ms ±  13.0 ms    [User: 211.6 ms, System: 32.2 ms]
  Range (min … max):   168.5 ms … 215.6 ms    15 runs

Benchmark 2: node_modules/.bin/esbuild node_modules/antd/dist/antd.js --outfile=antd.esbuild.js
  Time (mean ± σ):     393.0 ms ±  17.8 ms    [User: 326.7 ms, System: 65.7 ms]
  Range (min … max):   369.1 ms … 422.0 ms    10 runs

Summary
  'node_modules/.bin/minify -o=antd.tdewolff.js node_modules/antd/dist/antd.js' ran
    2.09 ± 0.17 times faster than 'node_modules/.bin/esbuild node_modules/antd/dist/antd.js --outfile=antd.esbuild.js'
```
Curious whether your environment didn't yield this performance difference between piping with `>` and using `-o`?
I wonder if there's anything that can be improved in the way tdewolff/minify pipes output, because any JS API that wraps tdewolff/minify could be limited by this.

I will look into updating the minification benchmarks with this later, but I might have to include the file-system read in the tdewolff/minify benchmark, because this limitation will exist for any user who wants to use tdewolff/minify at peak performance.

I'm not sure if this will be a fair performance comparison, because the speed of file-system reads can vary and is not purely reflective of the minifier's performance, while other minifiers provide a JS API that directly hands over the output via stdout.
Yes, surely I need to buffer the writing to stdout; perhaps this is automatic on my system but not on yours. Really unsure. I'm using `bash`, and you?
I'm using zsh. My env details are provided in the comment above.
I ran benchmarks using `bash`, but piping with `tdewolff/minify` is still significantly slower:
```console
$ hyperfine \
>   'node_modules/.bin/esbuild node_modules/antd/dist/antd.js --outfile=antd.esbuild.js' \
>   'node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify > antd.esbuild.js' \
>   'node_modules/.bin/minify -o=antd.tdewolff.js node_modules/antd/dist/antd.js' \
>   'node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js'
Benchmark 1: node_modules/.bin/esbuild node_modules/antd/dist/antd.js --outfile=antd.esbuild.js
  Time (mean ± σ):     405.0 ms ±  23.7 ms    [User: 336.0 ms, System: 68.4 ms]
  Range (min … max):   375.2 ms … 437.9 ms    10 runs

Benchmark 2: node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify > antd.esbuild.js
  Time (mean ± σ):     350.8 ms ±  12.6 ms    [User: 289.5 ms, System: 57.1 ms]
  Range (min … max):   335.4 ms … 375.7 ms    10 runs

Benchmark 3: node_modules/.bin/minify -o=antd.tdewolff.js node_modules/antd/dist/antd.js
  Time (mean ± σ):     201.0 ms ±  66.3 ms    [User: 203.0 ms, System: 34.0 ms]
  Range (min … max):   170.7 ms … 388.4 ms    10 runs

  Warning: The first benchmarking run for this command was significantly slower than the rest (388.4 ms). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.

Benchmark 4: node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js
  Time (mean ± σ):      2.137 s ±  0.144 s    [User: 0.564 s, System: 1.621 s]
  Range (min … max):    2.021 s …  2.504 s    10 runs

Summary
  'node_modules/.bin/minify -o=antd.tdewolff.js node_modules/antd/dist/antd.js' ran
    1.75 ± 0.58 times faster than 'node_modules/.bin/esbuild node_modules/antd/dist/antd.js --minify > antd.esbuild.js'
    2.01 ± 0.68 times faster than 'node_modules/.bin/esbuild node_modules/antd/dist/antd.js --outfile=antd.esbuild.js'
    10.63 ± 3.58 times faster than 'node_modules/.bin/minify node_modules/antd/dist/antd.js > antd.tdewolff.js'
```
Also double-checked without `hyperfine` and with/without `time`, and I can confirm changing to `bash` doesn't affect the result.

In any case, I'm getting the same performance via GitHub Actions using `ubuntu-latest`, so you should be able to use that as a testing environment.
Haven't tested but I think this workflow will suffice:
```yaml
name: Benchmark tdewolff/minify
on:
  push:
    branches: [master]
  workflow_dispatch:
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - name: Benchmark tdewolff/minify
        run: |
          npm i antd
          npx bina tdewolff/minify
          time minify node_modules/antd/dist/antd.js > antd.tdewolff.js
```
Thanks, I've added the benchmark and added buffering for stdout. It should behave fine now in v2.9.26.
Thanks!
I just updated https://github.com/privatenumber/minification-benchmarks and `tdewolff/minify` is the fastest minifier now 🎉
Shared the news here: https://twitter.com/privatenumbr/status/1476378923231547399
Great news! I have some other optimizations in mind that I'd like to look at. Meanwhile, I also want to see how to improve the gzip compression as it seems subpar compared to others. Thanks for the great work!
@privatenumber I've released a new version with improved GZIP compression that is 4% faster :grinning:
Thanks! Re-benchmarking: https://github.com/privatenumber/minification-benchmarks/runs/4741728934?check_suite_focus=true
BTW, if you can publish to npm (even just the binary without a JS API), my repo will automatically update the benchmarks.
@privatenumber Thanks! Unfortunately I'm not in charge of the NPM library, and it seems that it doesn't need to update its version in NPM in order to reflect a new version release from this main package (looks like it automatically collects the last version). I suppose that whenever another minifier is updated, this one gets updated as well in the benchmarks?
Yep. It's not ideal though.
Perhaps you can look into an official npm distribution?
I am really looking forward to the npm package :) I can see there is some work proceeding in https://github.com/tdewolff/minify/tree/master/bindings/js and would love to see this package on npm.

The issue with https://github.com/JoakimCh/tdewolff-minify is that at the moment it does not support `darwin-arm64`.
Yes, I've been working on publishing a JS package officially. The honest truth is I have almost zero knowledge in working with JS packaging and production code, and it's not trivial to publish a JS package with Go (or C) code for each type of platform. I'm also limited in time!
I would really love to get some help packaging the JS package that is in the `bindings` directory. Do you happen to know a similar project that I can imitate?
I can try to provide help; I'll have a look. esbuild uses the "optional packages" trick to choose the right platform: they put "cpu" and "platform" filters in each prebuilt per-platform binary package in CI, and then add them all as "optionalDependencies" in the main package. I think SWC uses the same approach.
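For illustration, the platform-selection half of that trick boils down to mapping `process.platform` and `process.arch` to a per-platform package name, which the install then picks up from the optionalDependencies. A sketch, where the package names are hypothetical, not published packages:

```javascript
// Map the current platform/arch pair to a hypothetical per-platform package
// name, mirroring how esbuild picks among its optionalDependencies.
function platformPackage(platform = process.platform, arch = process.arch) {
  const supported = {
    'linux-x64': '@tdewolff/minify-linux-x64',     // hypothetical names
    'linux-arm64': '@tdewolff/minify-linux-arm64',
    'darwin-x64': '@tdewolff/minify-darwin-x64',
    'darwin-arm64': '@tdewolff/minify-darwin-arm64',
    'win32-x64': '@tdewolff/minify-win32-x64',
  }
  const key = `${platform}-${arch}`
  const pkg = supported[key]
  if (!pkg) throw new Error(`unsupported platform: ${key}`)
  return pkg
}
```

Each of those per-platform packages would declare matching `"os"` and `"cpu"` fields in its package.json, so npm only installs the one that fits the host.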
One option that could simplify things is to follow what esbuild does: they do NOT provide C++ bindings, they spawn the process asynchronously, which is what https://github.com/JoakimCh/tdewolff-minify/blob/main/source/tdewolff-minify.js does and is definitely much easier. This also solves the multiplatform issue, because Go supports cross compiling, including for darwin arm64. So what I would do is change CI to compile for as many platforms as makes sense and deploy multiple packages to npm in one go. I can try to raise a PR if you think this approach would work :) let me know. Of course this would be based on @JoakimCh's package, and his comments or support would be really appreciated as well. Thank you.
Raised https://github.com/tdewolff/minify/pull/484; just adding arm64 as a target in the Makefile should be enough to allow people with M1 machines to run the minifier and install it via npm using Joakim's package for the time being. I will think about the C++ bindings.
I see what you mean with asynchronous calling. The bindings I was trying to create were actually meant to avoid that; the problem with asynchronous calling is that it is fairly slow hehe. Anyway, we need to be sure whether the C++ bindings are going to work, because if not we should just continue using the work of @JoakimCh.
I put together a very simple `rollup` integration using Joakim's package here: https://www.npmjs.com/package/rollup-plugin-tdewolff-minify
Looking forward to updating to using the C++ bindings when they become available!
I've been using https://github.com/marudor/libxmljs2, also publishes C++ bindings, maybe it'll be a good example how to get this done?
Hi @perrin4869 I've updated the binding a bit and it seems to work...:-S The build process for JS bindings is pretty complicated and I admit I don't really understand what happens, but I can now import the library and use it, see https://github.com/tdewolff/minify/blob/master/bindings/js/test/test.js
That is only for `node-v93-linux-x64-glibc`, as the other platforms are still missing (I'm not sure if I can build those on my computer?). Can you verify whether it works for you?
Hey, thanks for publishing!
I tried to migrate to using `@tdewolff/minify` here: https://github.com/dotcore64/rollup-plugin-tdewolff-minify/pull/2
It actually worked without problems on my computer locally (slackware64-current with glibc-2.35-x86_64-2), but CI fails on all Node versions:
```
Error: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/@tdewolff/minify/build/Release/obj.target/minify.node)
    at Object.Module._extensions..node (node:internal/modules/cjs/loader:1189:18)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Module.require (node:internal/modules/cjs/loader:1005:19)
    at require (node:internal/modules/cjs/helpers:102:18)
    at Object.<anonymous> (/home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/@tdewolff/minify/index.js:1:18)
    at Module._compile (node:internal/modules/cjs/loader:1105:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at ModuleWrap.<anonymous> (node:internal/modules/esm/translators:170:29)
    at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
    at async Promise.all (index 0)
    at async ESMLoader.import (node:internal/modules/esm/loader:385:24)
    at async importModuleDynamicallyWrapper (node:internal/vm/module:437:15)
    at async formattedImport (/home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/mocha/lib/nodejs/esm-utils.js:7:14)
    at async Object.exports.requireOrImport (/home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/mocha/lib/nodejs/esm-utils.js:38:28)
    at async Object.exports.loadFilesAsync (/home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/mocha/lib/nodejs/esm-utils.js:91:20)
    at async singleRun (/home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/mocha/lib/cli/run-helpers.js:125:3)
    at async Object.exports.handler (/home/runner/work/rollup-plugin-tdewolff-minify/rollup-plugin-tdewolff-minify/node_modules/mocha/lib/cli/run.js:370:5)
```
btw, slightly unrelated to this issue, but I was thinking that new Node.js projects might be better off using `esm` modules from the get-go? I could put together a PR for that if you would be interested.
Yes please! I need all the help I can get on this one ;-)
Oh, I tried rerunning the tests on the latest `ubuntu-22.04`, and it passed the tests on Node 16 but failed for Node 18 and Node 14. I guess it's a matter of making builds for the remaining Node versions and glibc versions; maybe we should look into setting up GitHub Actions to do that. I can try to look into it this weekend too :)
Yes, we'd need to add support for various architectures, Node versions, and glibc versions, but isn't that getting to be too many combinations? Do you think building from source could work on the remaining architectures?

GitHub Actions sounds nice; can it compile on different architectures? Or can we force node-gyp to compile for another architecture?
Two ideas:

1. Use conditions in `binding.gyp` on OS, architecture, GLIBC etc., pointing to different static library builds compiled by Go (i.e. the `minify.a` file, setting `GOOS` etc.).
2. Pass `--target_arch` etc. to `node-gyp`; I'm not sure if it works...

The `libxmljs2` project that I linked above seems to have automated the process; maybe it'll be enough to do whatever it is they are doing? There's also the `Node-API`, which seems to not require different builds for different Node versions?
Nice, it seems like we should be using the Node API instead of the V8 engine, I will work on moving the code towards that. Meanwhile, that document mentions using https://github.com/prebuild/prebuildify for building the various binaries for each architecture and upload them together with the NPM package (instead of uploading them in another location such as GitHub). This has my preference to avoid cluttering up the releases here on GitHub.
Anyway, what is the `nan` package that you included in a PR for?
Well, it contains a bunch of macros and helped in getting the current bindings to work inside worker threads, but I guess it won't be needed anymore if we move to `Node-API` :)
Just tried the new implementation on the rollup integration and all tests passed like a charm!
Thanks for the feedback @perrin4869! Great to hear it works. Does that mean you can build the bindings from source and they work for all Node versions? I suppose we still need to prebuild binaries (with https://github.com/prebuild/prebuildify) so that anyone can install it without needing Go installed. Am I right?
I'm creating a web server in Node.js, and at first I used a minifier written in JavaScript, which as you can guess had poor performance. Looking for something better, I found this project and esbuild. esbuild was easy to integrate into my project since it has an NPM package exposing a module with a Node.js API that I can import directly. All I have to do is `npm i esbuild` and then import it. I would rather use your minifier though, so it would be nice if that could be made possible in the future.