fathyb opened this issue 6 years ago
Thanks for opening this issue!
I'm going to detail my comment here.
First, I think that warnings would annoy most users, who would eventually silence them, which defeats the purpose of having warnings in the first place.
Especially since they are mostly doing safe stuff and we would be telling them that we are doing unsafe transformations on their code.
Contrary to what I said earlier, I'm now against that.
/*#__PURE__*/
: Babel and uglify (maybe babel/minify?) support the PURE annotation. Currently it seems to work out pretty well, but it would be very inconvenient for users to have to write it manually.
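For context, a minimal sketch of what the annotation buys (the `createLogger` factory is made up for illustration):

```js
// `createLogger` is a hypothetical side-effecting factory.
function createLogger() {
  console.log('logger created');
  return { log: console.log };
}

// Without the annotation, a minifier has to keep this call even though
// `unused` is never read: it can't prove the call is side-effect free.
const unused = createLogger();

// With the annotation, the author asserts purity, so a minifier that
// honours it (uglify does, per the above) may drop the whole assignment
// during dead-code elimination.
const alsoUnused = /*#__PURE__*/ createLogger();
```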
Google Closure Compiler
: one of the downsides IMO is the strong reliance on annotations, which is what enables its great optimizations. We have many cases in Babel where comments are misinterpreted, and I'm worried about introducing a footgun here.
Also, what would the API be?
```js
/** @babel:please leave this **/
({ foo: fn() }).foo
```
An annotation above the `fn` declaration would be better, I agree, but that would be very difficult for us to support across module boundaries, and then we have the issue that nobody uses our convention on npm.
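Something like this, where the declaration-level annotation syntax is invented for illustration:

```js
// math-utils.js (hypothetical package published on npm)
/** @__PURE__ */
export function fn() {
  return 42;
}

// app.js
// To safely simplify `({ foo: fn() }).foo` (or drop it when unused), the
// minifier would have to resolve `fn` back to its annotated declaration
// inside another package — the cross-module problem mentioned above.
import { fn } from 'math-utils';
const value = ({ foo: fn() }).foo;
```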
the GNU Compiler Collection
(this is my original example): I suggest introducing compilation flags that let you specify the aggressiveness/assumptions of our optimizations. Some optimizations that we thought were unsafe because the user could have overridden builtins would become possible with a flag, since it's opt-in (we can also emit a warning if we detect that it's really unsafe). See the sketch after the list below.
- `-O1`: basically just minification; we make no assumptions about builtins being overridden; code size is bigger.
- `-O2`: `-O1` + builtins aren't overridden, more aggressive optimizations; less code.
- `-O3`: `-O2` + relying on browser hacks or something; hacky but even less code.

In my comment I mentioned `-Os`, which means optimize for code size. It's not redundant with the previous options because they are general optimizations (size and speed), but in the context of minify only code size matters?
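To make the builtin assumption concrete, here is a sketch of the kind of fold a hypothetical `-O2`-style flag could allow (input and "output" written by hand, not actual babel-minify behaviour):

```js
// Input
const max = Math.max(1, 2);
const csv = ['a', 'b', 'c'].join(',');

// Hand-written "output": folding these at compile time is only valid if we
// may assume Math.max and Array.prototype.join still have their builtin
// behaviour at runtime, i.e. the user hasn't overridden them.
//   const max = 2;
//   const csv = 'a,b,c';
```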
the GNU Compiler Collection
(bonus): there's also an `-Ofast` flag, intended to produce fast code, not necessarily small code. I think that minify is not the right context for that (or has an incorrect name :smile:) but we could automatically add hidden classes: for example, when you assign a new class field we could initialize it AOT in the constructor (sketched below).

If I understand correctly we're currently always on `-O1` and other optimization levels are yet to be implemented?
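A sketch of the hidden-class idea (hand-written before/after, not an existing transform):

```js
// Before: `cache` is assigned after construction, so the object changes
// shape (hidden class) after `new Store()` returns.
class Store {
  constructor() {
    this.items = [];
  }
}

const store = new Store();
store.cache = null;

// After (what an -Ofast-style pass could emit): pre-initialize the field
// in the constructor so the shape is fixed up front.
//
//   constructor() {
//     this.items = [];
//     this.cache = null;
//   }
```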
A first step would be to use `/*#__PURE__*/` (unless we already do 😅) when possible (isn't this exposed by `NodePath.isPure`?). For example we currently turn `['foo', fn()][0]` into `'foo'`; it'd be better to only do this when we're sure `fn()` is pure.
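To spell out the hazard (illustrative snippet; `logVisit` is a stand-in for any side-effecting call):

```js
function logVisit() {
  console.log('visited'); // observable side effect
  return 'bar';
}

const first = ['foo', logVisit()][0];

// Folding the expression above straight to `const first = 'foo'` drops the
// console.log call. The fold is only safe when the call is known to be pure,
// e.g. when it is written as /*#__PURE__*/ logVisit() or proven pure.
```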
+1 for the `-Ofast` flag; when working on 3D or audio processing it's pretty useful to get rid of some abstraction tax without caring about size, like here:
```js
function identity(size) {
return {
3: [
1, 0, 0,
0, 1, 0,
0, 0, 1
],
4: [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
]
}[size]
}
function transposeMatrix(input, size = 3) {
var output = new Array(size * size)
for(let x = 0; x < size; x++)
for(let y = 0; y < size; y++)
output[x * size + y] = input[y * size + x]
return output
}
const transposed3 = transposeMatrix(identity(3))
const transposed4 = transposeMatrix(identity(4), 4)
```

which currently minifies to:

```js
function identity(a) {
return {
3: [1, 0, 0, 0, 1, 0, 0, 0, 1],
4: [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
}[a];
}
function transposeMatrix(a, b = 3) {
var c = Array(b * b);
for (let d = 0; d < b; d++)
for (let e = 0; e < b; e++) c[d * b + e] = a[e * b + d];
return c;
}
const transposed3 = transposeMatrix(identity(3)),
  transposed4 = transposeMatrix(identity(4), 4);
```

`-O3` + `-Ofast` (super-hypothetical):

- Pass 1, unroll the `for` loops:

```js
function identity_size_3() {
return [
1, 0, 0,
0, 1, 0,
0, 0, 1
]
}
function identity_size_4() {
return [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
]
}
function transposeMatrix_size_3(input) {
  return [
    input[0], input[3], input[6],
    input[1], input[4], input[7],
    input[2], input[5], input[8]
  ]
}
function transposeMatrix_size_4(input) {
  return [
    input[0], input[4], input[8],  input[12],
    input[1], input[5], input[9],  input[13],
    input[2], input[6], input[10], input[14],
    input[3], input[7], input[11], input[15]
  ]
}
const transposed3 = transposeMatrix_size_3(identity_size_3())
const transposed4 = transposeMatrix_size_4(identity_size_4())
```
- Pass 2, evaluate pure functions (turns out faster doesn't always mean bigger):
```js
const transposed3 = [
1, 0, 0,
0, 1, 0,
0, 0, 1
]
const transposed4 = [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
]
```
@fathyb Check out Prepack, which does pretty much what you’re asking for. Here’s an example of your code, lightly modified so that Prepack doesn’t remove all of it as dead code.
@j-f1 I thought of it while writing this; thanks though, that's relevant 👍
Prepack is really more advanced than what I need (and way too experimental for real-world use). From what I understand, it runs the code to serialize the heap and then deserializes it in the form of code (it doesn't simplify/minify, it partially executes and outputs the partial state). That's a complete ECMAScript interpreter.
For example Prepack evaluates `typeof something` to `'undefined'` unless you register an abstract value for `something`. I'd like `babel/minify` to ignore code it doesn't fully understand. I don't want a partial evaluator, just a performance-oriented option.
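As an illustration of that difference (a made-up feature-detection snippet, not actual Prepack output):

```js
function loadFetchPolyfill() {
  /* hypothetical polyfill loader */
}

// Typical feature detection:
if (typeof fetch === 'undefined') {
  loadFetchPolyfill();
}

// A whole-program partial evaluator that assumes `fetch` does not exist at
// runtime folds `typeof fetch` to 'undefined' and makes the polyfill branch
// unconditional, even in environments that do have fetch. The behaviour
// asked for here is the opposite: when the tool can't tell what `fetch`
// will be, leave the check alone.
```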
An example in the native world is LLVM: out of the box it has speed-based optimizations (it has `-Ofast`, and the code I wrote should be simplified to static arrays in certain cases); the Prepack equivalent would be LLPE.
Follow-up to the conversation at #814.
@xtuc wrote:
@fathyb wrote:
@vigneshshanmugam wrote:
cc @devongovett for new scope-hoisting in Parcel, related parcel-bundler/parcel#1104