trusktr opened this issue 3 years ago
No, currently it is in a separate repo because it will be rewritten from scratch.
For projects, how compatible is it with v1?
It will be a superset of the first version, so it will have backward compatibility.
Idea: make it compilable to WebAssembly! Is there an opportunity to work with @dcodeIO from AssemblyScript perhaps? Maybe Hegel can generate ASTs (in a way that doesn't have the downside of TypeScript transforms), and AssemblyScript can use them to map to its Wasm output?
@dcodeIO Do you have a list of all the downsides you experienced when trying to make AssemblyScript on TS transforms?
@CrazyPython It very much depends on the task. See some other examples: https://github.com/acutmore/type-off https://twitter.com/matthewcp/status/1390013392950284291 https://github.com/MaxGraey/as-string-sink#benchmark-results
By the way, the issue with the growing array has already been fixed.
AssemblyScript influences you to write faster code, but you can write the same fast code in JavaScript, if you know what to avoid.
> but you can write the same fast code in JavaScript, if you know what to avoid.
That's not really true. Of course you could do something like this in theory. Or you could use d8/Chrome with `--trace-deopt` to figure out de-optimization paths and polymorphic code, but this is really hard. Also, as I mentioned, some runtime (builtin) methods/classes, like string concatenation, can't be sped up at all.
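To make the polymorphism point concrete, here is a small illustrative sketch (the function and data are hypothetical, not from this thread) of the kind of shape-related slowdown that `--trace-deopt` can surface:

```javascript
// Objects created with the same property order share a hidden class,
// so the call site inside `magnitude` stays monomorphic and fast.
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Monomorphic: every object has the shape {x, y}.
const points = [];
for (let i = 0; i < 1000; i++) points.push({ x: i, y: i + 1 });

// Mixing shapes at the same call site makes it polymorphic and forces
// the engine to check multiple hidden classes:
// points.push({ y: 0, x: 0 }); // <- would de-optimize `magnitude`

let total = 0;
for (const p of points) total += magnitude(p);
```

Running d8 with `--trace-deopt` (or `--trace-ic`) on code like this shows when a call site transitions from monomorphic to polymorphic, which is exactly the kind of analysis being described as hard to do by hand.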
And last but not least, WebAssembly will support SIMD soon (after Chrome 91, FF 89). JavaScript will never support SIMD; SIMD.js was frozen.
Thanks for the links.
WASM and JS use the same optimizing backend. You can manipulate both to get similar results.
> Also as I mentioned some runtimes (builtins) methods / classes like string concatenation you can't speedup at all.
My guess is this JS code is just as fast. Benchmark it on V8 and/or JSC, two engines whose optimizing compilers can inline and interleave generators into other code.
function* toList(arr) {
  for (let i = 0; i < arr.length; i++) {
    yield* arr[i];
  }
}
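For reference, a self-contained runnable version of that snippet (note the `function*` keyword, which `yield*` requires) with a quick usage check:

```javascript
// Flattening generator: delegates to each inner iterable in turn.
function* toList(arr) {
  for (let i = 0; i < arr.length; i++) {
    yield* arr[i];
  }
}

// Spreading the generator flattens one level of nesting.
const flat = [...toList([[1, 2], [3], [4, 5]])];
// flat is [1, 2, 3, 4, 5]
```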
Either way, if you want a fast compiler, it comes from writing a character-by-character streaming compiler and avoiding IRs and complex data structures whenever possible. You can do that in any language.
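As an illustration of that streaming style, here is a minimal single-pass tokenizer sketch that scans the input character by character, allocating only small token records and no intermediate IR (the structure and names are illustrative, not any real compiler's implementation):

```javascript
// One pass over the source string; no lookahead buffer, no IR.
function tokenize(src) {
  const tokens = [];
  let i = 0;
  while (i < src.length) {
    const c = src[i];
    if (c === " " || c === "\n" || c === "\t") { i++; continue; } // skip whitespace
    if (c >= "0" && c <= "9") {
      const start = i;
      while (i < src.length && src[i] >= "0" && src[i] <= "9") i++;
      tokens.push({ kind: "number", text: src.slice(start, i) });
      continue;
    }
    if (/[A-Za-z_]/.test(c)) {
      const start = i;
      while (i < src.length && /[A-Za-z0-9_]/.test(src[i])) i++;
      tokens.push({ kind: "ident", text: src.slice(start, i) });
      continue;
    }
    tokens.push({ kind: "punct", text: c }); // single-character punctuation
    i++;
  }
  return tokens;
}
```

The same shape works in any language; the point is that the hot loop touches each character once and builds only flat records.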
> WASM and JS use the same optimizing backend. You can manipulate both to get similar results.
Yes and no. First of all, JS doesn't have some operations like `popcount`, `ctz`, `rotl`, `rotr`, or per-bit reinterpretation of int to float and vice versa; see how this affects performance. Also, TurboFan has special optimizations for the wasm backend which are not yet enabled, like this one: https://bugs.chromium.org/p/v8/issues/detail?id=11298&q=unroll&can=2. And it still doesn't apply some important optimizations for i64/u64: https://bugs.chromium.org/p/v8/issues/detail?id=11085&q=wasm%20optimize%20i64&can=2
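To illustrate the gap: WebAssembly lowers each of these to a single instruction (`i32.popcnt`, `i32.ctz`, `i32.rotl`), while in JS they must be emulated with bit tricks. A sketch of the usual emulations (this shows the source-level cost only, not any claim about a particular engine's codegen):

```javascript
// Population count via the SWAR (parallel bit-counting) trick.
function popcount32(n) {
  n = n - ((n >>> 1) & 0x55555555);
  n = (n & 0x33333333) + ((n >>> 2) & 0x33333333);
  n = (n + (n >>> 4)) & 0x0f0f0f0f;
  return (n * 0x01010101) >>> 24;
}

// Count trailing zeros via Math.clz32 on the lowest set bit.
function ctz32(n) {
  return n === 0 ? 32 : 31 - Math.clz32(n & -n);
}

// 32-bit rotate left out of two shifts and an OR.
function rotl32(n, k) {
  return ((n << k) | (n >>> (32 - k))) >>> 0;
}
```

Each of these costs several ALU operations per call in JS, versus one machine instruction when the wasm backend can map the opcode directly to hardware (e.g. x86 `POPCNT`).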
Also, the JS compiler has a fairly limited time budget for sophisticated optimizations on its graph IR, while LLVM/Binaryen can spend much more time on advanced analysis and optimizations.
> Or you could use d8/chrome with --trace-deopt to figure outing de-optimizations paths and polymorphic code but this really hard.
Not really. It's just a simple matter of benchmarking different approaches and finding the right one.
> First of all JS hasn't some operations like popcount
TurboFan can nonetheless generate it.
--enable-popcnt (enable use of POPCNT instruction if available)
type: bool default: true
I doubt the examples you named (loop unrolling, popcount) are common at all for a typechecker like Hegel.
Some optimizations are JS-only and would require manual code in WASM.
Seems like that discussion got really off-topic. @JSMonk How long do you anticipate until a v2 beta?
Hi @texastoland, I planned to implement the MVP in September/October of this year.
> I planned to implement the MVP in September/October of this year.
How is progress? It's almost October now. When will v2.0 be released?
@JSMonk started working on Kotlin (congrats!) so probably delayed ⏳
Hi @hazelTheParrot. Sure, I started working on it. I'm not sure about October, but we will present the MVP soon, along with some pretty cool news.
I don't mean to rush you, and I know estimates are hard. Do you think it'll be sometime this year?
I'm working hard to finish the MVP this year, but I'm not sure.
👀🍿
I'd like to start learning, and if there's a v2 branch, I'd like to check it out.
This issue may be better off in Discussions.