Open GarboMuffin opened 4 years ago
Consider AssemblyScript as a target. It is TypeScript compiled to WebAssembly https://www.assemblyscript.org/
Another possible thing to consider is C, C++, or Rust, because they are popular with WebAssembly and receive the most attention when it comes to development, leading to WebAssembly enhancements and better documentation.
In the context of this project we would have to generate WASM bytecode dynamically at runtime. Adding a dependency on rustc at runtime is not viable for us.
That might take a long time to compile, especially for the large projects that TurboWarp is known for.
I had a great idea
What if you prepared pre-compiled webassembly functions for each block?
And then simply embed them using the WebAssembly API?
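A minimal sketch of that idea, using the JS WebAssembly API: the bytes below are a hand-assembled module exporting a single `add(i32, i32)` function, standing in for a pre-built `blocks.wasm` shipped ahead of time. The `operator_add` name and the dispatch-table shape are hypothetical, just for illustration.

```javascript
// Hand-assembled WASM module exporting add(i32, i32) -> i32.
// A real build would ship a blocks.wasm produced ahead of time instead.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// Hypothetical per-block dispatch table the generated JS could call into.
const blockImpls = { operator_add: exports.add };
console.log(blockImpls.operator_add(2, 3)); // 5
```

Note that every call through `blockImpls` crosses the JS↔WASM boundary, which is exactly the interop cost discussed below.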
Compiling to WASM is entirely theoretical at this point, and it's not clear whether it would actually be any faster: WASM <-> JavaScript interop is not free, and WebAssembly is not a magic cure for all performance issues. Browsers really don't want websites to synchronously compile WASM, which is a real problem for an app like TurboWarp that dynamically generates code at runtime and has to execute it immediately or else projects misbehave.
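To make the synchronous-compilation problem concrete: the WebAssembly JS API offers both an async path (what browsers want pages to use) and a sync path. As of writing, Chrome rejects `new WebAssembly.Module()` on the main thread for buffers larger than roughly 4 KB, and the sync path is exactly what a runtime code generator that must execute code immediately would need. Sketch using the smallest valid (empty) module:

```javascript
// The 8-byte header ("\0asm" + version 1) is the smallest valid WASM module.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
]);

// Async compile: the path browsers prefer, but it yields a Promise,
// so generated code cannot run in the same tick it was produced.
WebAssembly.compile(emptyModule).then((mod) => {
  console.log(mod instanceof WebAssembly.Module); // true
});

// Sync compile: runs immediately, but is restricted on browser main
// threads (e.g. Chrome's ~4 KB buffer limit) precisely to discourage this.
const mod = new WebAssembly.Module(emptyModule);
console.log(mod instanceof WebAssembly.Module); // true
```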
The JavaScript that TurboWarp generates is still significantly slower than the same code written by a human. We have a ways to go before WASM will be necessary to go faster.
That is what hand-optimization is for, according to the article you linked. Satisfying the user takes hours and hours of hard work, because users don't know that real-world coding is a lot harder than snapping blocks together.
The easiest solution is creating TurboWarp's own bytecode and then running it inside a VM/engine implemented in WebAssembly (C/C++).
It would be faster than compiling WebAssembly modules dynamically or converting to JS. Execution should also be faster than the current "compiled" JS code.
For those who don't know, WebAssembly modules are not interpreted, they are compiled AOT to machine code before execution.
So a TurboWarp VM/engine should be a practical idea.
I am currently working on a prototype...
@GarboMuffin
A bytecode interpreter in WASM will not necessarily be faster than letting the browser's JavaScript JIT figure out how to optimize our scripts
Because JavaScript runtimes aren't really standardized, and their behavior changes depending on the runtime and the execution context.
There is so much entropy
JS is a dynamic, object-oriented language. In a custom bytecode we can have guarantees that we cannot have in JS code.
WASM is very efficient, near-native, and sandboxed.
As a personal opinion, I think the current Scratch->JS compiler is still very inefficient.
Everything you said is correct, but it doesn't prove that a bytecode interpreter in WASM will outperform our JavaScript, as interpreting bytecode has non-trivial performance overhead.
I am curious how well it performs. Do let me know when you have a working prototype to benchmark.
A bytecode interpreter in WASM will not necessarily be faster than letting the browser's JavaScript JIT figure out how to optimize our scripts
It will be faster if well designed
Compiling WASM modules at runtime is not practical because of the specifics of the binary format. The encoding of WASM modules is verbose and inefficient for code generation.
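One concrete source of that friction: in the WASM binary format, every section and every function body is length-prefixed with variable-width LEB128 integers, so a code generator doesn't know the prefix bytes until it has already produced the inner bytes. A minimal unsigned LEB128 encoder (the standard algorithm) shows the variable-width encoding:

```javascript
// Encode a non-negative integer as unsigned LEB128: 7 data bits per byte,
// high bit set on every byte except the last.
function encodeULEB128(value) {
  const bytes = [];
  do {
    let byte = value & 0x7f;
    value >>>= 7;
    if (value !== 0) byte |= 0x80; // more bytes follow
    bytes.push(byte);
  } while (value !== 0);
  return bytes;
}

console.log(encodeULEB128(8));   // [8]
console.log(encodeULEB128(624)); // [240, 4]
```

Because the encoded length of a size itself varies, emitters typically either buffer each section before writing its header or patch sizes in afterward.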
The only alternative is creating your own VM inside WASM
Everything you said is correct, but it doesn't prove that a bytecode interpreter in WASM will outperform our JavaScript, as interpreting bytecode has non-trivial performance overhead.
I am curious how well it performs. Do let me know when you have a working prototype to benchmark.
Sure!
Any updates on this?
It may be possible to compile a subset of Scratch blocks to WebAssembly for performance.