paulshapiro opened this issue 3 years ago
There was some prior discussion about WASM support in this thread.
Thanks - I see... but I think it's clear there are many strong cases for it. It's almost reducible to arguing for why we want WASM at all. Can we re-open discussions somehow?
At the very least, it would obviate many cases necessitating bridging to a native module, not to mention the redundant bridge-code implementation and maintenance.
@tmikov Note: I'm a total noob in WASM, and everything I've written below may be nonsense.
One use case I'm dreaming of is the ability to write database extensions in JavaScript.
As SQLite is an embedded database, it would be awesome to write some function in pure JavaScript which can be compiled to WASM.
Later, a driver written with JSI could (assuming JSI and WASM can interact, and in a high-performance way) consume and run that WASM with native performance (???) (with help of https://www.sqlite.org/c3ref/create_function.html), returning to JavaScript only the filtered subset of data.
How performant would it be to call a JavaScript function as an application-defined SQL function through JSI?
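(For illustration, the SQLite half of that idea could look roughly like the sketch below; the JS/WASM callback here is a hypothetical stub, not an existing Hermes or JSI API.)

```cpp
// Sketch only: registering an application-defined SQL function
// (https://www.sqlite.org/c3ref/create_function.html). In the idea above,
// the body of myJsBackedUdf would forward into a JS/WASM function via JSI;
// that wiring is hypothetical, so a placeholder result is returned instead.
#include <sqlite3.h>

static void myJsBackedUdf(sqlite3_context *ctx, int argc, sqlite3_value **argv) {
  // Hypothetical: convert argv to JS values, call the JS/WASM function
  // through JSI, convert the result back, then report it to SQLite.
  sqlite3_result_int(ctx, 42); // placeholder
}

int registerUdf(sqlite3 *db) {
  // nArg = 1: the function takes one argument; SQLITE_DETERMINISTIC lets
  // SQLite cache results for identical inputs.
  return sqlite3_create_function(db, "my_js_udf", 1,
                                 SQLITE_UTF8 | SQLITE_DETERMINISTIC, nullptr,
                                 myJsBackedUdf, nullptr, nullptr);
}
```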
@tmikov I see, thanks. I still think there are strong arguments and use-cases for it, and it would bring the engine closer to feature parity with e.g. JSC. What would it take for me to implement WASM support in Hermes?
Getting WASM support would be awesome. Has anyone got a solution?
I am curious: why would you prefer Wasm, when you can ship actual natively compiled C++ code with your RN app?
@tmikov how can I do that? I've seen native modules, but only with Java. Can you give any guide or reference?
@ShivamJoker hmm, that is a good question. I know that it is possible and it is not technically difficult - after all, RN has C++ code, so it is doing exactly that already. Basically you use `jsi::Function::createFromHostFunction()` to create instances of native functions callable from JS. Then you register these functions as properties in a JS object, even in the global object, so JS can call them. That's it.
In terms of a specific how-to (how to actually set up your RN project for that), I can't really help you; that is more of a question for React Native. It is not Hermes specific.
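A minimal sketch of that, assuming you already have a `jsi::Runtime` in hand (the `add` function is purely illustrative):

```cpp
#include <jsi/jsi.h>

using namespace facebook;

// Expose a native "add" function to JS, so that global.add(2, 3) === 5.
void installAdd(jsi::Runtime &rt) {
  auto add = jsi::Function::createFromHostFunction(
      rt, jsi::PropNameID::forAscii(rt, "add"), 2 /* paramCount */,
      [](jsi::Runtime &rt, const jsi::Value & /*thisVal*/,
         const jsi::Value *args, size_t count) -> jsi::Value {
        // asNumber() throws if the argument is not a number.
        return jsi::Value(args[0].asNumber() + args[1].asNumber());
      });
  // Register it on the global object so any JS code can call it.
  rt.global().setProperty(rt, "add", std::move(add));
}
```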
> I am curious: why would you prefer Wasm, when you can ship actual natively compiled C++ code with your RN app?
react-native-web compatibility for one.
@tmikov We can use a local database like SQLite or GlueSQL (https://github.com/gluesql/gluesql): compile it to WASM and continue working with a full-blown SQL database with all the features.
FWIW, we have added initial support for Wasm encoded as Asm.js. You can use `wasm2js` to transform your Wasm module to Asm.js and compile and run the result with Hermes. That was possible even before, but the latest Hermes can optionally recognize the Asm.js intrinsics and optimize them, resulting in about a 5x speed-up.
It is still very early, highly experimental, and definitely unsupported, but it does work. It requires a custom build of the latest Hermes with the `HERMES_RUN_WASM` flag, as well as `-funsafe-intrinsics` when invoking the compiler. For now it can only be used to run trusted code.
A small C/C++ example:
```c
unsigned add(const unsigned *arr, unsigned count) {
    unsigned res = 0;
    while (count--)
        res += *arr++;
    return res;
}
```
Previously the inner loop was compiled very naively like this:
```
L2:
LoadFromEnvironment r9, r2, 3
RShift r8, r4, r7
GetByVal r8, r9, r8
ToInt32 r8, r8
AddN r8, r8, r1
ToInt32 r1, r8
AddN r9, r4, r6
ToInt32 r4, r9
SubN r9, r3, r5
ToInt32 r3, r9
Mov r0, r1
JmpTrue L2, r3
```
With `-funsafe-intrinsics` it looks much better:
```
L2:
LoadFromEnvironment r7, r2, 3
Loadi32 r7, r7, r4
Add32 r1, r7, r1
Add32 r4, r4, r6
Sub32 r3, r3, r5
Mov r0, r1
JmpTrue L2, r3
```
Lots more optimizations are possible, but as I said, this is very early work.
@tmikov What sort of shape is wasm support in at this point? Does wasm2js still require a custom build of Hermes? Are things likely to be unstable if we try it?
@evelant there has been no change since my last post. The tentative plan is to get an intern to work on the next phase (Wasm bytecode support), but I haven't gotten one yet.
We really need WASM support in RN when using a universal cross-platform bundler like Taro or RN-web. We are building cross-platform (web / native / mini-program) apps that depend on trusted WASM machines to work.
@tmikov Are the optimizations for asm.js still experimental per the above or have they been included in more recent versions of Hermes?
@tmikov point me in the right direction with your tentative plan and I'll see if my progress on a related implementation can make a dent
Sincere question: what is the advantage of using WASM in a RN app where actual native C++ is available? WASM makes a lot of sense in a browser, but a RN app by definition ships a lot of native code already.
> Sincere question: what is the advantage of using WASM in a RN app where actual native C++ is available? WASM makes a lot of sense in a browser, but a RN app by definition ships a lot of native code already.
just refer to my comment above, as we don't want to bind our native code again just for the RN side.
> Sincere question: what is the advantage of using WASM in a RN app where actual native C++ is available? WASM makes a lot of sense in a browser, but a RN app by definition ships a lot of native code already.
It is a valid question, and native C++ bindings via JSI and TurboModules are a very thorough implementation of an interface that will enhance existing libraries in the Native Module ecosystem. We will all benefit from partners and contributors who have the capacity and context necessary to provide refactored native modules that incorporate the new architecture. However, I think there is a slightly disincentivising barrier to entry in developing a native module with C++ to implement an optimization task that is trivial but computationally expensive.
WASM technology is mature, and there are some great binaries that are helping React developers ship quality near-native web applications. As a result, they are empowered to experiment with lower-level languages like Go and Rust, which have less intimidating learning curves and implementation taxes.
I do not think you are making the wrong assumption, and I do believe that native C++ bindings are the only way to create a native module which can benefit from all your hard work and everything the new architecture has to offer.
I really believe that if the vision is to create an ecosystem in which react developers can confidently deliver solutions across all platforms, it is worthwhile to take WASM support as a strategic step toward creating a pathway that encourages further exploration into the new native module architecture.
Keep in mind that WASM doesn't automatically provide near-native performance. WASM is more or less a binary encoding of a subset of JavaScript. There is nothing that magically makes it fast.
Obtaining near-native performance from WASM requires a sophisticated ahead-of-time compiler or a sophisticated JIT. However, unlike other JS VMs, Hermes doesn't already implement a JavaScript JIT on which to base its WASM JIT. One of the main goals of Hermes was to develop a minimalistic JS runtime, with very fast startup time and minimal memory, under the assumption that UI is not very CPU intensive, precluding a JIT. As with everything else, it is a trade-off.
So, near-native WASM performance in Hermes would require adding a new advanced JIT. Such a JIT (for example based on LLVM) could easily rival the rest of Hermes in size and complexity. This is certainly technically possible; however, we would be doubling the size and complexity of the VM, while also adding significant memory and latency overhead (compiling WASM to optimized native code takes a significant amount of time and memory), only to achieve near-native performance. At the same time, React Native can already use fully native C++ performance. It is simply not a very constructive investment.
It is not all bad. The approach to WASM on which we have been slowly working, and which I have described in prior posts in this task, would not double the size of Hermes and would not add significant memory overhead. However, it does not approach native performance, since it is still running interpreted bytecode. Yes, up to 5x faster than JS in Hermes, but probably still at least 10x slower than native. Again, it is all about trade-offs.
We also have plans to go one step further and implement a minimalistic baseline JIT for the WASM-flavoured Hermes bytecode. "Baseline" means that the JIT doesn't perform optimizations, it simply eliminates the interpreter dispatch overhead. It results in up to 2x speed up, which is still far from "near-native", but it is small, low latency, and low memory.
There is one extra point - a JIT is not allowed on iOS, so WASM will never execute with near-native performance there, no matter what JIT Hermes has.
So, given that:

- It will not approach near-native performance
- It will never run fast on iOS (at least until Apple changes their security policies)

a native module using JSI remains the best option if performance really matters.
Thank you very much for the clarification. Like most others, I thought that WASM was fast because of WASM itself (its "magic"), not because of a JIT. Now I understand that it is V8 that made it fast.
I second this. It's always great to get a better understanding of the architectural decisions behind the journey to JSI and the New Architecture.
I ran into CXX (safe interop between Rust and C++) yesterday. Haven't found time to play with it, but it could open the door to some level of Rust support for Native Modules.
The high profile implementations of Wasm have been getting more and more complex. For example, Mozilla had to implement multi-level JIT for Wasm, in order to balance fast startup with high performance. In other words, they use the base level JIT initially to get things running right away, albeit slower, then later optimize important functions with a higher level JIT. This is what JS engines have been doing with JavaScript for a long time and is precisely the thing Hermes was built to avoid. So, it is ironic that we are coming back to it, but this time from the Wasm angle.
We will keep working on supporting Wasm. I believe we can go a long way with our minimalistic approach and make performance acceptable in most cases, especially on Android (with a baseline JIT). But admittedly, it is not a high priority. I am still waiting for an intern to work on the next stage.
One interesting idea to think about: perhaps it might be possible to provide build scripts, workflows and APIs in Hermes to make using native code look similar to using Wasm.
> One interesting idea to think about: perhaps it might be possible to provide build scripts, workflows and APIs in Hermes to make using native code look similar to using Wasm.
That would be great! I think the desire for wasm support probably boils down to the use case -- at least in my case as a react-native developer I want an easy and seamless way to write high performance cross platform code that runs outside of the main react-native JS thread.
AFAIK the only way to do that now is to manually compile some code for multiple platforms and manually write bridging code to RN - definitely not seamless or easy for most people. Wasm support seems like it would make it much easier: just target wasm from pretty much any language and include your module.
Workflows that make using native code in popular languages (in particular, I'm interested in Rust) as easy as using wasm would be a nice solution too, although it might not be as portable as wasm if people need to run the code, for example, in react-native-web.
> One interesting idea to think about: perhaps it might be possible to provide build scripts, workflows and APIs in Hermes to make using native code look similar to using Wasm.
Something like AssemblyScript would be amazing for an integration along the lines of the idea you've proposed, especially for simplifying typings between the JavaScript component declarations and their usage in native code.
Another idea -- rather than adding wasm support to Hermes, with all the complexity that seems to entail, might it make more sense to integrate an existing wasm runtime like wasmer? https://github.com/wasmerio/wasmer
I'm not sure if that suggestion makes sense in this context however, I don't know enough about the native side of RN to know if such a thing is feasible.
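For what it's worth, wasmer (like several standalone runtimes) implements the standard Wasm C API, so the embedding side could look roughly like the sketch below; the hard, unanswered part is the glue between such a runtime and JSI:

```cpp
// Rough sketch of embedding a standalone Wasm runtime through the standard
// Wasm C API (which wasmer implements). Error handling is omitted.
#include <wasm.h>

void runModule(wasm_byte_vec_t *binary) { // binary: raw .wasm bytes
  wasm_engine_t *engine = wasm_engine_new();
  wasm_store_t *store = wasm_store_new(engine);
  wasm_module_t *module = wasm_module_new(store, binary);

  wasm_extern_vec_t imports = WASM_EMPTY_VEC; // module with no imports
  wasm_instance_t *instance =
      wasm_instance_new(store, module, &imports, nullptr);

  // ... enumerate wasm_instance_exports(), call the exported functions,
  // and (the open design question) surface the results to JS via JSI.

  wasm_instance_delete(instance);
  wasm_module_delete(module);
  wasm_store_delete(store);
  wasm_engine_delete(engine);
}
```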
> FWIW, we have added initial support for Wasm encoded as Asm.js. [...]
@tmikov do you have an example repo for this? would I need a hermes fork or just a patch?
@gabimoncha this is part of Hermes but is controlled by the HERMES_RUN_WASM build flag, which is disabled in the default build.
Hermes builds from source by default, at least on iOS, doesn't it? Is there an easy way to set that flag so I might try it, @tmikov?
Sorry, I am not familiar with the build process for mobile devices. This is the setting - you can probably change it to default ON and then run the regular build. https://github.com/facebook/hermes/blob/c71cf73e6736f62d2cfbc90451ce8287034113ea/CMakeLists.txt#L231 (Note that this is not yet a supported configuration)
> a JIT is not allowed on iOS, so WASM will never execute with near-native performance
GraalJS has SOTA performance and is able to output AOT native binaries with performance comparable to a regular JIT, without having one.
If Facebook actually cares and is more interested in improving end-user performance than in favoring its own code, then Facebook should benchmark and invest in GraalJS.
@LifeIsStrange to be clear, are you suggesting that Facebook should drop Hermes and replace it with GraalJS?
@jpporto @neildhar friendly pings
I am suggesting Facebook should at least 1) add support for the best JS engines and 2) ideally co-develop them. GraalJS stands out for its AOT native binary support, its transparent WASM and Swift interop, and most importantly a VM whose performance can become competitive with Java (not JavaScript), since it reuses the same infrastructure, and which could therefore become (once it gets some human resources) the JS VM with the highest throughput. Dropping Hermes is not needed for goal 1) but is likely needed for goal 2).

I personally don't believe in Hermes. Sure, it has fast app startup time, but in many benchmarks it has uncompetitive throughput (and doesn't have a JIT mode), being on some benchmarks 30 times slower, not even talking about the many missing features and spec-correctness issues. Globally, Hermes is underfunded; for some perspective, it has 3,700 commits while V8 has 76,000. I hear talk about making it the default, but that seems completely delusional. GraalJS already has 57K commits and reuses the GraalVM/JVMCI infrastructure and GCs.

Joining forces with Oracle would be a smart move for end users and would have strong potential of being actually faster than V8, not just for mere startup time. Now that would be disruptive and innovative! And we could have a legitimate discussion for once about being better than web/Ionic. Because yes, in case you didn't know, Ionic on iOS runs JSC with JIT mode while React Native cannot, and therefore RN has subpar JS performance.

GraalVM is revolutionary: despite its limited funding, it has in a few years produced the fastest R implementation on Earth AND allowed a single university student to significantly outperform the official (and Facebook's) PHP implementations (https://github.com/abertschi/graalphp/blob/master/results.md), and it's only the beginning.
@LifeIsStrange do you have a reference showing the results of GraalJS AOT compilation, performance, binary size?
@LifeIsStrange as a heads up - it's impolite and bad GH etiquette to @-tag a large list of contributors to pull them into a thread.
Not everyone has context or is actively involved in the workstream. Please just comment on the thread and the relevant people will be watching and will respond, or they will loop in the right people.
As I have shown, GraalVM as a platform easily allows obtaining state-of-the-art performance. It significantly outperforms the reference implementations of R, PHP, and Ruby. And, as said, it enables for the first time transparent interop with other languages (low-cost, automatic, high-level FFIs). When used for Java or Kotlin, e.g. on this exhaustive serialization benchmark suite, you can see that overall it outperforms OpenJDK.
This is where the hard part is: making a state-of-the-art backend, AKA GraalVM. Now, GraalJS has focused on latest-spec JS support (ECMAScript 2022) and correctness. For the frontend, they have not yet hugely focused on optimizing its output for the backend. Despite this, the GraalJS JIT is already quite competitive with the V8 JIT in general (although extensive benchmarking is yet to be done); however, it takes more time for the JIT to warm up (but they are working on it). As for the AOT mode, in general it is task dependent: in some tasks it can outperform the JIT, and it has zero warmup, so it's a tradeoff. Now that GraalJS has finally reached good JS support, they will probably be able to optimize the frontend in the next few years to outperform V8 regarding throughput, just as has already been done for R, Ruby, PHP, and Java/Kotlin. If Facebook gave the project some human resources, it would significantly increase the project's velocity and momentum/recognition. Then comes a virtuous cycle: every frontend implementation contributes optimizations to the common backend, such as the world-record sub-millisecond max garbage-collection pause time with ZGC.
@LifeIsStrange GraalVM is a well known project, it is very impressive, so naturally most people in the industry, especially those working in the compiler and interpreter space, are already well aware of it. I personally have had the privilege of meeting one of the authors at a TC39 meeting.
You suggested that GraalJS might be suitable for mobile devices via AOT compilation to native code. I already have some superficial understanding of GraalVM and of the challenges associated with AOT compilation of JavaScript. I am having difficulty reconciling my, admittedly limited, understanding with your suggestion, that's why I am asking for examples.
Leaving Hermes aside for a moment, I am also having difficulty understanding why using GraalJS is preferable to using v8, for example, as a standalone JSVM (obviously GraalJS has advantages as part of a larger Java ecosystem, but those do not really come into play on mobile devices).
> I am having difficulty reconciling my, admittedly limited, understanding with your suggestion, that's why I am asking for examples.
Oops, my bad, it was a misconception on my part: it seems GraalJS does not (currently) support AOT generation of a binary or bytecode. It supports either interpreted mode, tiered compilation, or full JIT mode. So by AOT I was actually referring to interpreted mode. Note that Hermes is AOT (for bytecode), however. Of course JIT is preferable to interpreted or AOT, but iOS only allows that for WKWebView's JSC. However, the European Parliament recently made WebKit exclusivity and other Apple abuses illegal (starting now?), so that might change for the European market. The U.S. currently has a similar law (no idea if it is close to being voted on).
> why using GraalJS is preferable to using v8
Polyglot interop, like NativeScript's, would allow better integration with native OS plugins, or maybe even the ability to write plugins in pure JS. The polyglotism also gives React Native apps access to billion-dollar library ecosystems (Java, etc.), hence enabling libraries that solve needs previously unsolved (in any specific domain, including e.g. natural language processing) and running those libraries with much faster performance than JS (Java is ~5-6 times faster).
> understanding why using GraalJS is preferable to using v8
Besides this point, I believe (claim) that current GraalJS with JIT (so, for Android) can already give improved performance for some workloads/apps, which is a value proposition in itself. However, as of now, for the average app it will be slower, hence it should not be made the default.
However, in the long term, if given enough human resources, I believe (as shown for other languages) GraalJS has the potential to significantly outperform V8's throughput, hence making React Native apps faster than the web. The reason is that OpenJDK and GraalVM are the fastest VMs on Earth, as can be seen e.g. in the benchmarks game; the only VM that outperforms Java/Kotlin is C#/.NET. Sure, JavaScript is not Java and hence cannot reach similar performance, but by leveraging the same SOTA VM, it has the potential to make V8 obsolete.
Hello,
I'm a JavaScript developer and am trying to wrap my head around what running Hermes in WASM would mean. For my project I need a secure sandbox for JavaScript (or anything it can be compiled into) to run in the browser. Specifically, I need the ability to plug into the fetch API (and other APIs that do network requests) and redirect (or block) network requests, and to plug into the DOM API (for the same reason of finding all GET requests and form targets). Once Hermes runs in WASM, how hard would it be to hook into those APIs?
:)
@artem-v-shamsutdinov I think you are referring to the ability to compile Hermes itself to WASM. That has been working for a long time now, you can try it here: https://hermesengine.dev/playground/ (if you remove the compilation options, it will execute the JavaScript source).
How difficult it would be to hook the APIs you mention is hard to say - we as a team actually know very little about the browser APIs.
Thank you! I tried:
```js
class A {
  doSomething() {
    print("hello world")
  }
}
new A().doSomething()
```
And got:
```
/tmp/hermes-input.js:1:1: error: invalid statement encountered.
class A {
^~~~~
Emitted 1 errors. exiting.
```
Nevermind, found my answer on:
https://hermesengine.dev/docs/language-features
Classes are not yet supported. Again, thank you!
bump
@tmikov Just wanted to check in here -- are there still plans for WebAssembly support in Hermes? Any movement beyond "it is not a high priority. I am still waiting for an intern to work on the next stage."? Any sort of time frame?
@evelant I am sorry, but unfortunately I don't have any good news to share.
Just checking in again, any support for workers/wasm in hermes would be amazing. We've got a highly complex app and it really struggles to perform adequately on Android especially. If we could offload work to the background without having to write platform specific native code that would be wonderful.
Shipping Wasm bytecode with a RN app is not very efficient, since you will be paying the cost of compilation of Wasm on device, which can be pretty expensive, takes memory, and has worse performance than native. You could compile to native whatever you are compiling to Wasm in the first place. If it can be compiled to Wasm, then it doesn't have platform dependencies and would likely work on all platforms. Wasm is technically an unnecessary intermediary in this case.
With that said, we realize that it might make sense for Hermes to provide a solution for compiling Wasm to native as part of the build step, and packaging it to work functionally equivalently to shipping Wasm bytecode at runtime.
That would be great. I think the core of the problem is that there's no easy way to compile and consume cross-platform native code in react-native. You either have to write JSI bindings or per-platform bridging code, neither of which is particularly easy. I was focusing on wasm not specifically because of wasm, but because it can be an easy compile target for many languages and is easy to consume from JS. However that could be accomplished, wasm or not, it would be a great boon to apps like ours that bump into the performance limitations of single-threaded JS.
Entirely separately, I have a suspicion that a lot of our performance problems on Android may stem from sub-optimal Proxy performance in Hermes, since we rely heavily on proxies for reactive UI updates. I have no idea how to test that theory, however. All I know is that the app is 4x or more faster on V8 than on Hermes on Android, but unfortunately react-native-v8 isn't stable enough for us to use at the moment.
Problem
At present it does not look like Hermes has support for running WASM via a `global.WebAssembly`.

Solution
Would love to see an option for doing essentially the inverse of this:
https://github.com/react-native-community/jsc-android-buildscripts/blame/4399436b9263992b16070559b81f02fbaa5af67e/scripts/compile/jsc.sh#L58
Additional Context
JS-only polyfills bringing WebAssembly support into the runtime are bound to be slow. So, to support WASM on Android, I was thinking of either a custom-built JSC (but that's not very future-proof, as we're switching to Hermes eventually), using Hermes with some way to enable WASM, or trying to find some react-native module for Android to polyfill the lack of a global.WebAssembly. It seems to me that since iOS WebViews support WASM fairly well now, Android demand will be there.
This could aid porting lots of existing apps to React Native, so seems like a good idea.