sorear / smetamath-rs

sorear's Metamath system engine - version 3, in Rust
Apache License 2.0

Try to get it running on emscripten/binaryen and port smm-webui #6

Open · sorear opened this issue 8 years ago

sorear commented 8 years ago

Since we're now 7x faster than SMM2 in single-threaded mode, and using roughly a quarter of the memory, it will probably still beat the JS version even with the overhead of cross-compiling to JS (asm.js / wasm). Threads would help even more, but SharedArrayBuffer is probably going to take much longer to arrive than the Rust emscripten port. OTOH, having everything in a single 128 MiB array will let us move execution between the main thread and a worker with a zero-copy transferring postMessage.
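To illustrate the zero-copy handoff: a plain ArrayBuffer (which is what an asm.js heap is) can be moved rather than cloned by listing it in postMessage's transfer list. This is only a minimal TypeScript sketch of that mechanism; the `worker.js` filename, the 128 MiB size, and the message shape are illustrative, not part of smm3.

```typescript
// main.ts -- hand the engine heap to a worker without copying it.
// The worker script name and message fields are placeholders.
const heap = new ArrayBuffer(128 * 1024 * 1024);

const worker = new Worker("worker.js");

// Listing `heap` in the transfer list detaches it here and re-attaches it
// inside the worker: the bytes are moved, not structured-cloned.
worker.postMessage({ cmd: "verify", heap }, [heap]);

// After the transfer the buffer is unusable on this side.
console.log(heap.byteLength); // 0 -- ownership now lives in the worker

// The worker can transfer the buffer back the same way when it is done.
worker.onmessage = (ev: MessageEvent<{ heap: ArrayBuffer }>) => {
  const returned = ev.data.heap;    // same memory, zero copies
  console.log(returned.byteLength); // 134217728 again
};
```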

sorear commented 8 years ago

It looks like neither Rust-on-emscripten nor Rust-to-wasm is quite ready for prime time, although both are being actively worked on.

digama0 commented 8 years ago

I think this is moving in the wrong direction. Rust is designed to run on hardware directly. Instead, to provide a web UI it should act as a server: Apache can redirect requests to a command-line invocation that produces HTML output. This can be run locally against a local Apache server, and a sufficiently permissioned copy could run somewhere like rust.metamath.org.
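A rough sketch of that arrangement, with Node's built-in http module standing in for Apache for brevity; the `smm3` binary name, its `render-html` subcommand, and the URL-to-label mapping are hypothetical placeholders for whatever CLI the engine actually exposes.

```typescript
// server.ts -- minimal sketch of "engine behind a web server".
import { createServer } from "node:http";
import { execFile } from "node:child_process";

createServer((req, res) => {
  // Map the requested path to a statement label, e.g. /2p2e4 -> "2p2e4".
  const label = (req.url ?? "/").slice(1);

  // Ask the (hypothetical) engine CLI to render that statement as HTML.
  execFile("smm3", ["render-html", label], (err, stdout) => {
    if (err) {
      res.writeHead(500, { "Content-Type": "text/plain" });
      res.end("engine error");
      return;
    }
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(stdout);
  });
}).listen(8080);
```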

sorear commented 8 years ago

"X runs on hardware directly" is not a very meaningful statement.

WASM is not that dissimilar from LLVM IR, and there's no moral difference between "Rust -> MIR -> LLVM -> x86 asm -> [CLASSIFIED]" and "Rust -> MIR -> WASM -> x86 asm -> [CLASSIFIED]" (and remember, the buck does not stop at the ISA). I could do PNaCl instead (PNaCl literally is LLVM IR), though its adoption has been weak. I'm not religious about compiler middle ends.

The relevant question is how tightly integrated the engine can be with the UI, and how much interactive response (and scalability!) you're forfeiting by requiring all queries to run on a central node. I see no advantage to requiring frequent roundtrips for serious work (casual browsing may be a different matter, especially if people tend to navigate away before downloading 29MB of HTML).

digama0 commented 8 years ago

Okay, I was reading your proposal as transpilation to JS, which introduces a wildcard overhead factor. If you can get something similar with WASM or PNaCl, I could see that working.

For casual browsers (and maybe sandbox work) the remote server is sufficient. In these cases the main benefit is as a replacement for the us2 HTML pages with zero build time. For serious authors, the web UI is trying to play proof assistant, with all the complexities that entails. In these cases you would want to run it as a local server, and at that point it isn't all that different from the other proposed IDE+smm3 proof assistant plans.