"DOM access" is a fuzzy term. In one sense, wasm already has "DOM access" today -- wasm just has to usually (but not always) call through JavaScript glue code in order to map the low-level (e.g., i32
) wasm types to the high-level Web IDL types. But there are tools that can help generate these bindings automatically (e.g., in Rust, see the web-sys crate) and for many Web APIs, the performance overhead is amortized by the actual work performed by the Web API callee so there is no problem.
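To make that concrete, here is a minimal, hand-written sketch of what such glue can look like (the module name `app.wasm`, the import `set_title`, and the export `main` are all hypothetical, not from any particular toolchain): wasm passes a pointer/length pair of `i32`s, and the JS side decodes the bytes and makes the actual Web API call.

```js
// Hand-written glue sketch (names hypothetical): wasm only deals in i32s,
// and the JS glue maps them to the high-level Web IDL types.
let memory; // filled in once the module is instantiated

const imports = {
  env: {
    // wasm calls this with (ptr, len) describing a UTF-8 string in linear memory
    set_title: (ptr, len) => {
      const bytes = new Uint8Array(memory.buffer, ptr, len);
      const text = new TextDecoder("utf-8").decode(bytes); // i32s -> DOMString
      document.title = text;                               // the actual Web API call
    },
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("app.wasm"), // hypothetical module
  imports
);
memory = instance.exports.memory;
instance.exports.main(); // the wasm code now has "DOM access" through the glue
```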
For hot calls, of course, you would want to remove this JS glue code, and for this problem, the component model's high-level types would allow direct calls from components into Web APIs, which could be more-readily optimized by browsers. I can't speak to how long it will take for this native support to get to browsers, but almost assuredly >1 year. In the meantime, I would recommend using JS glue code bindings to access the DOM and filing browser bugs with performance problems on realistic (non-synthetic benchmark) workloads to help browsers prioritize this kind of optimization.
Closing as answered, but feel free to reopen for further discussion.
Thanks @lukewagner, the big reason I'm still holding back from WebAssembly-based frontends is the lack of efficient DOM bindings.
There's a plethora of exciting Rust frontend libraries springing up, but they all usually perform slower than JavaScript. Should this bottleneck be alleviated within the next 5 years, it would usher in an amazing era of blazing-fast, highly interactive and animated web UI.
Fingers crossed :)
@evbo

> There's a plethora of exciting Rust frontend libraries springing up, but they all usually perform slower than JavaScript.
That doesn't have anything to do with the DOM bindings; those libraries are simply slow. If you check out some benchmarks, you'll see that `wasm-bindgen`, `stdweb`, and `dominator` (which are all written in Rust) match the performance of vanilla JS (they're even faster than Inferno and Svelte!). They're 80% faster than React. Rust can handle silky-smooth 60 FPS DOM animations no sweat.
The performance of the DOM bindings is very good, so if you're getting slow performance then that means either the Rust library you're using is slow or your own application code is slow. The DOM bindings should not be holding back performance.
In general, communication between Wasm and JS is very fast. Of course improvements can still be made, but those improvements would just make your program a few milliseconds faster. In practice, the only time when communicating between Rust and JS is slow is with strings, and that is because Rust needs to do a UTF-8 -> JS string transcoding (which is O(n)).
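To illustrate where that O(n) comes from, here is roughly the work a binding layer has to do for each string that crosses the boundary (a generic sketch, not the code any particular toolchain emits; `alloc` is an assumed allocator export of the wasm module):

```js
const decoder = new TextDecoder("utf-8");
const encoder = new TextEncoder();

// Wasm -> JS: copy `len` bytes out of linear memory and transcode UTF-8 -> UTF-16.
function getStringFromWasm(memory, ptr, len) {
  return decoder.decode(new Uint8Array(memory.buffer, ptr, len)); // O(len)
}

// JS -> Wasm: transcode UTF-16 -> UTF-8 and copy the bytes into linear memory.
function passStringToWasm(memory, alloc, str) {
  const bytes = encoder.encode(str);   // O(str.length)
  const ptr = alloc(bytes.length);     // assumed allocator exported by the module
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
  return { ptr, len: bytes.length };
}
```

Numbers and object references, by contrast, cross the boundary without any copying, which is why strings are the notable exception.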
If you're waiting for component types to dramatically improve your app's performance, then you will be very disappointed. It won't improve your app's performance, because the performance of the existing Rust/Wasm-to-JS boundary is already close to optimal.
Instead, if you want to improve your app's performance then you should switch to a different Rust library, or improve your app's code.
"DOM access" is a fuzzy term. In one sense, wasm already has "DOM access" today -- wasm just has to usually (but not always) call through JavaScript glue code in order to map the low-level (e.g.,
i32
) wasm types to the high-level Web IDL types.
@lukewagner: Can you clarify, 2 years later, in what unusual way wasm can avoid calling through JavaScript glue code to access Web API stuff like the DOM?
For DOM methods, you can `Function.prototype.call.bind(...DOM method...)` and import that from wasm with type `(func (param $self externref) ...)`, allowing a "direct" call from wasm into the DOM method. There will be some overhead calling through the runtime's generic `bind` and `call` machinery, but in theory a wasm engine could optimize all that away by recognizing the call+bind pattern (which I've heard browsers mention considering). With the forthcoming js-string-builtins, there are also more-efficient ways to work with `DOMString`. The approach taken by js-string-builtins also allows adding more inline fast paths over time for other hot JS or Web primitives (e.g., typed arrays).
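A sketch of that trick (module and import names are illustrative, and `dom-app.wasm` is a hypothetical module compiled against these imports): binding `Function.prototype.call` to a DOM prototype method yields a plain function whose first argument is the receiver, which wasm can import and invoke directly with `externref` arguments.

```js
// Turn DOM prototype methods into plain functions whose first argument is the
// receiver, so wasm can call them directly with externref parameters.
const bindMethod = (method) => Function.prototype.call.bind(method);

const imports = {
  dom: {
    // On the wasm side this import would be declared roughly as:
    //   (import "dom" "appendChild"
    //     (func (param $parent externref) (param $child externref) (result externref)))
    appendChild: bindMethod(Node.prototype.appendChild),
    // createElement's tag-name argument is a string, which is where
    // js-string-builtins would help on the wasm side.
    createElement: bindMethod(Document.prototype.createElement),
    getBody: () => document.body, // small helper for obtaining a root externref
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("dom-app.wasm"), // hypothetical module
  imports
);
instance.exports.run();
```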
I hadn't heard of optimising bind + call, so I had a search. This one is for WebGPU, but it's similar, and the linked doc talks about the challenges. Looks quite active: https://issues.chromium.org/issues/41492790
Thanks @yowl, although I'm not yet at a level to comprehend the raw implementation challenges in Blink.
Thanks @lukewagner, I have read about the need for JS glue code (which implies wasm ↔ JS ↔ DOM to me), but not this binding possibility (which implies wasm ↔ DOM, though it still needs to be set up by JS). This seems almost close enough to native access, though there are many who want truly native, first-class access to Web APIs for wasm, as good as or better than what is available to JS at the moment -- a world where `<script type="module"/>` could be the only type of `<script/>` needed. Can you briefly comment on relevant current work on this, if any?
The component model browser polyfill, `jco` (specifically the `transpile` command), paints a picture of how wasm could look like a native ESM, extending the ESM-integration proposal from modules to components. Using `jco transpile` (or, one day hopefully, a native component-model implementation), you can just load a component via `<script type="module"/>`, using import maps to polyfill any component imports in terms of Web APIs. Ideally, Web APIs would be automatically reflected in the ESM module map; we had a sketch of how this might look in the get-originals proposal. That has stalled, but once some of the other pieces fall into place, it could be the final step to close the loop, allowing direct component-to-WebIDL calls without JS glue code or polyfills. I expect this is pretty far away, though, but that's fine, because with `jco` and import maps, polyfilling can be very effective.
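For a rough sketch of what that page-side usage looks like (the file names and the `render` export are hypothetical, though `jco transpile` is the real command), the transpiled component is consumed like any other ES module, with an import map standing in for any of its unsatisfied imports:

```js
// main.js -- loaded with <script type="module" src="main.js"></script>
// Assumes the component was transpiled ahead of time with something like:
//   jco transpile my-component.wasm -o dist/
// and that an import map on the page redirects any of the component's
// unsatisfied imports to polyfill modules.
import { render } from "./dist/my-component.js"; // hypothetical component export

render(document.querySelector("#app")); // call straight into the component
```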
When will we have DOM access inside wasm as a feature!?