rcombs opened 2 years ago
Yes, this could be polyfilled via a streaming transformation of the module bytes, as you described. SIMD and bulk memory operations would be straightforward to target for feature detection, but threading and atomics often require more pervasive changes (not to mention shared memories), so would be less suited to feature detection. Using BigInt support involves changing exported method signatures, so it also wouldn't work well with this proposal.
> threading and atomics often require more pervasive changes (not to mention shared memories), so would be less suited to feature detection.
I do think this is more doable than it might initially appear:
- The `__atomic_(load|store|compare_exchange|…)_N()` functions could be implemented as multiversioned functions, with the non-threaded versions simply performing the non-atomic equivalent operation (this is fairly common to implement in a dummy `<stdatomic.h>` header at the library level, but could be handled compiler-internally, or in a `libatomic` equivalent).
- The `pthread_` family of functions could be implemented as multiversioned functions that simply return `ENOSYS` when threading is unavailable; functions that operate on mutexes could behave fairly normally (presumably e.g. returning `EDEADLK` when attempting to re-lock an already-locked non-recursive mutex, since we know it can't be held by any other thread in that situation).

I'm not sure if there's some particular issue that makes toggleable use of shared memory intractable, but I hope not!
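To make that concrete, here's a minimal C sketch of what such single-threaded fallbacks might look like. All the names (`atomic_load_4_nothreads`, `pthread_create_nothreads`, and so on) are purely illustrative and don't correspond to any existing toolchain; a real setup would have the multiversioning machinery select these, not call them directly:

```c
/* Hypothetical single-threaded fallbacks, assuming a multiversioning
 * scheme picks these when the engine lacks threads/atomics support. */
#include <errno.h>
#include <stdint.h>

/* Non-threaded __atomic_load_4 equivalent: a plain load suffices,
 * because no other thread can modify the value concurrently. */
static uint32_t atomic_load_4_nothreads(const uint32_t *p) {
    return *p;
}

/* Non-threaded compare-exchange: no interleaving is possible, so the
 * read-modify-write can be performed directly. */
static int atomic_cmpxchg_4_nothreads(uint32_t *p, uint32_t *expected,
                                      uint32_t desired) {
    if (*p == *expected) {
        *p = desired;
        return 1;
    }
    *expected = *p;
    return 0;
}

/* pthread_create fallback: thread creation is simply unsupported. */
static int pthread_create_nothreads(void) {
    return ENOSYS;
}

/* Mutex lock fallback (mutex modeled as a plain flag here): a second
 * lock of a non-recursive mutex can only be a self-deadlock, since no
 * other thread could possibly hold it. */
static int pthread_mutex_lock_nothreads(int *locked) {
    if (*locked)
        return EDEADLK;
    *locked = 1;
    return 0;
}
```

The key point is that every fallback is valid precisely because no second thread can exist, so plain loads/stores and trivial mutex bookkeeping preserve the same observable behavior.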
> Using BigInt support involves changing exported method signatures, so it also wouldn't work well with this proposal.

Hmm, can we not have entirely separate BigInt-accepting functions that become available when BigInt support is present? If not, that's unfortunate, but I suppose we could always polyfill by having JS split a large integer into a sequence of bytes or e.g. 32-bit words.
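A rough illustration of the word-splitting fallback on the module side (names are hypothetical; the JS caller would do the corresponding split/reassembly with two 32-bit values):

```c
#include <stdint.h>

/* The i64-taking function that would normally be exported directly
 * (callable from JS only when BigInt integration is available). */
static int64_t scale_timestamp(int64_t ts) {
    return ts * 2;
}

/* Fallback export: the 64-bit value crosses the JS boundary as two
 * 32-bit halves, which every MVP engine can pass and store. */
void scale_timestamp_split(uint32_t lo, int32_t hi,
                           uint32_t *out_lo, int32_t *out_hi) {
    uint64_t bits = ((uint64_t)(uint32_t)hi << 32) | lo;
    int64_t result = scale_timestamp((int64_t)bits);
    *out_lo = (uint32_t)result;
    *out_hi = (int32_t)((uint64_t)result >> 32);
}
```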
The discussion I've seen on this proposal has largely been around use with future extensions (notably future additions to SIMD), but there are current extensions that would benefit massively from the functionality being discussed, so I think it's important that the design at least attempt to avoid precluding JS-based polyfills.
For instance, I'd like to extend some open-source libraries (e.g. ffmpeg, libass, OpenSSL…) to use features made available by already-broadly-implemented extensions, including:

- threads and atomics (with shared memory)
- bulk memory operations
- fixed-width SIMD
- BigInt integration for 64-bit integer arguments and return values
These features (with the exception of SIMD) are implemented in all major browser engines today, but in the same way that ffmpeg maintains compatibility back to at least Windows XP (and will likely support 7 for quite a while after), we need to support wasm implementations back to MVP. Currently, this would mean providing compile-time flags that consumers must enable to get newer features, which produces an untenable amount of build fragmentation and complexity already. This essentially bars these projects from making use of any of the wasm features that would be required for them to be seriously usable on the web platform.
It seems like this proposal should allow for JavaScript to stream down a wasm module, parse it at a high level, recognize feature blocks, and discard (or replace with no-ops?) anything that isn't supported by the current engine. Correct me if I'm wrong?
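For what it's worth, the section-level walk itself is straightforward. Here's a very rough C sketch; the proposal's actual "feature block" encoding isn't defined here, so this only shows the skeleton such a transformation could hang off of (drop sections by a caller-supplied id), with deliberately minimal LEB128 and buffer handling:

```c
/* Sketch of a section-level pass over a Wasm binary: read the 8-byte
 * header, then walk (id, size, payload) sections, copying or dropping
 * each one. `out` is assumed to be at least `in_len` bytes. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal unsigned LEB128 decoder; returns bytes consumed, 0 on error. */
static size_t read_uleb128(const uint8_t *p, size_t len, uint32_t *out) {
    uint32_t value = 0;
    unsigned shift = 0;
    for (size_t i = 0; i < len && shift < 35; i++) {
        value |= (uint32_t)(p[i] & 0x7f) << shift;
        if (!(p[i] & 0x80)) {
            *out = value;
            return i + 1;
        }
        shift += 7;
    }
    return 0;
}

/* Copies `in` to `out`, dropping every section whose id equals drop_id.
 * Returns the number of bytes written, or 0 on malformed input. */
size_t strip_sections(const uint8_t *in, size_t in_len,
                      uint8_t *out, uint8_t drop_id) {
    if (in_len < 8)
        return 0;
    memcpy(out, in, 8);            /* magic + version */
    size_t r = 8, w = 8;
    while (r < in_len) {
        uint8_t id = in[r];
        uint32_t size;
        size_t n = read_uleb128(in + r + 1, in_len - r - 1, &size);
        if (n == 0 || size > in_len - r - 1 - n)
            return 0;
        size_t total = 1 + n + size;
        if (id != drop_id) {
            memcpy(out + w, in + r, total);
            w += total;
        }
        r += total;
    }
    return w;
}
```

A JS polyfill would do the same thing with a `Uint8Array` while the bytes stream in, then hand the rewritten buffer to `WebAssembly.instantiate`.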
If I'm right about this generally being doable, I think the main thing this needs is to assign feature IDs to already-existing features. I suppose even without officially-assigned ones, tooling could always just define its own and shift the rest around them, but having this standardized would be best.