olanod opened this issue · closed 3 years ago
At some point, we'll likely have the ability to incorporate existing WebIDL-based APIs into WASI. There are advantages to doing this -- less duplication of effort, greater portability between Web-wasm and non-Web-wasm -- but also some disadvantages -- the Web sometimes has constraints that non-Web use cases don't have.
So we will likely have the option of using something like the Web Neural Network API as-is in WASI. What we'll need is for someone familiar with the domain to take a close look at the API and make some recommendations about whether it makes sense for WASI as-is, or whether it's worth doing something different.
/cc @anssiko
Thanks for the ping. I've brought this issue to the WebML CG's attention.
There will be a W3C workshop on these topics in Berlin on 24-25 March 2020: https://www.w3.org/2020/01/machine-learning-workshop/
I encourage people interested in the intersection of WASI and WebNN to register and submit a position statement. This would make for a great topic for the workshop.
Since this issue was filed, the wasi-nn proposal was created and is making good progress, so future discussion of these APIs can move to that repo.
For reference, I've recently written more about wasi-nn and the current state of things in the following blog posts:
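For readers who haven't followed the proposal, wasi-nn exposes inference through a small load-and-execute flow: load a serialized model into a graph, create an execution context, bind input tensors, run compute, and read back the output. The sketch below mimics that call sequence with local stand-ins — the function names mirror the wasi-nn flow, but the bodies are placeholder stubs (the real bindings take model bytes and dispatch to a backend), so treat this as an illustration of the shape of the API, not the actual bindings:

```rust
// Illustrative stand-ins for the wasi-nn call sequence (load,
// init_execution_context, set_input, compute). The real calls hand a
// serialized model to a backend; these stubs just echo the flow.

struct Graph;

struct ExecutionContext {
    input: Vec<f32>,
}

// Load a serialized model into a graph (stub: ignores the bytes).
fn load(_model: &[u8]) -> Graph {
    Graph
}

// Create an execution context bound to a loaded graph.
fn init_execution_context(_graph: &Graph) -> ExecutionContext {
    ExecutionContext { input: Vec::new() }
}

// Bind an input tensor to the context.
fn set_input(ctx: &mut ExecutionContext, tensor: &[f32]) {
    ctx.input = tensor.to_vec();
}

// Run inference (stub: the real call executes the loaded model;
// here we just double each input so the flow produces output).
fn compute(ctx: &ExecutionContext) -> Vec<f32> {
    ctx.input.iter().map(|x| x * 2.0).collect()
}

fn main() {
    let graph = load(b"model bytes");
    let mut ctx = init_execution_context(&graph);
    set_input(&mut ctx, &[1.0, 2.0, 3.0]);
    let output = compute(&ctx);
    println!("{:?}", output);
}
```

The key design point, and why the thread above leans toward wasi-nn rather than adopting WebNN wholesale, is that this flow is model-format- and backend-agnostic: the runtime decides whether compute runs on CPU, GPU, or a dedicated accelerator.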
It might sound too early to be talking about such fancy high-level APIs, but I think a WASI runtime with hardware-accelerated neural networks and related algorithms would have a huge immediate market, since it's such a hot topic. This is probably another of those times when copying the Web, or following it closely, is a good thing: the Web Neural Network API (examples) is being defined to provide these APIs to JS developers, but it might work even better with WASI.
For a personal project related to the industry I work in, there's a use case I'd like to make work: a WASI runtime used as the main platform of an autonomous vehicle. Developers could use this and other standardized APIs to program general-purpose applications that make use of the car's hardware capabilities, like the vision system. They could even use it to develop more critical parts, like the driver agent, and sell them in some marketplace, so users get to choose who drives them rather than the car manufacturer. All these applications require some brain power available in the platform, and access to just the GPU is not enough: there is now specialized hardware for neural-network acceleration, and it would be important to make use of that hardware when it's available.
I was thinking of prototyping the idea with my Google Coral board, which has a built-in TPU; I also have some NVIDIA Jetson Xavier boards at hand that I could use, but I have no idea how Google or NVIDIA talk to their hardware at a low level. For starters, the implementation could be CPU- and GPU-only; on the GPU side, this feature could build on top of WebGPU, for example: https://github.com/WebAssembly/WASI/issues/53
That's just my tiny use case, but imagine the huge potential and amount of hype such a feature could bring: people deploying AI-enabled applications on virtually any platform and hardware with ease 🤩