Backend \ OS | Windows | Linux |
---|---|---|
null (for unit test) | ✓ | ✓ |
DirectMLX | ✓ | |
OpenVINO | ✓ | ✓ |
XNNPACK | ✓ | ✓ |
oneDNN | ✓ | ✓ |
MLAS | ✓ | ✓ |
WebNN-native is a native implementation of the Web Neural Network API.
It provides several building blocks:
- webnn.h, a C header that is a one-to-one mapping with the WebNN IDL.
- A C++ wrapper of webnn.h.
WebNN-native reuses code from other open source projects.

### Install depot_tools

WebNN-native uses the Chromium build system and dependency management, so you need to install depot_tools and add it to the PATH.
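Installing depot_tools is not specific to WebNN-native; a minimal sketch for a Unix shell follows (the install location `~/depot_tools` is a placeholder choice, not mandated by the project):

```shell
# Fetch depot_tools into a local directory (location is a placeholder choice).
DEPOT_TOOLS_DIR="$HOME/depot_tools"
if [ ! -d "$DEPOT_TOOLS_DIR" ]; then
    git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git \
        "$DEPOT_TOOLS_DIR" || echo "clone failed; check network access"
fi
# Put depot_tools on PATH so gclient, gn and ninja can be found.
export PATH="$DEPOT_TOOLS_DIR:$PATH"
echo "$PATH" | grep -q "depot_tools" && echo "depot_tools is on PATH"
```

Make the `export PATH` line persistent (e.g. in your shell profile) so later `gclient` and `gn` invocations find the tools.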
Notes: On Windows, set the environment variable `DEPOT_TOOLS_WIN_TOOLCHAIN=0`. This tells depot_tools to use your locally installed version of Visual Studio (by default, depot_tools will try to download a Google-internal version).

### Get the code

Get the source code as follows:
```shell
# Clone the repo as "webnn-native"
> git clone https://github.com/webmachinelearning/webnn-native.git webnn-native && cd webnn-native

# Bootstrap the gclient configuration
> cp scripts/standalone.gclient .gclient

# Fetch external dependencies and toolchains with gclient
> gclient sync
```
### Setting up the build

Generate build files using `gn args out/Debug` or `gn args out/Release`.

A text editor will appear asking for build options; the most common option is `is_debug=true/false`. Otherwise, `gn args out/Release --list` shows all the possible options.
To build with a particular backend, set the corresponding option from the following table.
Backend | Option |
---|---|
DirectML | webnn_enable_dml=true |
DirectMLX | webnn_enable_dmlx=true |
OpenVINO | webnn_enable_openvino=true |
XNNPACK | webnn_enable_xnnpack=true |
oneDNN | webnn_enable_onednn=true |
MLAS | webnn_enable_mlas=true |
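The options can also be passed non-interactively with `gn gen --args=...` instead of editing them in the editor that `gn args` opens. A sketch that composes the argument string (the OpenVINO row is just an example choice; any option from the table works):

```shell
# Compose gn arguments for a release build with one backend enabled.
# BACKEND_OPT can be any option from the table above.
BACKEND_OPT="webnn_enable_openvino=true"
GN_ARGS="is_debug=false $BACKEND_OPT"
# Print the command to run from the webnn-native checkout:
echo "gn gen out/Release --args=\"$GN_ARGS\""
```

Running the printed `gn gen` command produces the same `out/Release` directory as the interactive `gn args` flow.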
### Build

Then use `ninja -C out/Release` or `ninja -C out/Debug` to build WebNN-native.
Notes:

- Run `./scripts/build-local.sh` first; for a Windows build, it requires supplying `-DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$`
- Run `.\build.bat --config Release --parallel --enable_msvc_static_runtime` for a Windows build.

### Run tests

Run unit tests:
```shell
> ./out/Release/webnn_unittests
```
Run end2end tests on a default device:

```shell
> ./out/Release/webnn_end2end_tests
```

You can also specify a device to run the end2end tests with the `-d` option, for example:

```shell
> ./out/Release/webnn_end2end_tests -d gpu
```

Currently "cpu", "gpu" and "default" are supported; more devices will be supported in the future.
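To cover every currently supported device in one go, the test binary can be invoked in a loop. A sketch (the binary path assumes a Release build; the guard lets the script run even before the build exists):

```shell
# Run the end2end tests once per supported device.
TEST_BIN=./out/Release/webnn_end2end_tests
for dev in default cpu gpu; do
    if [ -x "$TEST_BIN" ]; then
        "$TEST_BIN" -d "$dev"
    else
        echo "skipped (not built): $TEST_BIN -d $dev"
    fi
done
```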
### License

Apache 2.0 Public License, please see LICENSE.