webmachinelearning / model-loader

🧪 Model Loader API
https://webmachinelearning.github.io/model-loader/

Is there any code that can be downloaded/imported? #5

Closed kirill-konshin closed 2 years ago

kirill-konshin commented 3 years ago

It does not seem that download links exist anywhere in the draft https://webmachinelearning.github.io/model-loader/ or in the repo. Where can I get the code to run the example?

anssiko commented 3 years ago

@kirill-konshin, thanks for your interest in this API incubation! This work is at an early stage, and the group has not yet produced a prototype implementation of the Model Loader API.

Another related API you may be interested in, the Web Neural Network API (explainer), has a JS polyfill implementation and samples available for web developers to experiment with:

https://github.com/webmachinelearning/webnn-polyfill

https://github.com/webmachinelearning/webnn-samples

Let us know if you have any questions or feedback. We’re particularly interested in your target use cases to make sure these APIs meet web developers’ key needs.

kirill-konshin commented 3 years ago

@anssiko thank you for the prompt response.

Quick question though. How is webnn-polyfill related to webml-polyfill — is it a different implementation of the "same or similar" low-level API that can be used to construct a model? Or are they complementary to each other?

Some feedback. So far I've created a quick demo that uses WebNNRunner from the webml-polyfill examples and fed a custom TFLite segmentation model into it. It worked as expected. It took a while to reverse-engineer the examples given the lack of usage documentation, but that's OK; they have some JSDoc comments, which is nice.

Basically the code ...

```js
const runner = new CustomRunner();
await runner.doInitialization(modelInfo);
await runner.loadModel(modelInfo);
await runner.compileModel({ ... });
await runner.run({
  src: ...,
  options: { ... },
});
const modelOutput = runner.getOutput();
```

... produced the same result as the example code of the Model Loader API. The only problem is that the Runner carries a ton of demo-specific stuff that is not really needed for the snippet above. So having a high-level API like Model Loader is very much needed for adoption by a broad audience.
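For illustration, the multi-step Runner flow above could be collapsed behind a single high-level call. A hypothetical sketch — the `SimpleModelLoader` class and its method names are invented here, not part of any spec or polyfill:

```javascript
// Hypothetical sketch: a thin wrapper that hides the init/load/compile
// sequence behind one call. All names are invented for illustration.
class SimpleModelLoader {
  constructor(runner) {
    this.runner = runner;
  }

  // One call instead of doInitialization / loadModel / compileModel.
  async load(modelInfo, compileOptions = {}) {
    await this.runner.doInitialization(modelInfo);
    await this.runner.loadModel(modelInfo);
    await this.runner.compileModel(compileOptions);
    return this.runner; // ready for run() / getOutput()
  }
}
```

With a wrapper like this, the demo-specific setup stays inside the loader and callers only deal with `load()` and `run()`.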

Target use cases are zoom-like features: virtual background, denoise, face tracking, etc.

anssiko commented 3 years ago

@kirill-konshin, thanks for your feedback! The group considers your target use cases important.

To answer your question:

The starting point of webnn-polyfill is an import from intel/webml-polyfill@23b7e7d, contributed by Intel to the W3C Community Group. With that, active development of the polyfill for the Web Neural Network API now happens in the W3C Community Group repo: https://github.com/webmachinelearning/webnn-polyfill

The webml-polyfill repo currently hosts, in addition to the older version of the polyfill (webml-polyfill.js), a large number of examples, demos, and benchmarks that use that earlier version of the polyfill. Intel is interested in helping the W3C community make progress informed by running code and experimentation. We may contribute some of that additional content over to the W3C community-owned repos in the future.

I'll loop in @huningxin to clarify any aspects that I may have missed with respect to the relationship and to share thoughts on the Model Loader API polyfill.

kirill-konshin commented 3 years ago

I see. So basically the old webml-polyfill was folded into the newer webnn-polyfill, correct? It looks like the code from the examples could be used as the loader API after some refactoring.

Some more feedback, btw. I also tried to use an ONNX model in the above-mentioned code, and the model was compiled into TFJS (I was using regular Chrome, not the patched Chromium). If I understood correctly, when the polyfill does not detect the native API it always uses TFJS for actual inference. Have you considered using ONNX.js for ONNX models?

Also, I suggest using TFJS as an injected dependency rather than baking it into the polyfill distribution. This way end users will be able to pin a specific version, and possibly omit parts of TFJS to make the final bundle leaner. With a DI approach users would also be able to choose which framework to use for inference. For users who don't care, a fully baked bundle could also be distributed.
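For what it's worth, the dependency-injection idea could look roughly like this. This is only a sketch with invented names, not the actual webnn-polyfill API:

```javascript
// Hypothetical sketch of the DI idea: the polyfill receives an inference
// backend instead of bundling TFJS itself. All names are illustrative.
function createPolyfill({ backend }) {
  if (!backend || typeof backend.infer !== 'function') {
    throw new Error('an inference backend exposing infer() must be injected');
  }
  return {
    backendName: backend.name,
    run: (model, input) => backend.infer(model, input),
  };
}

// Users supply (and version) the framework themselves: a full TFJS build,
// a trimmed-down one, or an ONNX.js-based backend.
const tfjsLikeBackend = {
  name: 'tfjs',
  infer: (model, input) => input, // stand-in for real inference
};
const polyfill = createPolyfill({ backend: tfjsLikeBackend });
```

A factory like this also makes the "fully baked" distribution easy: it would simply ship with a default backend pre-injected.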

huningxin commented 3 years ago

> It looks like the code from the examples could be used as the loader API after some refactoring.

Exactly: architecture-wise, the Model Loader API could be a polyfill on top of the WebNN API.
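To sketch that layering (with a toy stand-in for the op-level builder — the real thing would be the WebNN `MLGraphBuilder`; everything else here is invented for illustration):

```javascript
// Hypothetical sketch of a loader layered on an op-level graph builder.
// FakeGraphBuilder stands in for a WebNN-style builder; the loader walks
// a toy model description and translates each op into builder calls.
class FakeGraphBuilder {
  constructor() {
    this.ops = [];
  }
  input(name) {
    this.ops.push(`input:${name}`);
    return name;
  }
  relu(operand) {
    this.ops.push(`relu:${operand}`);
    return `relu(${operand})`;
  }
  build(outputs) {
    return { outputs, ops: this.ops };
  }
}

// The "model loader" part: parse a model description, emit builder calls.
function loadModel(modelDesc, builder) {
  let operand = builder.input(modelDesc.input);
  for (const op of modelDesc.ops) {
    if (op === 'relu') operand = builder.relu(operand);
    else throw new Error(`unsupported op: ${op}`);
  }
  return builder.build({ output: operand });
}
```

A real polyfill would do the same translation from a serialized model format (TFLite, ONNX) into WebNN graph-builder calls, which is essentially what the examples' Runner code does today.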

> Have you considered using ONNX.js for ONNX models?

As far as I know, ONNX.js doesn't provide the op-level API required by the webnn-polyfill implementation. However, I think ONNX.js would be a good option for the Model Loader API itself. @jbingham for comments.
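Such routing could be as simple as dispatching on the model format; a hypothetical sketch (file extensions as a crude format check, names invented):

```javascript
// Hypothetical sketch: a Model Loader picking an inference backend by
// model format, e.g. ONNX.js for .onnx and TFJS for .tflite.
function pickBackend(modelUrl) {
  if (modelUrl.endsWith('.onnx')) return 'onnxjs';
  if (modelUrl.endsWith('.tflite')) return 'tfjs';
  throw new Error(`no inference backend registered for ${modelUrl}`);
}
```

A production loader would sniff the file's magic bytes rather than its extension, but the dispatch structure would be the same.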

> Also, I suggest using TFJS as an injected dependency rather than baking it into the polyfill distribution.

That's a good suggestion. We could file an issue in the webnn-polyfill repo to track it. Looping in @BruceDai.

kirill-konshin commented 3 years ago

Yes, ONNX.js belongs at the loader-API level.

Also, one more ask.

I've tried to build the WebNN-enabled Chromium, but the build failed:

```text
dis@xxx chromium-src % gclient sync
WARNING: Your metrics.cfg file was invalid or nonexistent. A new one will be created.
/xxx/.gclient_entries missing, .gclient file in parent directory /xxx might not be the file you want to use.
use_relative_hooks is deprecated, please remove it from DEPS. (it was merged in use_relative_paths)
use_relative_hooks is deprecated, please remove it from DEPS. (it was merged in use_relative_paths)
Syncing projects:  99% (102/103) src/v8
[0:05:50] Still working on:
[0:05:50]   src/third_party/angle/third_party/VK-GL-CTS/src

[0:05:57] Still working on:
[0:05:57]   src/third_party/angle/third_party/VK-GL-CTS/src
Syncing projects: 100% (103/103), done.
Traceback (most recent call last):
  File "/xxx/chromium-depot-tools/metrics.py", line 267, in print_notice_and_exit
    yield
  File "/xxx/chromium-depot-tools/gclient.py", line 3199, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/xxx/chromium-depot-tools/gclient.py", line 3185, in main
    return dispatcher.execute(OptionParser(), argv)
  File "/xxx/chromium-depot-tools/subcommand.py", line 252, in execute
    return command(parser, args[1:])
  File "/xxx/chromium-depot-tools/gclient.py", line 2739, in CMDsync
    ret = client.RunOnDeps('update', args)
  File "/xxx/chromium-depot-tools/gclient.py", line 1801, in RunOnDeps
    gn_args_dep.WriteGNArgsFile()
  File "/xxx/chromium-depot-tools/gclient.py", line 1046, in WriteGNArgsFile
    with open(os.path.join(path_prefix, self._gn_args_file), 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/xxx/src/build/config/gclient_args.gni'
```

I will probably create an issue in the appropriate repo, but the ask is to provide ready-to-use Chromium builds; this would help attract more developers to test the ecosystem.

huningxin commented 3 years ago

> I will probably create an issue in the appropriate repo, but the ask is to provide ready-to-use Chromium builds; this would help attract more developers to test the ecosystem.

Please do that. Looping in @fujunwei and @ibelem for the issue of the Chromium fork.

Please note that the Chromium fork only implements the WebNN foundation spec, which is outdated today. The WebML CG plans to build a standalone native implementation of the latest spec in the webnn-native repo, so we are shifting our focus to webnn-native. Once the code is available, you may want to check it out.

anssiko commented 3 years ago

@kirill-konshin, the initial webnn-native implementation is now in review at https://github.com/webmachinelearning/webnn-native/pull/1

With https://github.com/webmachinelearning/webnn-polyfill/issues/35 tracked separately, do you think this issue could be closed?

jbingham commented 2 years ago

An update: We're planning to submit an "Intent to prototype" soon, and then there will be code.

@anssiko Feel free to close this, since the question is answered, or leave it open until there's code to see. Your choice :)

anssiko commented 2 years ago

Let’s close this when we have a pointer to the Intent to Prototype — just need to remember to drop that pointer here too :)

I think the discussion in this issue has been valuable, so thanks @kirill-konshin for your questions and feedback to date! We hope to get your further feedback soon when there's running code to experiment with.

kirill-konshin commented 2 years ago

> I think the discussion in this issue has been valuable, so thanks @kirill-konshin for your questions and feedback to date! We hope to get your further feedback soon when there's running code to experiment with.

I'm happy to be useful.

anssiko commented 2 years ago

> Let’s close this when we have a pointer to the Intent to Prototype — just need to remember to drop that pointer here too :)

Here it is (I knew I'd forget, my bad!): Intent to Prototype: Web Machine Learning: Model Loader API

@kirill-konshin please also check out the meeting minutes for an update on Model Loader API: https://www.w3.org/2022/01/12-webmachinelearning-minutes.html

For any additional feedback on the Model Loader API, feel free to open new issues.