w3ctag / design-reviews

W3C specs and API reviews

Updated review of WebNN API #933

Closed · dontcallmedom closed this 3 months ago

dontcallmedom commented 7 months ago

(extracted from https://github.com/w3ctag/design-reviews/issues/771#issuecomment-1911927573)

I'm requesting an updated TAG review of WebNN API - previous TAG review: https://github.com/w3ctag/design-reviews/issues/771

Since the initial Candidate Recommendation Snapshot and the previous TAG review, the Working Group has gathered further implementation experience and added new operations and data types needed for well-known transformers (https://github.com/webmachinelearning/webnn/issues/375). Informed by this implementation experience, the group has also removed selected features: higher-level operations that can be expressed in terms of lower-level primitives in a performant manner, and support for synchronous execution. The group has additionally updated the specification to use modern authoring conventions, improving interoperability and the precision of normative definitions, and is developing a new feature (https://github.com/webmachinelearning/webnn/issues/482) to improve performance and interoperability between the WebNN and WebGPU APIs and purpose-built ML hardware.

The removal of support for synchronous execution is in line with the TAG's guidance (the removal was discussed in https://github.com/w3ctag/design-reviews/issues/531) and with the platform's broader move toward JSPI, which is finally on its way.

Further details:

We'd prefer the TAG provide feedback as open issues in our GitHub repo, one issue per point of feedback.

anssiko commented 7 months ago

We've issued a CfC to advance with the CR Snapshot publication in mid-March, noting in the CfC that this TAG delta review is currently in flight. We expect this issue to be considered in the context of the transition request.

As outlined in this issue, your earlier feedback (removal of the sync APIs) has been addressed. The rest of the changes since your last review are evolutionary and informed by implementation experience, so we are not expecting you to do another "full" review.

If the group doesn't hear any concerns from you, it plans to proceed with the publication. Thank you for your review comments (https://github.com/w3ctag/design-reviews/issues/771#issuecomment-1332346304), which motivated the removal of the sync APIs.

matatk commented 6 months ago

Thanks @anssiko, @dontcallmedom for the review request. We were wondering: could you clarify the changes around transformers? We note you've added new data types and operations in support of them (is there a list?). Did you also add/remove any transformers?

dontcallmedom commented 6 months ago

The list of operators added (along with the removal of a redundant one) is in https://github.com/webmachinelearning/webnn/pull/478#issue-1988769687, based on the detailed analysis in https://github.com/webmachinelearning/webnn/issues/375#issuecomment-1674224992.

anssiko commented 6 months ago

@matatk you may also find the updated use cases for transformers (https://github.com/webmachinelearning/webnn/pull/507) helpful. These use cases motivated the new ops discussed in the above-mentioned issue, which is also linked from the SOTD.

To provide further context on the removal: one op (squeeze) was removed from the initial list of considered transformer ops because it was found that it can be expressed in terms of an existing lower-level op (reshape) in a performant manner. The emulation path for squeeze is presented informatively in the specification.
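
For illustration only, here is a minimal TypeScript sketch of that emulation path, assuming the reshape(input, newShape) builder method defined in the spec; the helper name and the explicit shape parameter are hypothetical, and WebNN type declarations (MLGraphBuilder, MLOperand) are assumed to be in scope:

```ts
// Hypothetical helper: emulate squeeze(input, axes) with reshape(), dropping
// size-1 dimensions. `shape` is the operand's current shape, supplied by the
// caller so the sketch does not depend on any particular shape-query API.
function squeezeViaReshape(
  builder: MLGraphBuilder,
  input: MLOperand,
  shape: number[],
  axes?: number[]
): MLOperand {
  // Without `axes`, drop every size-1 dimension; with `axes`, drop only the
  // listed dimensions (which are expected to be size 1).
  const newShape = shape.filter(
    (dim, i) => !(dim === 1 && (axes === undefined || axes.includes(i)))
  );
  return builder.reshape(input, newShape);
}
```

Whether this matches the spec's informative emulation path exactly should be checked against the specification text; the point is only that no dedicated squeeze op is needed.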

Please let us know if you have any further questions.

anssiko commented 5 months ago

For full disclosure and to close the loop on this review:

A new CR Snapshot (history) was published recently. Thank you for your questions and reviews (plural): we have now received two rounds of review from the TAG, having hit the CRS milestone twice for this spec, and we appreciate your insights and persistence in working with us as we continue to evolve the specification. We look forward to another delta review with you as appropriate.

If you have further review comments, now or at any time, please do not hesitate to reach out to our group. We will consider all suggestions regardless of the spec milestone we're targeting. We're currently iterating on CRDs and plan to publish a new CRS approximately every 6-12 months.

matatk commented 5 months ago

Hi @anssiko. Thank you for providing the context and info on recent changes, and for the publishing and cadence info. We are still looking into a few things on this review (noting that the 2024-04-29 version is now the current one, as you mentioned). We'll reply on this thread with any additional thoughts.

matatk commented 4 months ago

We discussed WebNN earlier this week. We're generally happy with the way this is going. However, in previous TAG discussions on this, @cynthia expressed a concern regarding the threading approach: it's possible that an intensive model running on the GPU could disrupt normal site/content rendering, which would manifest as, for example, delays in requestAnimationFrame() callbacks. Is this something you have considered?

RafaelCintron commented 4 months ago

@matatk and @cynthia, with Chromium on Windows the WebNN context runs on a separate command queue from the Chromium compositor. Depending on the device, the ML work may run on a separate chip from the one that performs 3D work. Even when it runs on the same chip, the ML work is multi-tasked with other work on the system.

As with other web platform features (WebGL, WebGPU, 2D canvas, CSS blurs, etc.), overtaxing the system will eventually affect requestAnimationFrame. Web developers need to be responsible with the content they build.
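
To make the developer-facing side of this concrete, a minimal sketch, assuming the asynchronous compute() entry point in the CR under review (graph and context construction omitted, WebNN type declarations assumed to be in scope): graph execution is awaited rather than blocking the main thread, so requestAnimationFrame callbacks keep being scheduled, and contention only shows up as the system-level load described above.

```ts
// Sketch only: `context` and `graph` are assumed to have been created
// elsewhere, e.g. via navigator.ml.createContext({ deviceType: "gpu" }) and
// MLGraphBuilder; the input/output buffer layout is hypothetical.
async function runWhileAnimating(
  context: MLContext,
  graph: MLGraph,
  inputs: Record<string, Float32Array>,
  outputs: Record<string, Float32Array>
) {
  let running = true;
  const tick = () => {
    // Per-frame rendering work keeps getting scheduled while the graph runs.
    if (running) requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);

  // compute() is asynchronous, so the main thread (and rAF scheduling) is not
  // blocked; heavy GPU work can still affect frame pacing at the system level.
  const result = await context.compute(graph, inputs, outputs);
  running = false;
  return result;
}
```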

torgo commented 3 months ago

Hi @RafaelCintron, thanks for this detailed response. We're discussing this in our TAG breakout today. Could you clarify two points:

  1. You say "Chromium on Windows", but does this equally apply to other platforms, particularly mobile platforms? Is there implementation guidance pertaining to this in the spec?

  2. We agree that the performance issues we've highlighted may equally apply to other web technologies such as WebGL, as you've pointed out. But since we're building a new technology into the web, we have an opportunity to improve on the status quo. The WebGPU spec contains specific language on denial-of-service attacks which seems related to our concerns. Would it be appropriate for the WebNN spec to contain similar platform requirements, or at least to point to this part of the WebGPU spec?

reillyeon commented 3 months ago

@torgo, a challenge here is that WebNN supports more than just GPU compute. What @RafaelCintron mentioned makes sense as a concrete mitigation when the MLContext is configured to prefer GPU compute and we need to coordinate with the browser's WebGPU engine anyway for interop purposes (e.g. passing buffers between WebNN graphs and WebGPU shaders). We have the most implementation experience with that scenario when using DirectML on Windows, but are actively prototyping with other frameworks such as Core ML on macOS.

When using the CPU or a dedicated ML accelerator, the types of potential resource contention and their mitigations are different. I think a general statement similar to WebGPU's reference to denial-of-service attacks makes sense to add to WebNN as well, with the understanding that exactly how the mitigations work will be implementation- and configuration-dependent. Implementations should use whatever mechanisms the platform provides (such as the watchdogs mentioned by WebGPU) to prevent sites from using an unfair share of system resources, but in the end these are shared resources, and the use of any compute API will affect overall performance on a fully loaded system.
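
As a concrete illustration of that GPU-interop scenario, a minimal sketch, assuming the createContext(GPUDevice) overload in the spec; the actual buffer-sharing mechanics between WebNN graphs and WebGPU shaders are still being worked out in the proposal linked earlier in this thread and are not shown:

```ts
// Sketch only: create a WebNN context on the same GPUDevice the page uses for
// WebGPU rendering, so ML work is coordinated with the browser's WebGPU engine.
// Assumes WebGPU and WebNN type declarations are in scope.
async function createGpuBackedContext(): Promise<MLContext> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();
  return navigator.ml.createContext(device);
}
```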

cynthia commented 3 months ago

Having some guidance in non-normative text, specifically around the different DoS vectors and their mitigations, would be helpful.

torgo commented 3 months ago

Hi - thanks again for bringing this to us. We appreciate that you've been responsive to our feedback. We still have some concerns, but considering the current status of the work, we are planning to close the current review with a 'satisfied with concerns' label.

Our main concern is: have you considered the full range of hardware this API might need to run on? We can see it running on CPUs without neural-processing extensions, CPUs with extensions, GPUs without extensions, GPUs with extensions, and dedicated ML hardware. What steps have you taken to ensure that it works across all of these targets, given the range of hardware that exists today and might exist in the future?

Our second and related concern is about multi-implementer support. If this is going to be ubiquitous as an approach to "do NN on the web" then it really needs to be implemented across different platforms and different hardware.

We encourage you to consider these issues as the spec and technology continues to evolve.