webmachinelearning / webnn

🧠 Web Neural Network API
https://www.w3.org/TR/webnn/

Synchronously validate input operands/activations #572

Closed inexorabletash closed 6 months ago

inexorabletash commented 9 months ago

When calling a builder method on MLGraphBuilder that takes an MLOperand or MLActivation as an input (i.e. anything but input() and constant()), there is no check that the input is from the same builder.

The only place in the spec that does this is validate MLOperand steps that take an operand and builder.

This algorithm is only called in:

@a-sully raised this in #552 and it was previously mentioned (but without specifics) in #234

Should failure here be synchronous? That is implied by the behavior of concat() and by the open-ended "If any of the following sub-steps fail, ..." text in the builder methods. I don't see a reason for asynchronous failure (except in build() itself, since a method that returns a promise can't fail synchronously), so I'd strongly prefer synchronous failure. (If failure were asynchronous, build() would need to be augmented to traverse the whole graph.)

Assuming we want synchronous failure we could spec this in a few ways:

Either way, concat() needs to be aligned with the other methods (fix to pass this.[[builder]] or remove redundant steps)

This also raises the question of what exception type it should be:

Additionally... is there anything other than "same builder" that's not currently validated, but that should be?
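To make the gap concrete, here is a minimal mock (not the real WebNN API — the class and property names are illustrative stand-ins for MLGraphBuilder and the [[builder]] internal slot) showing the desired synchronous failure when operands from different builders are mixed:

```javascript
// Mock of the proposed behavior: a builder method throws
// synchronously if an operand came from a different builder.

class MockOperand {
  constructor(builder) {
    this.builder = builder; // stands in for [[builder]]
  }
}

class MockGraphBuilder {
  input() {
    return new MockOperand(this);
  }
  relu(input) {
    // The proposed synchronous "same builder" check.
    if (input.builder !== this) {
      throw new TypeError("input is from a different MLGraphBuilder");
    }
    return new MockOperand(this);
  }
}

const b1 = new MockGraphBuilder();
const b2 = new MockGraphBuilder();
const x = b1.input();

b1.relu(x); // ok: same builder

let threw = false;
try {
  b2.relu(x); // should fail here, not later in build()
} catch (e) {
  threw = e instanceof TypeError;
}
```

Without a check like this, the mismatch would only surface (if at all) when build() walks the graph.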

inexorabletash commented 9 months ago

If we wanted to go the "define connect" route, it might be something like:

To connect an MLOperand input as an input to MLOperand target:

  1. If input.[[builder]] is not the same as target.[[builder]], return failure.
  2. (any other validation?)
  3. Add input.[[operand]] to target.[[operand]]'s inputs.

To connect an MLOperand output as an output to MLOperand target:

  1. If output.[[builder]] is not the same as target.[[builder]], return failure.
  2. (any other validation?)
  3. Add output.[[operand]] to target.[[operand]]'s outputs.

Some "connect" phrases currently reference the MLOperand, and some reference the platform operand instead; referencing the MLOperands consistently would simplify things.
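The "connect" steps above could be rendered roughly as follows in plain JavaScript, with ordinary properties standing in for the [[builder]] and [[operand]] internal slots and "failure" modeled as a false return value (all names here are illustrative, not spec):

```javascript
// Hypothetical rendering of the "connect as input/output" steps.

function connectInput(input, target) {
  // 1. Same-builder check; "return failure" modeled as false.
  if (input.builder !== target.builder) return false;
  // 2. (any other validation would go here)
  // 3. Add input's operand to target's operand's inputs.
  target.operand.inputs.push(input.operand);
  return true;
}

function connectOutput(output, target) {
  // 1. Same-builder check.
  if (output.builder !== target.builder) return false;
  // 2. (any other validation would go here)
  // 3. Add output's operand to target's operand's outputs.
  target.operand.outputs.push(output.operand);
  return true;
}

// Tiny demo: two operands sharing one builder connect fine;
// an operand from another builder fails.
const builder = {};
const a = { builder, operand: { inputs: [], outputs: [] } };
const b = { builder, operand: { inputs: [], outputs: [] } };
const ok = connectInput(a, b);

const stray = { builder: {}, operand: { inputs: [], outputs: [] } };
const bad = connectInput(stray, b);
```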

inexorabletash commented 9 months ago

We'll want something similar for MLActivations, i.e. either an explicit step or by defining steps for "register it as activation".

huningxin commented 9 months ago

Some "connect" phrases currently reference the MLOperand, and some reference the platform operand instead

When "connect" references the platform operand, such as relu(input) "3. Connect input.[[operand]] as input to opImpl.".

I suppose here it means a process of calling the native platform API to "connect" the platform operand to platform operator as input (or output). Actually, I think it should be a sub-step of "2. Make a request to the underlying platform to:".

referencing the MLOperands consistently would simplify things.

Agreed. If we go this route, I think we should also define the "operator" concept within the spec. For a WebNN graph, MLOperands are edges, they need to be connected to nodes (operators), to construct a graph.

a-sully commented 9 months ago

I think it might be useful to think of these validation steps in tiers:

  1. Platform-agnostic, op-agnostic checks

Currently this is specified as validate MLOperand and includes checks like that MLOperands must be created from the same MLGraphBuilder to be connected; though as @inexorabletash hinted at, there may be more steps we want to throw into this bucket

  2. Platform-agnostic, op-specific checks

These steps should also be run (synchronously) for each MLGraphBuilder method. This includes checks like ensuring that constraints such as broadcasting requirements are met and the passed-in MLActivation is valid for the given op, as well as that the shapes expected by the MLOperands being connected are valid.

  3. Platform-specific, op-specific checks

This is where we finally talk to the platform API:

I suppose here it means a process of calling the native platform API to "connect" the platform operand to platform operator as input (or output)

I propose that (1) and (2) should be run (synchronously) for each MLGraphBuilder method (and should throw a TypeError), while (3) should be deferred until build() (and run asynchronously). If we agree on this, then we should update each MLGraphBuilder method to follow this (very rough) pattern:


The foo(input, options) method steps are:

  1. Validate an MLOperand given input and this (else throw a TypeError)
  2. Validate foo given options:
    1. Check options are valid (else throw a TypeError)
    2. etc...
  3. Connect with input:
    1. Check shapes are valid (else throw a TypeError)
    2. etc...
    3. Add to input's outputs

Or maybe:

The foo(input, options) method steps are:

  1. Validate an MLOperand given input and this (else throw a TypeError)
  2. Validate foo given input and options:
    1. Check options are valid (else throw a TypeError)
    2. Check shapes are valid (else throw a TypeError)
    3. etc...
  3. Connect with input:
    1. etc...
    2. Add to input's outputs

Ideally (though there might be some quirks which make this not feasible) none of the MLGraphBuilder methods would require implementation-defined behavior, and all the "underlying platform" steps are pushed to build() (which currently also contains "connect" steps)
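The proposed split can be sketched as follows (a minimal mock, not the real API — SketchBuilder, foo(), and the axis option are invented for illustration): tiers (1) and (2) run synchronously in the builder method and throw TypeError; tier (3) is deferred to build().

```javascript
// Sketch of the proposed validation split.

class SketchBuilder {
  constructor() {
    this.pendingOps = [];
  }

  foo(input, options = {}) {
    // (1) Platform-agnostic, op-agnostic: same-builder check.
    if (input.builder !== this) throw new TypeError("wrong builder");
    // (2) Platform-agnostic, op-specific: e.g. option/shape checks.
    if (options.axis !== undefined && options.axis < 0) {
      throw new TypeError("invalid axis");
    }
    // Record the op; no platform calls happen here.
    const output = { builder: this };
    this.pendingOps.push({ name: "foo", input, options, output });
    return output;
  }

  async build() {
    // (3) Platform-specific, op-specific checks would run here,
    // asynchronously, when the graph is actually compiled.
    return { ops: this.pendingOps.length };
  }
}

const sb = new SketchBuilder();
const out = sb.foo({ builder: sb });
```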

Thoughts?

huningxin commented 9 months ago

@a-sully , your proposal sounds good to me.

We may also consider validating the input's data type synchronously in the builder methods; that would be an op-specific check, as captured by https://github.com/webmachinelearning/webnn/issues/283.

inexorabletash commented 9 months ago

FYI, we asked around for precedent for labeling blocks of substeps. https://html.spec.whatwg.org/multipage/webappapis.html#update-the-rendering has some examples (italics before a clause), and our handy spec experts are not opposed to doing so if it improves readability. I like the idea of the "validate" and "connect" blocks, maybe "calculate output shape" as a block as well.

huningxin commented 9 months ago

maybe "calculate output shape" as a block as well.

+1

inexorabletash commented 9 months ago

Building on the conversation above, here's a sketch of something concrete. If this looks like a decent starting point I can put up a PR for detailed nitpicks.

Programming Model / Overview

Replace bits about operands with the following:

The MLGraph interface represents a compiled computational graph that is immutable (that is, a model).

The MLGraphBuilder interface serves as a builder (factory) to construct the computation graph that is then compiled to create an MLGraph.

In WebNN, the computational graph is composed of operators which act on data, and which are the nodes of the graph. An operator's inputs are one or more MLOperands representing the data flow for the computation; these operands are the edges of the graph. Operands include input values, constants (including trained weights), and intermediate values (often referred to as activations). An operator's outputs are one or more MLOperands, representing the output values of the computation. Operators have operator-specific parameters that control their behavior, which can include zero or more activation functions, which are MLActivations. The operations have functional semantics, with no side effects. Each operation invocation conceptually returns a distinct new value, without changing the value of any other MLOperand.

A key part of the MLGraphBuilder interface is its methods, such as gemm() and softmax(), which create an operator representing the actual operation to perform on the input data when the computation is run, and return a new MLOperand or MLActivation holding that operator. Methods that create an MLOperand connect any inputs and activations to the operator.

Insert the following near the build() description:

The MLGraph is composed of platform operators and platform operands which correspond to the operators and MLOperands, but are not script-visible and MAY be compositions or decompositions of the graph as constructed by script.

MLActivation interface

Update to slot description:

[[operator]] of type operator Reference to MLActivation's corresponding operator.

MLOperand interface

Update internal slot description:

[[operator]] of type operator Reference to MLOperand's corresponding operator.

And delete [[operand]]

Standard MLGraphBuilder Operand-vending Method Structure

The whatever(input, options) method steps are:

  1. Validate arguments:
    1. If input.[[builder]] is not this, then throw a "DataError" DOMException.
    2. If options.scales does not exist, set it to the list « 1.0, 1.0 ».
    3. If options.activation exists and options.activation.[[builder]] is not this, then throw a "DataError" DOMException.
  2. Calculate the output shape:
    1. Let outputDescriptor be a new MLOperandDescriptor.
    2. Set outputDescriptor.dataType to input’s dataType.
    3. Set outputDescriptor.dimensions to the result of unidirectionally broadcasting input’s shape and options.newShape. If that returns failure, then throw a "DataError" DOMException.
  3. Make graph connections:
    1. Let op be an operator for the "whatever" operation, given options.
    2. Let output be the result of creating an MLOperand given this and outputDescriptor.
    3. If options.activation exists, then add it to op's activation functions.
    4. Set output.[[operator]] to op.
    5. Set op's input to input.
    6. Set op's output to output.
  4. Return output.
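For illustration, the method structure above could look roughly like this in JavaScript (a sketch, not the spec: the Builder class and broadcastShapes helper are invented, TypeError stands in for the "DataError" DOMException, and broadcasting is simplified to a right-aligned, stretch-1 rule):

```javascript
// Simplified stand-in for the spec's unidirectional broadcast:
// align from the right; a dimension of 1 in shapeB stretches to
// match shapeA. Returns null on failure.
function broadcastShapes(shapeA, shapeB) {
  if (shapeB.length > shapeA.length) return null;
  for (let i = 0; i < shapeB.length; i++) {
    const a = shapeA[shapeA.length - 1 - i];
    const b = shapeB[shapeB.length - 1 - i];
    if (b !== a && b !== 1) return null;
  }
  return shapeA.slice();
}

class Builder {
  whatever(input, options = {}) {
    // 1. Validate arguments.
    if (input.builder !== this) throw new TypeError("wrong builder");
    if (options.scales === undefined) options.scales = [1.0, 1.0];
    // 2. Calculate the output shape.
    const dims = broadcastShapes(input.shape, options.newShape ?? input.shape);
    if (dims === null) throw new TypeError("shapes not broadcastable");
    // 3. Make graph connections.
    const op = { kind: "whatever", options, inputs: [], outputs: [] };
    const output = { builder: this, shape: dims, operator: op };
    op.inputs.push(input);
    op.outputs.push(output);
    // 4. Return output.
    return output;
  }
}

const bld = new Builder();
const inp = { builder: bld, shape: [2, 3] };
const res = bld.whatever(inp, { newShape: [1, 3] });
```

Note how all three labeled blocks stay synchronous; nothing here touches the underlying platform.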
a-sully commented 9 months ago

This looks great to me!

There are a couple of things I notice about this framework which we should follow up on elsewhere:

huningxin commented 9 months ago

Thanks @inexorabletash ! The proposal looks great to me.

  • These define operator as an internal concept

Could you please elaborate a bit how operator internal concept is defined? Is it an internal interface definition for a specific operation? For example, something like Chromium prototype's struct Conv2d mojo definition?

inexorabletash commented 9 months ago

Could you please elaborate a bit how operator internal concept is defined? Is it an internal interface definition for a specific operation? For example, something like Chromium prototype's struct Conv2d mojo definition?

As sketched above, it's just describing the abstract concept of an operator. Like algorithms and internal slots, these spec concept objects are useful for defining behavior, but aren't script visible so only matching the observable behavior is required. But they very often do map 1:1 with implementations. That Conv2d example from the Chromium prototype impl is what this would map to.

inexorabletash commented 8 months ago

Concrete remaining work following #591

And then for these:

... in practice, these end up being empty for many ops, and very intertwined for others; i.e. it's hard to separate argument validation from output shape calculation. Is this actually valuable? Opinions (and PRs) welcome!

inexorabletash commented 6 months ago

For the "remaining work", namely:

My suggestion is: add as needed, e.g. when a method's steps could be broken up to improve readability. But adding to all methods would just be clutter.