webmachinelearning / webnn

🧠 Web Neural Network API
https://www.w3.org/TR/webnn/

ArgMax/Min `selectLastIndex` is not supported on CoreML #652

Closed: philloooo closed this issue 4 months ago

philloooo commented 7 months ago

For argmax/argmin on CoreML, the documentation states: "In case of ties, the identity of the return value is not guaranteed."

The current spec seems to indicate that if selectLastIndex is not passed, the first index will be chosen. This can't be satisfied by the CoreML backend.

How much do models actually depend on this? Do we have examples where selectLastIndex is useful?

If not, I'd suggest removing this parameter.

fdwr commented 7 months ago

For element value ties, PyTorch chooses the last index whereas TensorFlow chooses the first index, and because WebNN can be called by any front end, it's valuable for WebNN to be able to accommodate either. It sounds like for CoreML, it either:

  (a) picks a tie index nondeterministically, or
  (b) picks one deterministically but just doesn't document which.

Can you experiment to see which happens? e.g. given input [0, 1, -5, 3, -5, 4] and axis = 0, does argMin pick the -5 at index 2 or the one at index 4?

With that info (assuming it is deterministic, case (b)), it should be possible (not as efficiently, but possible) to emulate the other tie-break with a reverse along the active dimension followed by an inversion of the indices. Say CoreML selects index 2 with selectLastIndex = false; then with selectLastIndex = true, it's the same as passing the reversed input [4, -5, 3, -5, 1, 0] to get index 1, followed by a sub(5, indices) to adjust the index back to 4.
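
A rough NumPy sketch of that reverse-and-adjust idea (NumPy purely for illustration; a real WebNN emulation would build the same shape from a reverse of the input plus a subtract on the resulting indices, and this assumes CoreML's first-index behavior really is deterministic):

import numpy as np

# Hypothetical emulation of selectLastIndex = true on top of a backend that
# only offers first-index tie-breaking: reverse the active axis, take argmin,
# then map the resulting indices back to the original orientation.
def argmin_last_index(x, axis=0):
    reversed_x = np.flip(x, axis=axis)            # [4, -5, 3, -5, 1, 0]
    first_idx = np.argmin(reversed_x, axis=axis)  # 1 (first -5 in the reversed input)
    return (x.shape[axis] - 1) - first_idx        # 5 - 1 = 4 (last -5 in the original)

x = np.array([0, 1, -5, 3, -5, 4])
print(np.argmin(x))          # 2 (first-index tie-break)
print(argmin_last_index(x))  # 4 (emulated last-index tie-break)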

How much do models actually depend on this?

🤔 That I'm unsure about, but it could mean the difference, in a list of equal probabilities, between presenting one label to the user vs. another. Granted, floating point math is imprecise anyway, and even with the same version of the same library you can't expect the exact same labels across different computers, but it would make a bigger difference on integer inputs.

philloooo commented 7 months ago

In my local testing, argmax always returns the smaller index, although I don't think we should rely on that behavior, since the documentation clearly states it's not guaranteed. @mwyrzykowski do you know if argmax is guaranteed to return the smaller index?

mwyrzykowski commented 7 months ago

@philloooo It is not guaranteed and may change

fdwr commented 7 months ago

@philloooo It is not guaranteed and may change

Shoot, so it sounds like CoreML's tie-breaker behavior is undocumented and nondeterministic. So some options include:

  1. emulate selectLastIndex via the reverse trick above (requires CoreML's tie-break to be deterministic, which it evidently isn't)
  2. keep selectLastIndex but have backends that cannot honor it (like CoreML) ignore it
  3. remove selectLastIndex and document the tie-breaking behavior as implementation defined

Any others come to mind?

philloooo commented 4 months ago

Given the feedback from @mwyrzykowski that CoreML won't be able to provide deterministic behavior, I think the practical options from @fdwr's list are:

  1. keep selectLastIndex but have CoreML ignore it
  2. remove selectLastIndex and leave the tie-breaking behavior implementation defined

Neither is great, but the second option seems more consistent to me?

mwyrzykowski commented 4 months ago

Neither is great, but the second option seems more consistent to me?

I would also prefer the second option, otherwise we end up with a situation where something may work on one platform and fail on another, which is not great for portability.

fdwr commented 4 months ago

I think the practical options from @fdwr 's list are ... I would also prefer the second option

Well, notice that even with the second option we still end up in a situation where something may work on one platform and not work as expected on another: someone may test on Chromium atop XNNPack/TFLite (which chooses the first index and may match their expectations) and then test atop Safari via CoreML (which might choose the opposite tie-break). It's just that with the second option we call out that the behavior is "implementation defined", which better sets expectations.

However, I agree that of those two options it's better to remove the flag and document that the tie-break is indeterminate than to have a flag which is ignored because the backend cannot implement it. Additionally, I'm not aware of any models (yet) where it matters, and we could re-add a tie-breaker first/last enum in the future if we find a case that matters (assuming we also find a way to emulate it in CoreML). So then...?

dictionary MLArgMinMaxOptions {
  sequence<[EnforceRange] unsigned long> axes;
  boolean keepDimensions = false;
- boolean selectLastIndex = false;
+ MLOperandDataType outputDataType; // https://github.com/webmachinelearning/webnn/issues/653
};
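
As an aside on what "implementation defined" would mean for tests: conformance checks would have to accept either tie-break rather than a single expected index. A hypothetical check, sketched with NumPy for illustration:

import numpy as np

# Hypothetical conformance check: with tie-breaking implementation defined,
# accept any index that actually holds the minimum value.
def assert_valid_argmin_index(x, index):
    assert x[index] == np.min(x), f"index {index} does not hold the minimum"

x = np.array([0, 1, -5, 3, -5, 4])
assert_valid_argmin_index(x, 2)  # a first-index backend (e.g. TFLite) passes
assert_valid_argmin_index(x, 4)  # a last-index backend would pass too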

Another decision is what the default tie index should be for backends that support both directions like the DML backend:

  1. first index (TensorFlow, and possibly CoreML too, albeit undocumented and not guaranteed)
  2. last index (PyTorch) (update: or maybe not anymore, per Ningxin's comment below)

The existing DML backend argMin/argMax implementation would need to pick one or the other. 🤔

philloooo commented 4 months ago

Assuming the TFLite backend behaves the same way as TensorFlow, then go with 1?

inexorabletash commented 4 months ago

The existing DML backend argMin/argMax implementation would need to pick one or the other. 🤔

We could also flip a coin on each call to pick between the options, so that even on a given platform the behavior is indeterminate. I think we've actually done that for some APIs to try to prevent burn-in. But I'm not seriously suggesting it here. 😉

huningxin commented 4 months ago

@fdwr

2. last index (PyTorch)

If I read it correctly, torch.argmax would return the first index?

NOTE If there are multiple maximal values then the indices of the first maximal value are returned.

A simple experiment appears to align with that:

>>> a = torch.tensor([1, 2, 5, 3, 4, 5])
>>> torch.argmax(a)
tensor(2)

Same for torch.argmin.
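
A corresponding argmin check (expected output shown, assuming the same documented first-index rule applies to torch.argmin):

>>> b = torch.tensor([5, 1, 3, 1, 4])
>>> torch.argmin(b)
tensor(1)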

fdwr commented 4 months ago

If I read it correctly, torch.argmax would return the first index?

@huningxin - Interesting. I wonder if the behavior changed at some point, given Lara's comment here: https://github.com/onnx/onnx/pull/2461

Well, that makes the default for the DML backend clearer.

Aha, PyTorch breaking change: https://github.com/pytorch/pytorch/pull/42004