For element value ties, PyTorch chooses the last index whereas TensorFlow chooses the first index, and because WebNN can be called by any front end, it is valuable for WebNN to be able to accommodate either. It sounds like for CoreML, it either chooses:

(a) an arbitrary index nondeterministically, or
(b) some consistent index, though which one is undocumented.
Can you experiment to see which happens? e.g. Given input `[0, 1, -5, 3, -5, 4]` and `axis = 0`, does the argMin operation pick the -5 at index 2 or the one at index 4?
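As a reference point only (NumPy on CPU, not CoreML, which is the open question here), the same experiment looks like this; NumPy documents first-occurrence behavior on ties:

```python
# Reference point only (NumPy, not CoreML): numpy.argmin is documented to
# return the index of the first occurrence when the minimum value is tied.
import numpy as np

x = np.array([0, 1, -5, 3, -5, 4])  # the tied minimum -5 sits at indices 2 and 4
print(np.argmin(x))                 # prints 2 (the first occurrence)
```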
With that info (assuming it is deterministic (b)), it should be possible (not as efficiently, but possible) to use a reversal along the active dimension followed by an inversion of the indices. Say CoreML selected index 2 with `selectLastIndex = false`; then when `selectLastIndex = true`, it's the same as passing the reversed input `[4, -5, 3, -5, 1, 0]` to get index 1, followed by a `sub(5, indices)` to adjust the index to 4.
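A minimal NumPy sketch of that emulation (`np.argmin` and `np.flip` stand in for the backend's argMin and the reversal along the reduced axis; the helper name `argmin_last` is just illustrative):

```python
# Emulating selectLastIndex = true on a backend whose argMin returns the
# FIRST tied index: reverse along the reduced axis, take argMin, then map
# the index back with (axisLength - 1) - index.
import numpy as np

def argmin_last(x, axis):
    axis_length = x.shape[axis]
    reversed_x = np.flip(x, axis=axis)
    return (axis_length - 1) - np.argmin(reversed_x, axis=axis)

x = np.array([0, 1, -5, 3, -5, 4])
print(np.argmin(x))       # 2 -> behaves like selectLastIndex = false
print(argmin_last(x, 0))  # 4 -> behaves like selectLastIndex = true
```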
> How much do models actually depend on this?
🤔 That I'm unsure about, but in a list of equal probabilities it could make the difference between presenting one label to the user vs another. Granted, floating point math is imprecise anyway, and you can't expect the exact same labels even from the same version of the same library on different computers, but it would make a bigger difference on integer inputs.
During my local testing, argmax always returns the smaller index, although I don't think we should rely on that behavior since the documentation clearly states it's not guaranteed. @mwyrzykowski do you know if argmax is guaranteed to return the smaller index?
@philloooo It is not guaranteed and may change
> @philloooo It is not guaranteed and may change
Shoot, so it sounds like CoreML's tie-breaker behavior is undocumented and nondeterministic. So some options include:

`min(argMin(x), sub(axisLength - 1, argMin(reverse(x, ...))))`
or `max(argMin(x), sub(axisLength - 1, argMin(reverse(x, ...))))`

to respectively select the lowest or the highest index. Though, this degree of overhead for an uncommon case warrants a potential 3rd enum value of "don't care". So rather than the current boolean select-first-index-upon-ties and select-last-index-upon-ties, you'd also have select-indeterminate-index-upon-ties, which should probably just be the default value unless the caller needs one over the other. The quandary here is what happens for a three-way tie? If you have `argMin([2, 1, 1, 1, 3])`, does it return index 1, 2, or 3? If 1 or 3, the above code would work, but if ever 2 (the middle index), then I see no way to even emulate it. Any others come to mind?
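To make that concrete, here's a small sketch of the composition above in NumPy (`np.argmin` and `np.flip` as stand-ins for the backend ops; `lowest_tied_index` is just an illustrative name), including why a middle-index result would defeat it:

```python
# min()/max() composition to force the lowest (or highest) tied index, given
# a backend argMin that consistently picks either the first or the last tied
# index (unknown which).
import numpy as np

def lowest_tied_index(x):
    n = len(x)
    forward = np.argmin(x)                      # some tied index
    backward = (n - 1) - np.argmin(np.flip(x))  # some tied index, mapped back from the reversed view
    return min(forward, backward)               # use max(...) to pick the highest instead

x = np.array([2, 1, 1, 1, 3])   # three-way tie for the minimum at indices 1, 2, 3
print(lowest_tied_index(x))     # 1 here, because NumPy itself picks the first tied index

# If the backend ever returned the MIDDLE tied index (2) for both the forward
# and the reversed call, min()/max() of the two candidates would also be 2,
# so neither the lowest (1) nor the highest (3) index could be recovered.
```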
Given the feedback from @mwyrzykowski that CoreML won't be able to have deterministic behavior, I think the practical options from @fdwr's list are:

1. Keep selectLastIndex as-is and have backends that can't honor it (e.g. CoreML) ignore it.
2. Remove selectLastIndex and document that the index returned on ties is implementation-defined.
Neither is great, but the second option seems more consistent to me?
> Neither is great, but the second option seems more consistent to me?
I would also prefer the second option, otherwise we end up with a situation where something may work on one platform and fail on another, which is not great for portability.
> I think the practical options from @fdwr's list are ...
> I would also prefer the second option
Well, notice that even with the second option, we still end up in a situation where something may work on one platform and not work as expected on another, as someone may test on Chromium atop XNNPack/TfLite (which chooses the first tie-breaker and may match their expectations) and then test atop Safari via CoreML (which might choose the opposite tie-breaker). It's just that with the 2nd option, we call out that it's "implementation defined" to better set expectations. However, I agree that of those two options, it's better to remove the flag and document that it's indeterminate than to have a flag which is ignored because the backend cannot implement it. Additionally, I'm not aware of any models (yet) where it matters, and we could re-add a tie-breaker-first/last enum in the future if we find a case that matters (assuming we also find a way to emulate it in CoreML). So then...?
```diff
 dictionary MLArgMinMaxOptions {
   sequence<[EnforceRange] unsigned long> axes;
   boolean keepDimensions = false;
-  boolean selectLastIndex = false;
+  MLOperandDataType outputDataType; // https://github.com/webmachinelearning/webnn/issues/653
 };
```
Another decision is what the default tie index should be for backends that support both directions like the DML backend:

1. first index (TensorFlow)
2. last index (PyTorch)
The existing DML backend argMin/argMax implementation would need to pick one or the other. 🤔
Assuming the tflite backend behaves the same way as TensorFlow, then go with 1?
> The existing DML backend argMin/argMax implementation would need to pick one or the other. 🤔
We could also flip a coin on each call to pick between the options, so that even on a given platform the behavior is indeterminate. I think we've actually done that for some APIs to try to prevent burn in. But I'm not seriously suggesting it here. :wink:
@fdwr
> 2. last index (PyTorch)
If I read it correctly, torch.argmax would return the first index?
> NOTE: If there are multiple maximal values then the indices of the first maximal value are returned.
A simple experiment looks like it aligns with that:

```python
>>> import torch
>>> a = torch.tensor([1, 2, 5, 3, 4, 5])
>>> torch.argmax(a)
tensor(2)
```
Same for torch.argmin.
> If I read it correctly, torch.argmax would return the first index?
@huningxin - Interesting. I wonder if behavior changed at some point, given Lara's comment here: https://github.com/onnx/onnx/pull/2461

Well, that makes the default for the DML backend clearer.
Aha, PyTorch breaking change: https://github.com/pytorch/pytorch/pull/42004
For argmax/min on CoreML:
> In case of ties, the identity of the return value is not guaranteed.
The current spec seems to indicate that if selectLastIndex is not passed, the first index will be chosen. This can't be satisfied by the CoreML backend.
How much do models actually depend on this? Do we have examples where `selectLastIndex` is useful? If not, I'd suggest removing this parameter.