Following up from the working group meeting, I am happy to explore option 2 if @fdwr can provide examples on when int64 is useful.
I didn't fully understand your gather example, because I actually don't understand why indices allow both int64 and uint32. The indices should point to valid indices that are within the MLOperand's dimensions, right? And the dimensions are uint32...
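For concreteness, a minimal sketch of what I mean (method and descriptor member names are approximate; assume `builder` is an MLGraphBuilder): uint32 indices can already address every valid position along the gathered axis, because the dimensions themselves are uint32.

```js
// Minimal sketch: gather rows of a [4, 3] tensor with uint32 indices.
// Since MLOperand dimensions are uint32, uint32 indices are enough to
// address any valid position along the gathered axis.
const input = builder.input('input', {dataType: 'float32', dimensions: [4, 3]});
const indices = builder.constant(
    {dataType: 'uint32', dimensions: [2]},
    new Uint32Array([0, 2]));
const output = builder.gather(input, indices, {axis: 0});  // shape: [2, 3]
```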
Mirroring this table here too for historical reference:
API | uint32 | int32 | uint64 | int64 | Link |
---|---|---|---|---|---|
DML | uint32 | int32 | uint64 | int64 | DML_ARGMIN_OPERATOR_DESC |
TF output_type | - | int32 | - | int64 | tf.math.argmin |
CoreML | - | int32 | - | - | reduction.reduce_argmin |
MLIR TOSA | uint32 | - | - | - | tosa.argmax "signless integer" |
ONNX | - | - | - | int64 | ArgMin |
PyTorch | - | - | - | int64 | torch.argmin |
NumPy | - | - | - | int64 | numpy.argmin |
Currently WebNN specifies that argMin/argMax return int64. Returning int64 can't be emulated on CoreML.
Given that int32 is the intersection across these backends, I see two options to make it work:
a. Update argMin/argMax to always output int32.
b. At the spec level, allow passing an `output_type` param, and allow probing the supported data types for this param (see the sketch below).
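For illustration, a rough sketch of what option (b) could look like in script. The `outputDataType` option name, the `opSupportLimits()` probing entry point, and the argMin signature are placeholders for discussion, not proposed spec text:

```js
// Hypothetical shape of option (b): probe which output data types the
// backend supports for argMin, then request one explicitly.
const limits = context.opSupportLimits();            // placeholder probing API
const supported = limits.argMin.output.dataTypes;    // e.g. ['int32'] on CoreML

// Fall back to int32 where int64 is not supported.
const outputDataType = supported.includes('int64') ? 'int64' : 'int32';
const result = builder.argMin(input, {axes: [0], outputDataType});
```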