Many quantized hexagon_nn ops have a _d32 variant. What is the d32 format, and how does it differ from the "flat" format?
e.g.
QuantizedAdd_8p8to8 - elementwise add of Input A and Input B (flat format)
0: Input A data (quint8 tensor)
1: Input B data (quint8 tensor)
...
0: Output data (quint8 tensor)
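For context, the usual definition of a quantized elementwise add (two quint8 inputs with independent real-valued ranges, requantized into the output's range) can be sketched as below. This is only an illustration of the dequantize-add-requantize semantics, not the actual hexagon_nn kernel, and the function names are mine:

```python
def dequant(code, lo, hi):
    # Map a quint8 code (0..255) to a real value in [lo, hi].
    return lo + code * (hi - lo) / 255.0

def requant(x, lo, hi):
    # Map a real value back to a quint8 code, saturating at 0 and 255.
    code = round((x - lo) * 255.0 / (hi - lo))
    return max(0, min(255, code))

def quantized_add_8p8to8(a, a_rng, b, b_rng, out_rng):
    # Reference semantics for an elementwise quantized add over flat
    # quint8 data (plain lists here): dequantize both inputs using
    # their own (min, max) ranges, add, then requantize into out_rng.
    return [requant(dequant(x, *a_rng) + dequant(y, *b_rng), *out_rng)
            for x, y in zip(a, b)]
```

For example, adding two tensors that each span the real range [0, 1] into an output range of [0, 2] maps code 255 + code 255 to code 255 again.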
QuantizedAdd_8p8to8_d32 - Elementwise Add; inputs and output are in d32 format
0: Input A data (quint8 tensor)
1: Input B data (quint8 tensor)
...
0: Output data (quint8)
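My working assumption (hedged, since this is exactly what the question asks) is that "d32" refers to a depth-tiled layout: the depth axis is split into chunks of 32 bytes so each chunk sits contiguously for the HVX vector units, whereas "flat" is plain dense NHWC. A minimal sketch of that tiling, assuming chunk-of-32 grouping with zero padding; the real hexagon_nn layout also pads height/width and fixes a specific dimension order that this sketch does not model:

```python
def flat_to_d32(data, depth):
    # Sketch: split a flat depth-major buffer (list of quint8 codes,
    # layout ...xD) into depth chunks of 32, zero-padding the depth
    # axis up to a multiple of 32. This models only the depth tiling,
    # not hexagon_nn's full padded layout.
    padded = -(-depth // 32) * 32  # round depth up to a multiple of 32
    tiled = []
    for row_start in range(0, len(data), depth):
        row = data[row_start:row_start + depth] + [0] * (padded - depth)
        tiled.append([row[i:i + 32] for i in range(0, padded, 32)])
    return tiled
```

Under this assumption, a pixel with depth 40 becomes two 32-byte chunks: the first holds channels 0..31 and the second holds channels 32..39 plus 24 bytes of zero padding.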
Sorry for posting my question as an issue