migraphx-benchmark / AMDMIGraphX

AMD's graph optimization engine.
https://rocmsoftwareplatform.github.io/AMDMIGraphX/doc/html/
MIT License

MatMulInteger op is partially supported #159

Closed gyulaz-htec closed 6 months ago

gyulaz-htec commented 7 months ago

The MatMulInteger op can accept different input types for its T1 and T2 inputs, but MIGraphX only accepts int8 input tensors. MIGraphX handles MatMulInteger in two files: parse_matmul.cpp and quant_dot.hpp. The latter enforces that T1 and T2 have the same type, which the ONNX spec does not require.
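For illustration, here is a minimal sketch (assuming the `onnx` Python package; shapes and the output file name are arbitrary) of a model that is valid per the ONNX MatMulInteger spec but mixes the T1/T2 element types, which is exactly the case quant_dot.hpp currently rejects:

```python
import onnx
from onnx import TensorProto, helper

# MatMulInteger with A as uint8 (T1) and B as int8 (T2), plus both optional zero points.
node = helper.make_node(
    "MatMulInteger",
    inputs=["A", "B", "a_zero_point", "b_zero_point"],
    outputs=["Y"],
)

graph = helper.make_graph(
    [node],
    "matmulinteger_mixed_types",
    inputs=[
        helper.make_tensor_value_info("A", TensorProto.UINT8, [4, 8]),
        helper.make_tensor_value_info("B", TensorProto.INT8, [8, 16]),
        helper.make_tensor_value_info("a_zero_point", TensorProto.UINT8, []),
        helper.make_tensor_value_info("b_zero_point", TensorProto.INT8, []),
    ],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.INT32, [4, 16])],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)  # mixed uint8/int8 inputs are legal per the spec
onnx.save(model, "matmulinteger_mixed.onnx")
```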

I came across this while trying to compile the BERT-Squad int8 model from the ONNX model zoo, using the changes from this PR:

migraphx-driver compile text/machine_comprehension/bert-squad/model/bertsquad-12-int8.onnx

It fails with the following error:

what(): /code/AMDMIGraphX/src/include/migraphx/check_shapes.hpp:210: same_type: quant_dot: Types do not match

Additionally, the optional a_zero_point and b_zero_point inputs are not handled in MIGraphX (see the reference sketch below).

The list of actions required:
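For reference, the ONNX spec defines MatMulInteger as an int32 matrix product of the zero-point-shifted inputs. The sketch below (plain NumPy, not MIGraphX code; the function name and test data are illustrative) shows what handling a_zero_point and b_zero_point amounts to:

```python
import numpy as np

def matmul_integer(A, B, a_zero_point=0, b_zero_point=0):
    # Subtract each zero point, then accumulate the matrix product in int32,
    # matching the ONNX MatMulInteger reference semantics.
    a = A.astype(np.int32) - np.int32(a_zero_point)
    b = B.astype(np.int32) - np.int32(b_zero_point)
    return a @ b  # int32 output, as required by the spec

A = np.random.randint(0, 256, size=(4, 8), dtype=np.uint8)
B = np.random.randint(-128, 128, size=(8, 16), dtype=np.int8)
Y = matmul_integer(A, B, a_zero_point=128, b_zero_point=0)
```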

gyulaz-htec commented 6 months ago

Fixed in https://github.com/ROCm/AMDMIGraphX/pull/2513