Describe the issue
A simple model containing GEMM(DQ(Q(input0)), DQ(Q(input1))), quantizing FP32 -> FP8E4M3, fails to run on the CPU EP but runs on the CUDA EP. An identical model using FP16 -> FP8E4M3 quantization runs on either the CPU EP or the CUDA EP.
Reported error: onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Type Error: Type 'tensor(float8e4m3fn)' of input parameter (/self_attention/proj/TRT_FP8QuantizeLinear_output_0) of operator (QGemm) in node (/self_attention/proj/Gemm) is invalid.
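For context, here is a minimal sketch of the described pattern, built with onnx.helper at opset 19. This is not the attached model; the names, shapes, and scale values are illustrative:

```python
# Hypothetical sketch of the failing pattern, NOT the attached model:
# GEMM(DQ(Q(A)), DQ(Q(B))) with FP32 -> FP8E4M3 Q/DQ pairs (opset 19).
import onnx
from onnx import TensorProto, helper

scale = helper.make_tensor("scale", TensorProto.FLOAT, [], [1.0])
# FP8 zero point; byte 0x00 encodes 0.0 in E4M3FN. Its element type is what
# makes QuantizeLinear produce a float8e4m3fn output at opset 19.
zp = helper.make_tensor("zp", TensorProto.FLOAT8E4M3FN, [], b"\x00", raw=True)

nodes = [
    helper.make_node("QuantizeLinear", ["A", "scale", "zp"], ["A_q"]),
    helper.make_node("DequantizeLinear", ["A_q", "scale", "zp"], ["A_dq"]),
    helper.make_node("QuantizeLinear", ["B", "scale", "zp"], ["B_q"]),
    helper.make_node("DequantizeLinear", ["B_q", "scale", "zp"], ["B_dq"]),
    helper.make_node("Gemm", ["A_dq", "B_dq"], ["Y"]),
]
graph = helper.make_graph(
    nodes, "fp8_qdq_gemm",
    [helper.make_tensor_value_info("A", TensorProto.FLOAT, [4, 4]),
     helper.make_tensor_value_info("B", TensorProto.FLOAT, [4, 4])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4, 4])],
    initializer=[scale, zp],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])
onnx.checker.check_model(model)
onnx.save(model, "fp8_qdq_gemm_repro.onnx")
```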
To reproduce
See the attached model and script: ort_bug_fp8_fp32_gemm.zip
Usage: python test_ort.py ort_bug_gemm_fp8_fp32.onnx
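For reference, a minimal runner along these lines (hypothetical; the attached test_ort.py may differ, and this assumes fixed-shape FP32 inputs):

```python
# Run the model on the CPU and CUDA EPs and report success/failure.
# On the CPU EP the InvalidGraph error is raised at session creation,
# during graph optimization, before any inputs are fed.
import sys
import numpy as np
import onnxruntime as ort

model_path = sys.argv[1]
for provider in ("CPUExecutionProvider", "CUDAExecutionProvider"):
    try:
        sess = ort.InferenceSession(model_path, providers=[provider])
        feeds = {i.name: np.random.rand(*i.shape).astype(np.float32)
                 for i in sess.get_inputs()}
        sess.run(None, feeds)
        print(provider, "-> OK")
    except Exception as exc:
        print(provider, "-> FAILED:", exc)
```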
Urgency
No response
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.19.2
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response