Open jiyoungAn opened 2 years ago
Currently this operator only supports float32:
https://github.com/onnx/onnx/blob/main/docs/Operators.md#NonMaxSuppression
You are more than welcome to implement a float16 version.
Another solution is to exclude the operator from the float16 conversion by passing op_block_list=['NonMaxSuppression']
to the following function:
https://github.com/microsoft/onnxconverter-common/blob/0a401de9ee410bf3f65fb3dd3d13d4eab7e91a10/onnxconverter_common/float16.py#L91
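A minimal sketch of that approach (the file names below are just placeholders for your own paths):

```python
import onnx
from onnxconverter_common import float16

# Load the original float32 model (placeholder path).
model = onnx.load("tiny_yolov3.onnx")

# Convert the graph to float16, but keep NonMaxSuppression in float32,
# since that operator does not accept tensor(float16) inputs.
model_fp16 = float16.convert_float_to_float16(
    model,
    op_block_list=['NonMaxSuppression'],
)

onnx.save(model_fp16, "tiny_yolov3_fp16.onnx")
```

The converter should insert Cast nodes around the blocked operator, so the surrounding float16 tensors are cast back to float32 at the NonMaxSuppression inputs.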
Hi everyone,
I converted the yolov3-tiny model to float16 and ran the model in onnxruntime,
but loading it fails with this error: InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./tiny_yolov3_fp16.onnx failed:This is an invalid model. Type Error: Type 'tensor(float16)' of input parameter (yolo_evaluation_layer_1/concat_6:0_btc) of operator (NonMaxSuppression) in node (yolonms_layer_1/non_max_suppression/NonMaxSuppressionV3) is invalid.
Is there a way to fix this NonMaxSuppression problem? I would really appreciate any ideas. Thank you for your time.
Model file: tiny_yolov3_fp16.zip