Open dhuib2016 opened 4 years ago
It looks like you are using 80 classes; change the number of filters from 18 to 255 (filters = (5 + classes) * 3).
@Ndron Where do I change this?
@gzchenjiajun In your config file, in the [convolutional] layer right before each [yolo] layer (3 of them). In my .cfg these are lines 603, 689, and 776.
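For reference, the block to edit looks roughly like this (a sketch for a 6-class model, so filters = (5 + 6) * 3 = 33; the anchors and exact line numbers vary with your .cfg):

```ini
[convolutional]
size=1
stride=1
pad=1
filters=33        # (5 + classes) * 3 -- change this line, not the [yolo] block's filters
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=6         # set this to your number of classes
num=6
```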
When I run onnx_to_tensorrt.py, I get this error:

Traceback (most recent call last):
  File "onnx_to_tensorrt.py", line 226, in <module>
    main()
  File "onnx_to_tensorrt.py", line 204, in main
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
ValueError: cannot reshape array of size 43095 into shape (1,18,13,13)

My running environment: Python 2.7, onnx 1.4.1
Did you solve this problem? I have the same question.
@git-manager Did you solve this problem?
Overriding the output_shapes on line 171 of onnx_to_tensorrt.py gave me valid results:
output_shapes = [(batch_size, 255, 13, 13), (batch_size, 255, 26, 26)]  # <-- workaround
Originally posted by @SurionAndrew in https://github.com/zombie0117/yolov3-tiny-onnx-TensorRT/issues/2#issuecomment-540447480
But I don't know why.
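The hard-coded 255 only works for 80-class (COCO) models. A more general form of the workaround (a sketch; the helper name and the 13/26 grid sizes for a 416x416 tiny-YOLOv3 input are assumptions here) derives the channel count from the class count:

```python
def yolo_tiny_output_shapes(num_classes, batch_size=1):
    """Output shapes for tiny-YOLOv3 at 416x416 input.

    Each grid cell predicts 3 boxes, and each box carries
    (4 coords + 1 objectness + num_classes) values, so the
    channel dimension is (5 + num_classes) * 3.
    """
    filters = (5 + num_classes) * 3
    return [(batch_size, filters, 13, 13),
            (batch_size, filters, 26, 26)]

# For an 80-class model this reproduces the hard-coded workaround:
output_shapes = yolo_tiny_output_shapes(80)
```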
@zhzhang747 After trying this on my devices, I observed the pattern: output_shapes should be [(batch_size, filters, 13, 13), (batch_size, filters, 26, 26)].
For example, if the model has 80 classes, the size of trt_outputs[0] is 43095 = (5 + 80) x 3 x 13 x 13, and the size of trt_outputs[1] is 172380 = (5 + 80) x 3 x 26 x 26.
My own tiny model has 6 classes; after running the inference code, the size of trt_outputs[0] is 5577 = (5 + 6) x 3 x 13 x 13, matching the pattern above, and the size of trt_outputs[1] is 22308 = (5 + 6) x 3 x 26 x 26.
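The flat array sizes above can be checked directly (a sketch; the 13 and 26 grid sizes assume a 416x416 tiny-YOLOv3 input, and the helper name is hypothetical):

```python
def flat_output_size(num_classes, grid):
    # Each of the grid x grid cells predicts 3 boxes, and each box
    # carries (4 coords + 1 objectness + num_classes) values.
    return (5 + num_classes) * 3 * grid * grid

# 80 classes: 43095 and 172380 -- the first matches the size in the traceback
sizes_80 = [flat_output_size(80, g) for g in (13, 26)]
# 6 classes: 5577 and 22308
sizes_6 = [flat_output_size(6, g) for g in (13, 26)]
```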