Open bobbilichandu opened 3 years ago
Got it. Is there any way to use this TRT engine file in an API, e.g. by calling it from a Python function or something similar?
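One common pattern for calling a TensorRT engine from Python is to wrap it behind a plain class that any API framework (Flask, FastAPI, etc.) can call. The sketch below is a minimal illustration where the engine-specific part is injected as a callable; in a real setup the backend would be a TensorRT execution context (deserialized via the TensorRT Python bindings, which need a GPU and are therefore not shown here). All names below are hypothetical, not from this repo.

```python
import numpy as np

def preprocess(img):
    """Hypothetical preprocessing: HWC uint8 image -> NCHW float32 in [0, 1]."""
    x = img.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return np.expand_dims(x, 0)      # add batch dimension

class Detector:
    """Wraps any callable inference backend (e.g. a TensorRT execution
    context) so a web handler or a plain Python function can call detect()."""
    def __init__(self, backend):
        self.backend = backend

    def detect(self, img):
        return self.backend(preprocess(img))

# Usage with a dummy backend standing in for the real engine:
dummy = lambda batch: batch.shape
det = Detector(dummy)
print(det.detect(np.zeros((640, 640, 3), dtype=np.uint8)))  # (1, 3, 640, 640)
```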
I am going to use scaled-yolov4-p5; what number should I change it to? The input_shape is 896x896x3, thanks. My computer reboots every time I run the bin file. GPU: RTX 2080 Super 8GB, CUDA 10.2, TensorRT 7.1.3.4, torch 1.7. It is able to run detect.py.
try modify this line: https://github.com/linghu8812/tensorrt_inference/blob/887cca1487395cc46a23537213201d224600a976/includes/common/common.hpp#L132
I have set it to 1000_MiB. Case 1: scaled-yolov4-p5 + 640x640 is able to run inference. Case 2: scaled-yolov4-p5 + 896x896 still reboots.
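For what it's worth, per-layer activation memory scales roughly with the input area, which may be why 640x640 fits but 896x896 does not. A quick back-of-the-envelope check (pure arithmetic, not a measurement of this model):

```python
# Rough arithmetic only: the workspace setting and the relative cost
# of the two input resolutions discussed above.
MiB = 1 << 20
workspace = 1000 * MiB                 # the 1000_MiB setting from above
print(workspace)                       # 1048576000 bytes

area_640 = 640 * 640
area_896 = 896 * 896
print(round(area_896 / area_640, 2))   # 1.96 -> ~2x the activation memory
```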
When you export to ONNX, you need to specify the image size, so when using the TRT engine it expects the image size to be 640x640.
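Since the spatial size is baked into the engine at export time, a caller can reject mismatched inputs up front instead of failing at inference time. A small sketch (the 640x640 shape is just the example from this thread; the function name is hypothetical):

```python
import numpy as np

ENGINE_SHAPE = (1, 3, 640, 640)  # shape fixed when the ONNX model was exported

def matches_engine(batch, engine_shape=ENGINE_SHAPE):
    """Return True if the batch shape matches what the engine was built for."""
    return tuple(batch.shape) == engine_shape

print(matches_engine(np.zeros((1, 3, 640, 640), dtype=np.float32)))  # True
print(matches_engine(np.zeros((1, 3, 896, 896), dtype=np.float32)))  # False
```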
Yeah, I know. By "896x896" I mean `python export_onnx.py -img-size 896`. I am using the same image shape for both export and inference.
Did you edit the config file while exporting?
Facing this issue. Any idea how to solve this?