linghu8812 / tensorrt_inference


Is yolov4-csp batched inference supported? #32

Closed · LukeAI closed 3 years ago

LukeAI commented 3 years ago

Thanks for this great project! I wanted to ask if it's possible to get batch size 4 inference working with yolov4-csp?

I got yolov4-csp TensorRT inference working fine with the default batch_size=1, but if I set batch_size=3 or 4 inside export_onnx.py, then on export I get:

```
python3 export_onnx.py --cfg_file cfg/yolov4-csp.cfg --weights_file yolov4-csp.weights --output_file yolov4-csp.onnx
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
```
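As far as I can tell, the "Layer of type yolo not supported" warnings appear even with batch_size=1: the converter deliberately skips the yolo layers and the decode happens in post-processing, so the warnings themselves are probably not the batch problem. One way to check whether the batch size actually made it into the exported graph is to dump the ONNX input shape. A minimal sketch, assuming the yolov4-csp.onnx produced by the command above:

```python
import onnx

# Hypothetical filename from the export command above: print every graph
# input and its shape. If the first dimension is still 1, the batch_size
# edit in export_onnx.py did not reach the exported model.
model = onnx.load("yolov4-csp.onnx")
for inp in model.graph.input:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # e.g. input [1, 3, 512, 512]
```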

When running inference (with batch_size=3 set in config-yolov4-csp-3.yaml), I then get:

```
[01/15/2021-09:35:02] [E] [TRT] Parameter check failed at: engine.cpp::execute::806, condition: batchSize > 0 && batchSize <= mEngine.getMaxBatchSize()
Inference take: 0.05127 ms.
execute success
device2host
post process
Segmentation fault (core dumped)
```

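The TRT error line is the real clue: the deserialized engine was built (in implicit-batch mode) with a maxBatchSize of 1, so execute() rejects batchSize=3; the code then goes on to the device2host copy and post-processing over output buffers that were never filled, which would explain the segfault. A minimal sketch of a fail-fast guard, using the TensorRT Python API for illustration (this repo's runtime is C++, the yolov4-csp.engine path here is hypothetical, and max_batch_size is the TensorRT 7.x implicit-batch attribute):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Hypothetical engine path; the point is to compare the requested batch
# size against what the engine was actually built with *before* execute(),
# which is exactly the check that fails in the log above.
with open("yolov4-csp.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    batch_size = 3
    if batch_size > engine.max_batch_size:
        raise ValueError(
            f"engine was built with max_batch_size={engine.max_batch_size}; "
            f"rebuild it with builder.max_batch_size = {batch_size} and a "
            f"matching ONNX batch dimension before batched inference"
        )
```

On the build side, this limit is what builder.max_batch_size (setMaxBatchSize() in the C++ API) controls, so the ONNX batch dimension, the engine build setting, and the batch size in the yaml all have to agree.
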
justinkay commented 2 years ago

Hi @LukeAI , how did you resolve this issue?

LukeAI commented 2 years ago

I didn't; I just use yolov4-large-p5 at a slightly lower resolution instead.