NVIDIA-AI-IOT / cuDLA-samples

YOLOv5 on Orin DLA

Implementation of batch size inference #25

Closed ou525 closed 6 months ago

ou525 commented 8 months ago

Thanks for sharing this open-source project. The readme mentions performance differences across batch sizes, but in the inference code I only see multiple batch variables provided for the input images; I don't see a multi-batch implementation for the subsequent inference and result decoding. Is there any plan to support this in the future?

lynettez commented 6 months ago

Sorry, there is no plan for that, since this sample aims to show how to use the cuDLA API and the DLA QAT workflow. Thanks for the feedback anyway~