microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Can onnxruntime accept multiple input image sizes? #8145

Closed huyhoangle86 closed 3 years ago

huyhoangle86 commented 3 years ago

Hi, I'm using Scaled YOLOv4. I converted my model from PyTorch to ONNX and run inference with ONNX Runtime, but it seems ONNX Runtime can only accept a fixed input image size. I want session.run to accept multiple image sizes: 640x640, but sometimes 640x512 or 640x418. Accepting the smaller sizes would speed the model up, because I'm running inference on CPU and the fixed input size slows it down.

Thanks
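
For context, here is a minimal sketch of what I'd like to be able to do (the model path and layout are just placeholders, and this assumes the exported model allows dynamic height/width, which is exactly what I'm asking about):

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model path; assumes the ONNX graph was exported with
# dynamic height/width axes and an NCHW input layout.
session = ort.InferenceSession("scaled_yolov4.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

for height, width in [(640, 640), (640, 512), (640, 418)]:
    # Batch size stays 1, only the spatial size varies.
    image = np.random.rand(1, 3, height, width).astype(np.float32)
    outputs = session.run(None, {input_name: image})
    print(height, width, [o.shape for o in outputs])
```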

snnn commented 3 years ago

It depends on the model. In general, no, if the model supports mini-batch (batch size > 1).

snnn commented 3 years ago

When you put multiple images in a single tensor (i.e., a multi-dimensional array), all the images must have the same dimensions (i.e., the same tensor shape). This is a general design across all ML frameworks.
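
For example, a minimal numpy sketch of why batching requires equal shapes (the shapes here are made up):

```python
import numpy as np

a = np.zeros((3, 640, 640), dtype=np.float32)
b = np.zeros((3, 640, 512), dtype=np.float32)

# Stacking same-shape images into one batch tensor works:
batch = np.stack([a, a])  # shape (2, 3, 640, 640)

# Stacking images with different spatial sizes does not:
try:
    np.stack([a, b])
except ValueError as err:
    print("cannot batch mismatched shapes:", err)
```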

huyhoangle86 commented 3 years ago

@snnn Well, what I mean is: say I run inference over a list of images in a folder, passing one image at a time to the ONNX model. I want the model to accept different image dimensions on each call, for example a 640x512 image the first time and a 640x418 image the second time. So this is not going to happen, right? (I convert the model from PyTorch to Darknet and from Darknet to ONNX with batch size = 1.)

snnn commented 3 years ago

It can, because your batch size is always 1. The PyTorch-to-ONNX converter can put such steps in the model. Please consult the converter team for more details.
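
To illustrate the general mechanism (not the exact steps for your Darknet-based converter chain), this is roughly how a direct PyTorch export can mark the spatial dimensions as dynamic; the tiny network, names, and dummy shape are only placeholders:

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the real detector, just to make the
# export call runnable; a fully convolutional model tolerates varying H/W.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
dummy = torch.randn(1, 3, 640, 640)  # batch fixed at 1, NCHW layout

torch.onnx.export(
    model,
    dummy,
    "model_dynamic_hw.onnx",
    input_names=["images"],
    output_names=["output"],
    # Leave height and width symbolic so the exported graph accepts
    # different spatial sizes at inference time.
    dynamic_axes={"images": {2: "height", 3: "width"}},
    opset_version=11,
)
```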

huyhoangle86 commented 3 years ago

@snnn Thanks for the response. Where can I find the converter team?

snnn commented 3 years ago

Sorry, I don't know. Where did you get the model? Which tool did you use?

huyhoangle86 commented 3 years ago

@snnn The model is from here: https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-csp/models (Scaled YOLOv4 PyTorch CSP model). Because I cannot convert this model directly to ONNX, I first use this function to convert the PyTorch .pt to Darknet .weights: https://github.com/WongKinYiu/ScaledYOLOv4/blob/yolov4-csp/models/models.py#L647, and then convert the Darknet weights to ONNX with this repo: https://github.com/linghu8812/tensorrt_inference/blob/master/Yolov4/export_onnx.py. So the conversion steps are: PyTorch -> Darknet -> ONNX.

huyhoangle86 commented 3 years ago

The above steps produce a model with the input size fixed at 640x640 and batch size = 1. If I pass a different input image size I get the error below, which is why I asked whether I can use multiple input image sizes; with the smaller sizes I could roughly halve the inference time.

(error screenshot attached)
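
For what it's worth, this is how I check whether the exported graph has fixed or symbolic input dimensions (the model path is a placeholder):

```python
import onnxruntime as ort

# Placeholder path to the exported model.
session = ort.InferenceSession("scaled_yolov4.onnx", providers=["CPUExecutionProvider"])

for inp in session.get_inputs():
    # Fixed dimensions show up as integers (e.g. [1, 3, 640, 640]);
    # dynamic ones appear as symbolic names or None.
    print(inp.name, inp.shape, inp.type)
```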

snnn commented 3 years ago

You can use this model: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov4

huyhoangle86 commented 3 years ago

@snnn I had a YOLOv4 version, but the team wants to move to PyTorch for further support and convenience.

snnn commented 3 years ago

I'm sorry, I can't help with that. I only work on ONNX Runtime.