@zydjohnHotmail the export.py argparser arguments explain this: https://github.com/ultralytics/yolov5/blob/621b6d5ba80707ca98242dd7c71d738e5594b41e/export.py#L307
python export.py --weights path/to/best.pt --include onnx --img 432 768
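The linked argparser line accepts either one value (a square size) or two values (height, then width) for --img. A minimal sketch of that pattern, assuming the repo's nargs='+' style rather than its exact code:

import argparse

parser = argparse.ArgumentParser()
# nargs='+' lets --img take one value (square) or two (height, width)
parser.add_argument('--img', '--img-size', nargs='+', type=int, default=[640],
                    help='image size h,w')
opt = parser.parse_args(['--img', '432', '768'])
opt.img *= 2 if len(opt.img) == 1 else 1  # expand a single value to [s, s]
print(opt.img)  # [432, 768]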
Hello: Thanks for your reply. But I want to know: when I train on my dataset, can I add parameters to set the image size? Or do I only need to do this when exporting to ONNX? Thanks,
@zydjohnHotmail solution is fully explained in https://github.com/ultralytics/yolov5/issues/4813#issuecomment-920316203
Hello:
I have tried your command and used WinML Dashboard v0.7.0 to look at best.onnx. I found the input size is 448 px by 768 px. I used the model in my C# program, but no objects were detected.
If I understand correctly, the image size is scaled up to a multiple of 32, so 432 is scaled to 448. But this way the trained model may not work for object detection.
Do I have to scale my images from 768×432 to 768×448 and then train the model again?
@zydjohnHotmail small adjustments in image size are not going to affect inference results much. If your PyTorch model works with detect.py then there's no reason to retrain anything unless you're trying to improve upon that result.
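For reference, YOLOv5 rounds any requested --img value up to the nearest multiple of the model stride (normally 32), which is why 432 becomes 448. A minimal sketch of that rounding, mirroring the repo's check_img_size/make_divisible helpers:

import math

def check_img_size(size, stride=32):
    # Round a requested size up to the nearest multiple of the stride.
    new_size = math.ceil(size / stride) * stride
    if new_size != size:
        print(f'WARNING: --img {size} is not a multiple of {stride}, using {new_size}')
    return new_size

print(check_img_size(432))  # 448
print(check_img_size(768))  # 768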
Hello: I have tried scaling all images to 768px×448px, trained the model, and exported the ONNX model. My C# object detection program still does NOT detect any object. My target is rather simple: I want to detect one logo with specific English words in it. Its size is almost constant, but its location inside an image can be in 5 or 6 different places. I trained on 20 images covering all the locations, so even if my dataset is small, I think the trained model should be good enough. You can see one picture (I want to detect the logo inside the red square). Testing in Python gives these results:
D:\yolov5>python detect.py --weights best.onnx Logo3.png
detect: weights=['best.onnx', 'Logo3.png'], source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 v5.0-430-gaa18599 torch 1.9.0+cpu CPU
image 1/2 D:\yolov5\data\images\bus.jpg: Traceback (most recent call last):
File "D:\yolov5\detect.py", line 293, in
@zydjohnHotmail if your question is about improving training results, see Tips for Best Training Results. If you want to run inference at 768, just pass --img 768 to detect.py.
Hello:
D:\yolov5>python detect.py --img 768 --weights best.onnx Logo3.png
detect: weights=['best.onnx', 'Logo3.png'], source=data/images, imgsz=[768, 768], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 v5.0-430-gaa18599 torch 1.9.0+cpu CPU
image 1/2 D:\yolov5\data\images\bus.jpg: Traceback (most recent call last):
File "D:\yolov5\detect.py", line 293, in <module>
D:\yolov5>
It seems detect.py only accepts images with the same width and height. What can I do?
@zydjohnHotmail detect.py simply shows one option for ONNX inference; it's assumed that for custom requirements you would write your own deployment solution.
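A minimal sketch of such a standalone deployment in Python with onnxruntime, assuming a model exported at 448×768, an input tensor named 'images' (the YOLOv5 export default), and a local Logo3.png; the letterbox and output handling are simplified and non-max suppression is omitted:

import numpy as np
import onnxruntime as ort
from PIL import Image

def letterbox(im, new_shape=(448, 768), color=114):
    # Resize keeping aspect ratio, then pad to new_shape (height, width).
    r = min(new_shape[0] / im.height, new_shape[1] / im.width)
    nw, nh = round(im.width * r), round(im.height * r)
    im = im.resize((nw, nh), Image.BILINEAR)
    canvas = Image.new('RGB', (new_shape[1], new_shape[0]), (color,) * 3)
    canvas.paste(im, ((new_shape[1] - nw) // 2, (new_shape[0] - nh) // 2))
    return canvas

img = letterbox(Image.open('Logo3.png').convert('RGB'))
x = np.asarray(img, dtype=np.float32) / 255.0  # HWC, 0-1
x = x.transpose(2, 0, 1)[None]                 # 1 x 3 x H x W

sess = ort.InferenceSession('best.onnx')
pred = sess.run(None, {'images': x})[0]        # 1 x N x (5 + num_classes)

# Keep raw candidates above the confidence threshold; a real deployment
# would also apply non-max suppression here.
boxes = pred[0][pred[0][:, 4] > 0.25]
print(boxes[:, :4])  # xywh in letterboxed-image pixels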
OK, thanks!
Hello: I have a small dataset with only 30 images; they all have a width of 768 pixels and a height of 432 pixels. I have labelled them (only one label, 'logo') and trained with the following command:
D:\yolov5>python train.py --img 768 --batch 16 --epochs 30 --data logo768.yaml --weights yolov5s.pt
Then I used the following to generate an ONNX model:
D:\yolov5>python export.py --weights best.pt --data logo768.yaml --img 768 --batch 1
export: data=logo768.yaml, weights=best.pt, imgsz=[768], batch_size=1, device=cpu, half=False, inplace=False, train=False, optimize=False, dynamic=False, simplify=False, opset=13, include=['torchscript', 'onnx']
YOLOv5 v5.0-430-gaa18599 torch 1.9.0+cpu CPU
ONNX: starting export with onnx 1.10.1...
D:\yolov5\models\yolo.py:58: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
ONNX: export success, saved as best.onnx (28.3 MB)
ONNX: run --dynamic ONNX model inference with: 'python detect.py --weights best.onnx'
So, I got the ONNX model with file name: best.onnx.
And I downloaded WinML Dashboard v0.7.0 from the Microsoft website. I opened best.onnx with WinML Dashboard and looked at its information. If I read it correctly, the best.onnx model assumes all images have a width and height of 768 px. But my training images are 768 px wide and 432 px high. When I tried to use best.onnx in my C# program to detect objects, I always got errors like:
[ErrorCode:InvalidArgument] Got invalid dimensions for input: images for the following indices
index: 2 Got: 432 Expected: 768
Please fix either the inputs or the model.
Please advise how I can fix this issue with the best.onnx model. (I am using Windows 10 and Python 3.9.7.) Thanks,
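Since only a single --img 768 was passed at export time, the saved model expects a square 768×768 input, which is why index 2 (the height dimension) of a 768×432 image is rejected. A quick way to confirm what the exported network expects, using the onnx package and assuming the default input name 'images':

import onnx

model = onnx.load('best.onnx')
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # e.g. images [1, 3, 768, 768]

Re-exporting with both dimensions, e.g. python export.py --weights best.pt --include onnx --img 432 768, should produce a model whose input is 1×3×448×768 (432 rounded up to the stride multiple), after which the C# side only needs to letterbox incoming frames to exactly that shape.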