dnth / yolov5-deepsparse-blogpost

By the end of this post, you will learn how to: train a SOTA YOLOv5 model on your own data; sparsify the model using SparseML quantization-aware training, sparse transfer learning, and one-shot quantization; export the sparsified model and run it with the DeepSparse engine at insane speeds. P/S: The end result - YOLOv5 on CPU at 180+ FPS.
https://dicksonneoh.com/portfolio/supercharging_yolov5_180_fps_cpu/
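For context, the final step the blog describes (running the exported, sparsified ONNX model on CPU with DeepSparse) can be sketched roughly as below. This is a minimal sketch, not code from the repo: the model path is a placeholder, and the `yolo` task assumes the `deepsparse[yolo]` extra is installed.

```python
# Minimal sketch of DeepSparse YOLO inference; the ONNX path is a placeholder.
from deepsparse import Pipeline

# task="yolo" wraps YOLOv5 pre/post-processing around the DeepSparse engine
yolo_pipeline = Pipeline.create(
    task="yolo",
    model_path="path/to/sparsified/best.onnx",
)

# Accepts image file paths or numpy arrays; the output typically exposes
# boxes, scores, and labels for each image
predictions = yolo_pipeline(images=["sample.jpg"])
print(predictions.boxes, predictions.scores, predictions.labels)
```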

No inference #21

Open TDBTECHNO opened 1 year ago

TDBTECHNO commented 1 year ago

Hi, I am training a YOLOv5n model and running inference with annotate.py. When I use the PyTorch model, it detects successfully, but when I export it to ONNX, there is no inference at all.

Train cmd: python train.py --cfg ./models_v5.0/yolov5n.yaml --recipe ../recipes/yolov5.transfer_learn_pruned_quantized.md --data data2/data.yaml --hyp data/hyps/hyp.scratch.yaml --weights yolov5n.pt --img 416 --batch-size 64 --optimizer SGD

Export: python export.py --weights runs/train/exp21/weights/best.pt --include onnx --imgsz 416 --dynamic --simplify
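One generic way to narrow down where the ONNX path breaks (not part of this repo, just a plain onnxruntime check, assuming export.py wrote best.onnx next to best.pt) is to run the raw graph on a dummy 416x416 input and inspect the output shapes. If the graph returns sensible prediction tensors, the problem is more likely in annotate.py's pre/post-processing than in the export itself.

```python
# Sanity check of the exported ONNX graph with onnxruntime (hypothetical path,
# mirroring the export command above).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("runs/train/exp21/weights/best.onnx")
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)  # with --dynamic, axes show up as strings/None

# Dummy image: batch of 1, 3 channels, 416x416, scaled to [0, 1] as YOLOv5 expects
x = np.random.rand(1, 3, 416, 416).astype(np.float32)
outputs = sess.run(None, {inp.name: x})
for o in outputs:
    print(o.shape)  # non-empty prediction tensors mean the graph itself runs
```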