dnth / yolov5-deepsparse-blogpost

By the end of this post, you will learn how to: train a SOTA YOLOv5 model on your own data; sparsify the model using SparseML quantization-aware training, sparse transfer learning, and one-shot quantization; and export the sparsified model and run it with the DeepSparse engine at insane speeds. P/S: The end result - YOLOv5 on CPU at 180+ FPS.
https://dicksonneoh.com/portfolio/supercharging_yolov5_180_fps_cpu/

Export pytorch lite model #7

Closed abdelaziz-mahdy closed 2 years ago

abdelaziz-mahdy commented 2 years ago

I saw that in the Colab code the export is to an ONNX model.

Can we export to a PyTorch Lite model?

dnth commented 2 years ago

I saw that it's possible to export to TorchScript, which can also be used in pytorch_lite, but I have not tried it myself.

abdelaziz-mahdy commented 2 years ago

May I ask what the command should look like, as an example?

I'd like to test it and add it to the package in case someone needs it.

dnth commented 2 years ago

The command is the same as for exporting a stock YOLOv5 model.

We can use export.py: https://github.com/dnth/yolov5-deepsparse-blogpost/blob/main/yolov5-train/export.py

and run:

python export.py --weights yolov5s.pt --include torchscript
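For what the `--include torchscript` target produces: YOLOv5's export.py traces the model with `torch.jit.trace` and saves a standalone `.torchscript` file, which is the format TorchScript-based mobile runtimes such as pytorch_lite consume. Below is a minimal, self-contained sketch of that same save/load round trip using a toy model (the `TinyNet` module and filenames are placeholders, not part of the repo):

```python
import torch
import torch.nn as nn

# Toy stand-in for YOLOv5, only to illustrate the TorchScript
# round trip that export.py performs on the real model.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()
example = torch.zeros(1, 3, 64, 64)

# Trace and save, analogous to `--include torchscript`
ts = torch.jit.trace(model, example)
ts.save("tiny.torchscript")

# Reload as a deployable module with no Python dependency
loaded = torch.jit.load("tiny.torchscript")
with torch.no_grad():
    out = loaded(example)
print(tuple(out.shape))  # (1, 8, 64, 64)
```

After exporting the real model, the resulting `yolov5s.torchscript` file should load the same way via `torch.jit.load` (or the equivalent loader in pytorch_lite).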

abdelaziz-mahdy commented 2 years ago

Thank you ❤️