dnth / yolov5-deepsparse-blogpost

By the end of this post, you will learn how to: train a SOTA YOLOv5 model on your own data; sparsify the model using SparseML quantization-aware training, sparse transfer learning, and one-shot quantization; and export the sparsified model and run it using the DeepSparse engine at insane speeds. P/S: The end result is YOLOv5 on CPU at 180+ FPS.
https://dicksonneoh.com/portfolio/supercharging_yolov5_180_fps_cpu/

How do I resume training? Do I need to train for 300 epochs to get a quantized model? #3

Open akashAD98 opened 2 years ago

akashAD98 commented 2 years ago

Setting resume=True in train.py solved the problem.
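For anyone else hitting this, here is a minimal sketch of the resume invocation, assuming this fork keeps upstream YOLOv5's `--resume` behavior (the checkpoint path is illustrative and depends on your run directory):

```bash
# Resume the most recently interrupted run
python train.py --resume

# Or resume from a specific checkpoint
python train.py --resume runs/train/exp/weights/last.pt
```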

akashAD98 commented 2 years ago

Do I need to complete all 300 training epochs to get a quantized model? I ask because I get an error when I try to export the 150-epoch model, and when I resume training it adds extra epochs. As you can see here, I trained for 300 epochs, but after I stopped and resumed training it showed 389 epochs.

[screenshot: resumed training run showing 389 epochs]
akashAD98 commented 2 years ago

I'm getting this issue while converting to ONNX. What's wrong here? Do I need to train continuously, without stopping?

[screenshot: ONNX export error]
akashAD98 commented 2 years ago

Training didn't even complete. With only 2 epochs remaining, it threw a CUDA out-of-memory error:

[screenshots: CUDA out-of-memory traceback]
dnth commented 2 years ago

Hi @akashAD98

Quantization only happens during the last 2 epochs of training. This is specified in the recipe file, so if you halt training before the quantization epoch, you will not get a quantized model.

The screenshot below shows where you can change the quantization epoch.

[screenshot: recipe file highlighting the quantization epoch settings]

num_epochs is the total number of training epochs.

quantization_start_epoch is the exact epoch where quantization begins.
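To make this concrete, here is a minimal sketch of how these two variables typically appear in a SparseML recipe. The values and modifier list are illustrative, not the repo's exact file:

```yaml
# Illustrative excerpt from a SparseML quantization recipe
num_epochs: 300                 # total number of training epochs
quantization_start_epoch: 298   # quantization-aware training begins here

training_modifiers:
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: eval(num_epochs)

quantization_modifiers:
  - !QuantizationModifier
    start_epoch: eval(quantization_start_epoch)
```

If you want a quantized checkpoint sooner, lower both values together so the quantization epochs still run before you stop training (quantization_start_epoch must stay below num_epochs).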