dnth / yolov5-deepsparse-blogpost

By the end of this post, you will learn how to: train a SOTA YOLOv5 model on your own data; sparsify the model using SparseML quantization-aware training, sparse transfer learning, and one-shot quantization; and export the sparsified model and run it with the DeepSparse engine at insane speeds. P/S: The end result - YOLOv5 on CPU at 180+ FPS.
https://dicksonneoh.com/portfolio/supercharging_yolov5_180_fps_cpu/

Cannot use detect.py after using yolov5.transfer_learn_pruned_quantized.md #10

Open srn-source opened 2 years ago

srn-source commented 2 years ago

I used the yolov5m6 model from YOLOv5. After training it with yolov5.transfer_learn_pruned_quantized.md, I got this error:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(uint8))

What should I do?

I think the problem is that detect.py has no --quantized-inputs flag (it only exists in annotate.py), which is why the input is not uint8.
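For anyone hitting the same error: the exported ONNX graph evidently expects a uint8 image tensor (that is what the error message says), while detect.py feeds it a normalised float tensor. Below is a minimal sketch of feeding a quantized-input model directly through ONNX Runtime; the model path and preprocessing are assumptions for illustration, not the repo's actual detect.py code.

    import numpy as np
    import onnxruntime

    # Hypothetical path; point this at your own exported, quantized ONNX file.
    session = onnxruntime.InferenceSession("yolov5m6_pruned_quantized.onnx")
    input_name = session.get_inputs()[0].name

    def preprocess_uint8(img_bgr):
        # img_bgr: HxWx3 uint8 frame already resized/letterboxed to the model input size.
        img = img_bgr[:, :, ::-1]        # BGR -> RGB
        img = img.transpose(2, 0, 1)     # HWC -> CHW
        img = np.ascontiguousarray(img)
        # Keep raw 0-255 values and cast to uint8; do NOT divide by 255 here,
        # since the quantized-input graph expects tensor(uint8), not tensor(float).
        return img[None].astype(np.uint8)

    # outputs = session.run(None, {input_name: preprocess_uint8(frame)})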

TDBTECHNO commented 1 year ago
    batch = torch.from_numpy(batch.copy())
    batch = batch.to(args.device)
    batch = batch.half() if args.fp16 else batch.float()

Add these lines in the run_model function, before the batch is passed to the model (around lines 363-365).
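For context, a rough sketch of where those lines would sit; run_model, args, and model here are placeholders following the snippet above, not the script's exact signature.

    import torch

    def run_model(args, model, batch):
        # batch arrives as a preprocessed numpy array (NCHW); convert it to a
        # torch tensor, move it to the chosen device, and match the model's
        # dtype before the model is called (the three lines suggested above).
        batch = torch.from_numpy(batch.copy())
        batch = batch.to(args.device)
        batch = batch.half() if args.fp16 else batch.float()
        with torch.no_grad():
            return model(batch)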