yasenh / libtorch-yolov5

A LibTorch inference implementation of the yolov5
MIT License

Export ONNX with CUDA #36

Closed: AlaylmYC closed this issue 1 month ago

AlaylmYC commented 3 years ago

Hi! I have modified `export.py` to support GPU, but I still receive the following error:

RuntimeError: Input, output and indices must be on the current device

Do you have any suggestions on how this issue can be resolved? Thanks!

zhiqwang commented 3 years ago

Ah-ha, @AlaylmYC, I think it is not necessary to export an ONNX model for this repo.

If you export a TorchScript model on the CPU, code like the following resolves the device inconsistency. In other words, LibTorch can handle the device switch itself:

// Target the GPU for inference
torch::DeviceType device_type = torch::kCUDA;
// Load the TorchScript module (exported on CPU)
torch::jit::script::Module module = torch::jit::load(weights);
// Move all parameters and buffers to the GPU
module.to(device_type);
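For context, a minimal end-to-end sketch of this pattern, with a CPU fallback when no GPU is present. The weights path and the 640x640 input shape are illustrative assumptions, not taken from this repo; it assumes LibTorch is installed and a TorchScript model file exists:

```cpp
#include <torch/script.h>
#include <iostream>

int main() {
    // Fall back to CPU when CUDA is not available
    torch::DeviceType device_type =
        torch::cuda::is_available() ? torch::kCUDA : torch::kCPU;

    // Load the TorchScript module (exported on CPU) and move it to the device
    torch::jit::script::Module module = torch::jit::load("yolov5s.torchscript.pt");
    module.to(device_type);
    module.eval();

    // Dummy 1x3x640x640 input created on the same device as the model,
    // which avoids the "Input, output and indices must be on the current
    // device" mismatch
    torch::Tensor input = torch::rand({1, 3, 640, 640}).to(device_type);

    torch::NoGradGuard no_grad;
    torch::jit::IValue output = module.forward({input});
    std::cout << "forward pass OK" << std::endl;
    return 0;
}
```

The key point is that the model and every input tensor must end up on the same device before `forward` is called; LibTorch does not move inputs for you.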
yasenh commented 3 years ago

@AlaylmYC did you resolve the issue?