Ah-ha, @AlaylmYC, I don't think it is necessary to export an ONNX model in this repo.
If you export a TorchScript model on CPU, handling it like the following solves the device-inconsistency problem; in other words, libtorch can take care of the device switching itself.
// Load the CPU-exported TorchScript module, then move it to the GPU.
torch::DeviceType device_type = torch::kCUDA;
torch::jit::script::Module module = torch::jit::load(weights);
module.to(device_type);
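If it helps, here is a slightly fuller sketch of the same idea: load a TorchScript model that was exported on CPU, move it to CUDA (falling back to CPU if no GPU is visible), and run a dummy forward pass. The file name "best.torchscript.pt" and the 1x3x640x640 input shape are only placeholders for illustration, not something this repo prescribes.

#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
#include <vector>

int main() {
    // Pick CUDA when available, otherwise stay on CPU.
    torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);

    // Load the TorchScript module (exported on CPU) and move it to the target device.
    torch::jit::script::Module module = torch::jit::load("best.torchscript.pt");
    module.to(device);
    module.eval();

    // Build a dummy input on the same device so the input and the weights match.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::zeros({1, 3, 640, 640},
                                  torch::TensorOptions().device(device)));

    // Forward pass; the output stays on the chosen device.
    torch::NoGradGuard no_grad;
    auto output = module.forward(inputs);
    std::cout << "forward pass finished on " << device << std::endl;
    return 0;
}

The key point is that the exported graph itself is device-agnostic; moving both the module and the input tensors to the same device before forward() avoids the mismatch.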
@AlaylmYC did you resolve the issue?
Hi! I have modified export.py to support GPU, but I still receive the following error:
Do you have any suggestions on how this issue can be resolved? Thanks!