Closed sourabhyadav closed 3 years ago
@sourabhyadav we don't provide support for torchscript loading or inference, only export.
@glenn-jocher OK, I will raise it as a question to the community.
Hi @sourabhyadav I have a custom implementation of the loading and inference with torchscript, maybe you could check it in here.
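Since the linked implementation is referenced above, a minimal self-contained sketch of the TorchScript save/load/inference round trip may help. A tiny stand-in module is used here because the real YOLOv5 weights are not bundled; with a model produced by yolov5's `export.py`, the `torch.jit.load` and inference calls are the same (the file name is an assumption).

```python
import torch

class TinyDetector(torch.nn.Module):
    """Stand-in for a detection model; NOT the yolov5 architecture."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dummy head: one pooled score per image in the batch.
        return x.mean(dim=(1, 2, 3))

# Script and save, mimicking what export.py produces.
scripted = torch.jit.script(TinyDetector())
scripted.save("tiny.torchscript.pt")

# Loading and inference -- this part mirrors usage with a yolov5 export.
model = torch.jit.load("tiny.torchscript.pt", map_location="cpu")
model.eval()

# YOLOv5 expects a float tensor of shape (batch, 3, H, W) in [0, 1].
img = torch.zeros((1, 3, 640, 640), dtype=torch.float32)
with torch.no_grad():
    pred = model(img)
print(tuple(pred.shape))  # -> (1,)
```

Note that with a real YOLOv5 export, the raw output still needs non-maximum suppression applied separately, since export.py exports the model without the postprocessing step.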
Did you manage to fix it?
@zhiqwang If possible, could you please share a link to your implementation of loading and inference with TorchScript? The current link is no longer available. I need to speed up inference of my custom YOLOv5 model, but I'm new to CV and don't know how to implement it myself.
Hello @zhiqwang, this is a bit off topic, but I wanted to ask whether it's possible to run detection with the augment flag using a yolort model? Thanks a lot!
Hi @pugovka91,
I'm not sure I understand correctly. Do you mean Test-Time Augmentation (TTA)? If that's the feature you're asking about, we don't have it implemented in yolort yet.
@zhiqwang Yes, exactly! I'll be waiting for this feature to be implemented, thank you!
🐛 Bug
I am facing the following issue when I try to load a saved TorchScript model:
To Reproduce (REQUIRED)
The model was saved using the export.py file. Run:
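The export command here is a hedged sketch; the exact flags vary between yolov5 versions, and the weights file name is an assumption:

```shell
# Export yolov5 weights to TorchScript (flags vary by yolov5 version;
# "yolov5s.pt" is a placeholder for the actual weights file).
python export.py --weights yolov5s.pt --include torchscript --img 640
```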
Model loading is done like this:
Model loading seems fine, but the issue appears when we try to run inference with the model. Output:
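One common cause of inference-time failures with a traced TorchScript export is an input shape or dtype mismatch, since a traced model is locked to the tracing-time input. The sketch below shows typical preprocessing to match an assumed 640x640 float input; the frame size and layout are assumptions, not taken from the report above.

```python
import torch

# Hypothetical raw frame: HWC uint8, as OpenCV would deliver it.
raw = torch.randint(0, 256, (480, 640, 3), dtype=torch.uint8)

x = raw.permute(2, 0, 1).float() / 255.0  # HWC -> CHW, scale to [0, 1]
x = x.unsqueeze(0)                        # add batch dim -> (1, 3, 480, 640)
x = torch.nn.functional.interpolate(
    x, size=(640, 640), mode="bilinear", align_corners=False
)                                         # resize to the traced input size
print(tuple(x.shape), x.dtype)            # -> (1, 3, 640, 640) torch.float32
```

If the exported model was traced at a different resolution, the `size` argument has to match that resolution instead.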
Environment
Am I missing something here? Please guide me.