Nuno-Mota opened this issue 2 years ago
@Nuno-Mota I'm not too familiar with ONNX, but is there a reason you are JIT-scripting the model prior to exporting it?
The intended way is to do something like:
import torch
from torchvision.models.detection import *
model = maskrcnn_resnet50_fpn(weights_backbone=None)
model.eval()
example_image = torch.rand((3, 800, 1000))
torch.onnx.export(
    model,
    [example_image],
    "test.onnx",
    opset_version=11,
)
This works fine in the latest version.
@datumbox, the idea is to try to preserve dynamic control flow, as mentioned in the docs.
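For illustration, a toy example (not torchvision code) of the difference: tracing records only the operations executed for the example input, so data-dependent branches and loops are lost, whereas scripting keeps them in the graph.

import torch

def keep_at_most_100(boxes: torch.Tensor) -> torch.Tensor:
    # Data-dependent branch: under tracing this `if` runs once in Python
    # for the example input and the chosen path is baked into the graph;
    # under scripting it stays in the graph as real control flow.
    if boxes.shape[0] > 100:
        return boxes[:100]
    return boxes

traced = torch.jit.trace(keep_at_most_100, torch.rand(10, 4))  # branch frozen
scripted = torch.jit.script(keep_at_most_100)                  # branch preserved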
@Nuno-Mota thanks for clarifying. As I said, I'm not too familiar with ONNX and I'm trying to understand the status of the support from the existing tests. Upon investigating, I saw that we don't test against the JIT-scripted versions, which means, according to the quoted doc, that we actually trace the model.
@fmassa Do you have any context concerning this choice? Is it deliberate? As far as I understand, the detection models are not traceable due to their loops.
@Nuno-Mota I have the same issue, but this time with FasterRCNN. Have you found a solution?
Same issue with FasterRCNN conversion, any update?
@medric49 @RunnerZhong I found a solution if you are using the pretrained FasterRCNN available from PyTorch. It involves loading the scripted model, extracting the weights, applying them to the pretrained model, and then converting to ONNX.
Assuming your model is made from a template similar to this:
from torchvision.models.detection import fasterrcnn_resnet50_fpn_v2
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
def get_model(num_classes):
    frcnn_model = fasterrcnn_resnet50_fpn_v2(weights='COCO_V1')
    in_features = frcnn_model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    frcnn_model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return frcnn_model
Then you can create an onnx exportable model by doing the following:
import torch

# Use torch.jit.load (not torch.load) to open a TorchScript archive, then extract its weights.
state_dict = torch.jit.load("jit_model.pt").state_dict()
model = get_model(n)  # n being the number of output classes
model.load_state_dict(state_dict)
model.eval()
torch.onnx.export(model, [torch.rand((3, 800, 1000))], "model.onnx", opset_version=11)  # plus whichever other params you want here
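As a quick sanity check of the exported file, a sketch assuming onnxruntime is installed and the "model.onnx" name from above:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
# The exported detection model takes a single CHW float image.
dummy = np.random.rand(3, 800, 1000).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])  # boxes, labels, scores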
Just came across this; I can repro the same issue (same error message @Nuno-Mota reported) using torchvision 0.18.1. Any updates or further attempts at getting torch.jit.script to work on these models?
🐛 Describe the bug
While attempting to create an ONNX version of Mask R-CNN, starting from a ScriptModule, an error occurs indicating that __torch__.torchvision.models.detection._utils.BoxCoder is an unknown type.
MWE:
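A sketch reconstructed from the discussion above (the image size follows the example earlier in the thread):

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights_backbone=None)
model.eval()
scripted = torch.jit.script(model)  # ScriptModule, to preserve dynamic control flow

example_image = torch.rand((3, 800, 1000))
# Fails with: __torch__.torchvision.models.detection._utils.BoxCoder is an unknown type
torch.onnx.export(scripted, [example_image], "test_scripted.onnx", opset_version=11)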
Error traceback:
Unfortunately, I cannot test with a more recent version. Is this something that has been fixed recently?
Versions
cc @neginraoof