facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

Model conversion Pytorch to ONNX #2087

Closed aquib23 closed 4 years ago

aquib23 commented 4 years ago

I am trying to convert the PyTorch model to ONNX. I passed two different sets of parameters (the 1st and 2nd attempts below) and get an AssertionError in both cases. Can you tell me where I am making a mistake? I could hardly find any implementation of the conversion.

```python
import torch
import detectron2
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import export_onnx_model
from detectron2.modeling import build_model

## Config
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
model = build_model(cfg)

## Loading model
DetectionCheckpointer(model).load("/content/drive/My Drive/Weights/model_final.pth")

## Conversion to ONNX
inputs = torch.randn(1, 1, 1000, 1000)

## Trying 1st way of converting
export_onnx_model(cfg, model, inputs)

## Trying 2nd way of converting
export_onnx_model(detectron2.export.add_export_config(cfg), detectron2.modeling.build_model(cfg), inputs)
```
ppwwyyxx commented 4 years ago

We have a tutorial, https://detectron2.readthedocs.io/tutorials/deployment.html, that shows how to do the conversion.
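The tutorial's flow can be sketched roughly as follows. This is a sketch, not a tested recipe: it assumes the `Caffe2Tracer` API documented at the time, the weights path is a placeholder, and it needs a working detectron2 install to run.

```python
# Sketch of the deployment-tutorial flow (assumes the Caffe2Tracer API
# of that detectron2 era; "model_final.pth" is a placeholder path).
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import Caffe2Tracer, add_export_config
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg = add_export_config(cfg)

model = build_model(cfg)
DetectionCheckpointer(model).load("model_final.pth")  # placeholder
model.eval()

# Inputs follow the model's standard input format: a list of dicts with an
# "image" tensor in (C, H, W) layout -- not a raw NCHW batch tensor, which
# is a common source of assertion errors during tracing.
inputs = [{"image": torch.randn(3, 800, 800)}]

tracer = Caffe2Tracer(cfg, model, inputs)
onnx_model = tracer.export_onnx()  # an onnx.ModelProto with caffe2 custom ops
```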

Nagamani732 commented 3 years ago

Hi,

This may be too late, but it might be useful for someone. I have created a Colab notebook to convert a detectron2 PyTorch model to an ONNX model. Please comment if there are any queries, thanks.

Please find it here: https://github.com/Nagamani732/colab_files/blob/main/detectron2_pytorch_to_onnx.ipynb

NguyenThanhAI commented 3 years ago

@Nagamani732 Can you run inference with an onnxruntime InferenceSession? I get "AliasWithName is not a registered function/op" when running the converted ONNX model. Were you able to solve this problem?

Nagamani732 commented 3 years ago

@NguyenThanhAI As of now, you will not be able to run it with onnxruntime, since that is not supported: "The exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by other runtime (such as onnxruntime or TensorRT)." Please have a look: https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer.export_onnx

Thank you.
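To make the failure mode concrete: a runtime can only execute a graph whose node op types it has kernels registered for, and caffe2-only ops such as AliasWithName fail that lookup in onnxruntime. A toy, purely illustrative sketch (the registry below is made up, not onnxruntime internals):

```python
# Toy illustration of why onnxruntime rejects a caffe2-exported graph:
# every node's op type must be found in the runtime's kernel registry.
# This registry is a made-up subset for illustration only.
REGISTERED_OPS = {"Conv", "Relu", "MaxPool", "Reshape", "Gemm", "Sigmoid"}

def first_unregistered_op(graph_node_ops):
    """Return the first op the runtime cannot execute, or None."""
    for op in graph_node_ops:
        if op not in REGISTERED_OPS:
            return op
    return None

# A caffe2-exported Mask R-CNN graph mixes standard ONNX ops with
# caffe2-only ones such as AliasWithName:
graph = ["Conv", "Relu", "AliasWithName", "Sigmoid"]
print(f"{first_unregistered_op(graph)} is not a registered function/op")
# prints "AliasWithName is not a registered function/op"
```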

augustoolucas commented 3 years ago

@Nagamani732 can we use the torch.onnx.export method to export the model and then be able to use it with onnxruntime?

Nagamani732 commented 3 years ago

@augustoolucas Please refer to my previous comment. Thanks

augustoolucas commented 3 years ago

@Nagamani732 yeah, I understand that with the export_onnx method we cannot use onnxruntime. What I would like to know is whether we could use the torch.onnx.export method from PyTorch directly and, by doing so, be able to use onnxruntime.

Edit: ok, I see now that it's already using torch.onnx.export under the hood.