We have a tutorial https://detectron2.readthedocs.io/tutorials/deployment.html that shows how to do the conversion.
Hi,
I think this is too late, but it might be useful for someone. I have created a notebook on Colab to convert a detectron2 PyTorch model to an ONNX model. Please comment if there are any queries, thanks.
Please find it here: https://github.com/Nagamani732/colab_files/blob/main/detectron2_pytorch_to_onnx.ipynb
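For anyone who just wants the gist without opening the notebook, the core of the conversion follows the detectron2 deployment docs and looks roughly like this (a minimal sketch; the config choice and file paths are placeholders, not necessarily what the notebook uses):

```python
import cv2
import onnx
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import Caffe2Tracer
from detectron2.modeling import build_model

# Build a config and model (Mask R-CNN used here as an example).
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"  # trace on CPU for portability
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# One sample input in detectron2's standard format: a list of dicts,
# each with a (C, H, W) "image" tensor.
img = cv2.imread("input.jpg")
inputs = [{"image": torch.as_tensor(
    img.astype("float32").transpose(2, 0, 1))}]

# Trace and export; note the result contains caffe2 custom ops.
tracer = Caffe2Tracer(cfg, model, inputs)
onnx_model = tracer.export_onnx()
onnx.save(onnx_model, "model.onnx")
```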
@Nagamani732 Can you run inference with onnxruntime's InferenceSession? I get "AliasWithName is not a registered function/op" when running the converted ONNX model. Can you solve this problem?
@NguyenThanhAI As of now, you will not be able to run it with onnxruntime, since that is not supported: "The exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by other runtime (such as onnxruntime or TensorRT)." Please have a look: https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer.export_onnx
Thank you.
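To see those caffe2-only ops for yourself, you can list the operator types in the exported graph with the onnx package (a small sketch; "model.onnx" is whatever path you saved the export to):

```python
import onnx

# Load the exported model and collect the operator types in its graph.
# Caffe2-only ops such as AliasWithName show up here, which is why
# onnxruntime refuses to load the file.
model = onnx.load("model.onnx")
print(sorted({node.op_type for node in model.graph.node}))
```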
@Nagamani732 Can we use the torch.onnx.export method to export the model and then be able to use it with onnxruntime?
@augustoolucas Please refer to my previous comment. Thanks.
@Nagamani732 Yeah, I understand that by using the export_onnx method we cannot use onnxruntime. What I would like to know is whether we could use the torch.onnx.export method from PyTorch and, by doing so, be able to use onnxruntime.
Edit: ok, I see now that it already uses torch.onnx.export under the hood.
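For completeness: the torch.onnx.export → onnxruntime round trip itself works fine for plain PyTorch modules; it is specifically the caffe2 custom ops that Caffe2Tracer emits which onnxruntime cannot load. A toy sketch of the working path (names are illustrative):

```python
import torch
import torch.nn as nn
import onnxruntime as ort

# A plain module exports and runs in onnxruntime without issues.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
dummy = torch.randn(1, 3, 64, 64)
torch.onnx.export(net, dummy, "toy.onnx", opset_version=11)

sess = ort.InferenceSession("toy.onnx", providers=["CPUExecutionProvider"])
(out,) = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})
print(out.shape)  # (1, 8, 62, 62)
```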
I am trying to convert the PyTorch model to ONNX. I passed two different parameters as the 1st and 2nd arguments and I am getting an AssertionError. Can you tell me where I am making a mistake? I can hardly find any implementation of the conversion.
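Hard to say without the full traceback, but one common cause of an AssertionError here is getting the argument order or the input format wrong: Caffe2Tracer asserts on the types of its first two arguments. A sketch of the expected call (assuming cfg, model, and an image tensor are already set up as in the example above):

```python
from detectron2.export import Caffe2Tracer

# Argument order matters: cfg first, then the model, then the inputs.
# inputs must be a list of dicts, each holding a (C, H, W) "image" tensor;
# swapping cfg and model trips Caffe2Tracer's isinstance assertions.
inputs = [{"image": image_tensor}]
tracer = Caffe2Tracer(cfg, model, inputs)
onnx_model = tracer.export_onnx()
```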