facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

[Feature Request] Exporting to Onnx #872

Closed ebeyabraham closed 4 years ago

ebeyabraham commented 4 years ago

🚀 Feature

Detectron2 currently supports export to Caffe2 for deployment, going through ONNX as an intermediate step. Would it be possible to add an API, similar to export_caffe2_model, that returns the ONNX model?

Motivation

Even though Detectron2 provides conversion to Caffe2 models, there is no way to save the intermediate ONNX model.

cbasavaraj commented 4 years ago

Hey, I've been working on the same thing. When you run python tools/caffe2_converter.py, you'll see that the conversion is done via ONNX, so it's fairly trivial to save the ONNX model to disk. Specifically, in detectron2/export/caffe2_export.py, after line 61, I add onnx.save(onnx_model, 'model.onnx').
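For reference, a minimal sketch of that modification (the exact line number and surrounding code in caffe2_export.py may differ between detectron2 versions; onnx_model is assumed to be the in-memory ONNX proto produced during conversion):

```python
# Inside detectron2/export/caffe2_export.py, right after the traced model has
# been converted to an ONNX ModelProto (assumed to be named `onnx_model`):
import onnx

onnx.save(onnx_model, "model.onnx")  # persist the intermediate ONNX graph to disk
```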

So far so good, and I have a bona fide ONNX model. My real goal, however, is to run it in NVIDIA's DeepStream framework for fast GPU deployment (the detectron2 team has stated that they are focused only on CPU deployment). Here I run into a problem: DeepStream takes ONNX as input and converts it to TensorRT, but the parser fails for this ONNX model:

Trying to create engine from model files
----------------------------------------------------------------
Input filename:   /home/chandrachud_basavaraj/onnx-ckpts/d2_r101_fpn.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.3
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [AliasWithName]:
ERROR: ModelImporter.cpp:147 In function importNode:
[8] No importer registered for op: AliasWithName

I know that the AliasWithName operator is used to read in the inputs and, later, to output a dict. I'm wondering whether removing this operator, at the input at least, can solve the problem. I'd appreciate any help from the community!
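One way to experiment with that idea is a small graph-surgery pass over the exported file. The sketch below is not a detectron2 API; it assumes every AliasWithName node is a pure pass-through with one input and one output, and removing it does not by itself guarantee the rest of the graph will parse in TensorRT:

```python
# Hedged sketch: strip AliasWithName pass-through nodes from an exported ONNX
# graph by rewiring their consumers to the aliased tensor.
import onnx

model = onnx.load("model.onnx")
graph = model.graph

for node in [n for n in graph.node if n.op_type == "AliasWithName"]:
    src, dst = node.input[0], node.output[0]
    # Point every consumer of the alias output at the alias input instead.
    for other in graph.node:
        for i, name in enumerate(other.input):
            if name == dst:
                other.input[i] = src
    # If a graph output was produced by the alias, rename it to the source tensor.
    for out in graph.output:
        if out.name == dst:
            out.name = src
    graph.node.remove(node)

onnx.save(model, "model_no_alias.onnx")
```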

ppwwyyxx commented 4 years ago

As mentioned by @cbasavaraj : it's one line of code to save the onnx model, but even if you do, you won't be able to easily use it for deployment.

So there seems to be no point in doing so, unless a way to use it for deployment is developed.

lucasjinreal commented 4 years ago

I believe one could use a TensorRT plugin to solve this, but the question is how many custom ops are used inside this ONNX model, and whether all of them are necessary.
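Before writing plugins, it may help to enumerate which op types the exported graph actually uses; anything outside the standard ONNX operator set (AliasWithName and other Caffe2-only ops) is what a TensorRT/ONNX parser would need a plugin for. A small sketch, assuming the model was saved as model.onnx:

```python
# List every op type in the exported graph with its occurrence count, to see
# how many custom (non-standard) ops would need TensorRT plugins.
from collections import Counter

import onnx

model = onnx.load("model.onnx")
op_counts = Counter(node.op_type for node in model.graph.node)
for op_type, count in sorted(op_counts.items()):
    print(f"{op_type}: {count}")
```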

mozheng commented 4 years ago

> (quoting @cbasavaraj's reply above)

I did the same steps as you, and I'm following this question closely.

ppwwyyxx commented 4 years ago

added in https://detectron2.readthedocs.io/modules/export.html#detectron2.export.export_onnx_model
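A hedged usage sketch of that helper, assuming export_onnx_model follows the same (cfg, model, inputs) convention as export_caffe2_model and returns an onnx.ModelProto (the exact signature and accepted input format should be checked against the linked documentation for your detectron2 version):

```python
# Hedged sketch: build a model from a model-zoo config, export it to ONNX via
# detectron2.export.export_onnx_model, and save the resulting proto.
import onnx
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import export_onnx_model
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# One dummy input in detectron2's standard dict format (CHW tensor), used for tracing.
inputs = [{"image": torch.zeros(3, 480, 640)}]

onnx_model = export_onnx_model(cfg, model, inputs)  # assumed to return onnx.ModelProto
onnx.save(onnx_model, "model.onnx")
```

Note that, as the documentation quoted below states, the exported graph still contains Caffe2-specific custom ops, so it cannot be directly executed by other runtimes without further transformation.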

mpjlu commented 4 years ago

> (quoting @lucasjinreal's reply above)

You can use the caffe2/tensorrt module for Caffe2 model deployment. You don't need to add plugins to use TensorRT with Detectron models. https://zhuanlan.zhihu.com/p/122399743

Musbell commented 4 years ago

@ppwwyyxx according to the documentation: "Export a detectron2 model to ONNX format. Note that the exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by other runtime. Post-processing or transformation passes may be applied on the model to accommodate different runtimes."

Can you please highlight how the post-processing or transformation passes could be done? After exporting to ONNX format, I was unable to use the model with the Intel OpenVINO Model Optimizer.

I am getting these errors:

Cannot infer shapes or values for node "im_info".

There is no registered "infer" function for node "im_info" with op = "AliasWithName". Please implement this function in the extensions.

rs9899 commented 4 years ago

Can someone help me export detectron2, especially DensePose, to a TensorFlow model?

My end goal is to run DensePose in JavaScript, so TensorFlow.js seems like the closest option.

Thanks

FahriBilici commented 3 years ago

> (quoting @rs9899's question above)

Hello, I am also looking to use my model with TensorFlow.js. Did you solve this?