facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

Properly convert a Detectron2 model to ONNX for Deployment #4414

Closed vitorbds closed 2 years ago

vitorbds commented 2 years ago

Hello,

I am trying to convert a Detectron2 model to the ONNX format and run inference without depending on detectron2 at inference time.

There is some information about this at https://detectron2.readthedocs.io/en/latest/tutorials/deployment.html, but the implementation is constantly being updated and the documentation is not clear enough to carry out this task.

Can someone help me with a demo/tutorial on how to do this?

@thiagocrepaldi

Some information:

My model was trained using pre-trained weight from:

```python
'faster_rcnn_50': {
    'model_path': 'COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml',
    'weights_path': 'model_final_280758.pkl'
},
```

I have 4 classes.

Of course, now I have my own weights. My model was saved in .pth format.

I used my own dataset, with .png images.

Code in Python

thiagocrepaldi commented 2 years ago

Hi @vitorbds, see if this example helps you.

It exports COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml to ONNX and runs inference with it on ONNX Runtime
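
In rough outline, the flow in that PR looks like the sketch below (a minimal example, not the PR's exact unit test; the config name, input size, and file names are placeholders):

```python
# Minimal sketch: export a detectron2 Faster R-CNN to ONNX with
# TracingAdapter, then run it on ONNX Runtime without detectron2.
import torch
import onnxruntime as ort

from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import TracingAdapter
from detectron2.modeling import build_model

CONFIG = "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(CONFIG))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(CONFIG)
cfg.MODEL.DEVICE = "cpu"

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# detectron2 models take a list of dicts; TracingAdapter flattens that
# interface into plain tensors so torch.onnx.export can trace it.
image = torch.rand(3, 800, 1067)  # CHW float tensor; a real image works too
adapter = TracingAdapter(model, [{"image": image}])
torch.onnx.export(adapter, adapter.flattened_inputs, "faster_rcnn.onnx",
                  opset_version=11)

# Inference with no detectron2 dependency.
sess = ort.InferenceSession("faster_rcnn.onnx")
outputs = sess.run(None, {sess.get_inputs()[0].name: image.numpy()})
print([o.shape for o in outputs])  # boxes, classes, scores, ...
```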


AidenFather commented 2 years ago

Hi @thiagocrepaldi Can the function in the link handle COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml as well? Can it export this to ONNX and run inference on ONNX Runtime? Please let me know. Thanks!

thiagocrepaldi commented 2 years ago

Hi @thiagocrepaldi Can the function in the link handle COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml as well? Can it export this to ONNX and run inference on ONNX Runtime? Please let me know. Thanks!

I just replaced COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml with COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml and it ran. The graph looked OK too.


frankvp11 commented 2 years ago

@thiagocrepaldi Sorry to bother you - I don't really understand the link to the function you were talking about; all I see is a pull request. Could you share a working script that exports the model properly to ONNX? Thanks

thiagocrepaldi commented 2 years ago

@thiagocrepaldi Sorry to bother you - I don't really understand the link to the function you were talking about; all I see is a pull request. Could you share a working script that exports the model properly to ONNX? Thanks

Hi Frank, the link was a pull request whose unit tests do the export you want. All you need to do is copy the unit test from the PR and remove/replace the helper functions we use to export and process the results.

Which model are you specifically trying to export? I think I need to work on the deployment documentation a little bit :)

frankvp11 commented 2 years ago

Yeah, my end goal is to export a custom detectron2 model (currently using the balloon example) to ONNX with the export_model.py script in the tools directory, and then convert it again to TensorRT. Do you know if that's possible? The conversion on your end is fine, but the second I get to the TensorRT conversion it fails, and they tell me that your script is broken and that they can't help me. They gave me a base model that they exported (using your script from a while back, I'm assuming) and I've been using that, but then I ran into issues with the custom-model part, because your script is necessary for it to work. I can explain more if necessary, but I think this suffices. Also, to answer your question: I'm trying to do mask_rcnn_R_50_FPN_3x. @thiagocrepaldi

frankvp11 commented 2 years ago

Would it help if I were to check out a specific commit? Right now the errors I'm getting with TensorRT (which I have reported, and been ignored) are that when I use their export script the list assignment index is out of range, and when I skip this step and try to just create their engine it gives me an unrecognized op. I know this isn't really your domain, but they told me the problem was in Detectron2's export script, and I really need it to work for this whole project of mine to work.

frankvp11 commented 2 years ago

Here's the official output from their commands. create_onnx.py:

```
INFO:ModelHelper:ONNX graph loaded successfully
INFO:ModelHelper:Number of FPN output channels is 256
INFO:ModelHelper:Number of classes is 1
INFO:ModelHelper:First NMS max proposals is 1000
INFO:ModelHelper:First NMS iou threshold is 0.7
INFO:ModelHelper:First NMS score threshold is 0.01
INFO:ModelHelper:First ROIAlign type is ROIAlignV2
INFO:ModelHelper:First ROIAlign pooled size is 7
INFO:ModelHelper:First ROIAlign sampling ratio is 0
INFO:ModelHelper:Second NMS max proposals is 100
INFO:ModelHelper:Second NMS iou threshold is 0.5
INFO:ModelHelper:Second NMS score threshold is 0.7
INFO:ModelHelper:Second ROIAlign type is ROIAlignV2
INFO:ModelHelper:Second ROIAlign pooled size is 14
INFO:ModelHelper:Second ROIAlign sampling ratio is 0
INFO:ModelHelper:Individual mask output resolution is 28x28
Traceback (most recent call last):
  File "/content/TensorRT/samples/python/detectron2/create_onnx.py", line 658, in <module>
    main(args)
  File "/content/TensorRT/samples/python/detectron2/create_onnx.py", line 637, in main
    det2_gs.update_preprocessor(args.batch_size)
  File "/content/TensorRT/samples/python/detectron2/create_onnx.py", line 208, in update_preprocessor
    del self.graph.inputs[1]
IndexError: list assignment index out of range
```

and then build_engine.py:

```
[08/17/2022-17:12:45] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 0, GPU 6061 (MiB)
[08/17/2022-17:12:50] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +0, GPU +68, now: CPU 0, GPU 6129 (MiB)
build_engine.py:131: DeprecationWarning: Use set_memory_pool_limit instead.
  self.config.max_workspace_size = workspace * (2 ** 30)
[08/17/2022-17:12:52] [TRT] [W] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/17/2022-17:12:52] [TRT] [I] No importer registered for op: Mod. Attempting to import as plugin.
[08/17/2022-17:12:52] [TRT] [I] Searching for plugin: Mod, plugin_version: 1, plugin_namespace:
ERROR:EngineBuilder:Failed to load ONNX file: /content/model.onnx/model.onnx
ERROR:EngineBuilder:In node 99 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
```

Also @thiagocrepaldi, if you want to help, helping solve this issue might be useful for me too: https://github.com/facebookresearch/detectron2/issues/4354

njaouen commented 2 years ago

Hi. I am actually trying to achieve the same type of export for a mask_rcnn (detectron2 -> onnx) or (detectron2 -> pytorch). Did you figure it out? I am going to try to extract the necessary part from the commit link you provided @thiagocrepaldi As mentioned, it would be amazing if you could find time to work on the deployment doc ;)

frankvp11 commented 2 years ago

Yeah, I managed to get the export working for detectron2 mask_rcnn_R_50_FPN to TensorRT; however, I have yet to be able to run it in a Dockerfile on the Jetson TX2. I am currently working on that. As of right now, though, I have a working system on the Jetson TX2 that I would be happy to share, plus a converter on Google Colab, fully perfected.

njaouen commented 2 years ago

Nice, well done. I would love to see the Google Colab converter, to check whether it solves my case, if that works for you!

frankvp11 commented 2 years ago

https://colab.research.google.com/drive/1ZFdkdIAjD0ldhJ9TEhzTL1bndLyqm2Rd?usp=sharing That's the Colab file. Note that I used the balloon dataset, so you might need to change it to your preferences. Read through the code as well before you use it.

thiagocrepaldi commented 2 years ago

@frankvp11 great news that you managed to convert the model to onnx.

Looking at your notebook, it seems no changes to the original ONNX export code in detectron2 were needed. Is that right? The only concern I have is that you used detectron2/tools/deploy/export_model.py --export-method caffe2_tracing, which should not be used unless you are going to execute the resulting ONNX model with Caffe2. It can contain Caffe2-specific nodes that are not understood by other backends, such as ONNX Runtime or even TensorRT.

Did you try --export-method tracing instead?
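
One way to check whether an exported file still contains Caffe2-specific nodes is to look at the node domains; standard ONNX ops use the default domain (a quick sketch; the exact custom-domain name depends on the torch version):

```python
import onnx

model = onnx.load("model.onnx")
# Standard ONNX ops live in the default domain ("" or "ai.onnx");
# caffe2_tracing exports carry their custom ops in a non-standard domain.
custom = {(n.domain, n.op_type) for n in model.graph.node
          if n.domain not in ("", "ai.onnx")}
print(custom or "all nodes are standard ONNX")
```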

thiagocrepaldi commented 2 years ago

Hi. I am actually trying to achieve the same type of export for a mask_rcnn (detectron2 -> onnx) or (detectron2 -> pytorch). Did you figure it out? I am going to try to extract the necessary part from the commit link you provided @thiagocrepaldi As mentioned, it would be amazing if you could find time to work on the deployment doc ;)

Could you file an issue and send it my way? This issue is meant for something else, and we can probably close it as resolved, since @vitorbds already confirmed that the model he asked about works as expected.

frankvp11 commented 2 years ago

I could absolutely try that; however, it's my understanding that TensorRT anticipated that you would have nodes that wouldn't be understood by TensorRT, which is why they created a second converter called create_onnx.py.

thiagocrepaldi commented 2 years ago

I could absolutely try that; however, it's my understanding that TensorRT anticipated that you would have nodes that wouldn't be understood by TensorRT, which is why they created a second converter called create_onnx.py.

It is possible that TensorRT does not implement all ONNX operators, but my PR allows exporting the model using only standard ONNX operators, so in theory the detectron2 exporter is correct.

thiagocrepaldi commented 2 years ago

Would it help if I were to check out a specific commit? Right now the errors I'm getting with TensorRT (which I have reported, and been ignored) are that when I use their export script the list assignment index is out of range, and when I skip this step and try to just create their engine it gives me an unrecognized op. I know this isn't really your domain, but they told me the problem was in Detectron2's export script, and I really need it to work for this whole project of mine to work.

What was the missing op you mentioned? That is probably the root cause: TensorRT not implementing an operator. My guess is GridSampler, as it is a new addition to ONNX/PyTorch, so probably to TensorRT too.
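
To find the offending op, one option is to dump the distinct op types of the exported graph and compare them against TensorRT's supported-operator list (sketch):

```python
from collections import Counter

import onnx

model = onnx.load("model.onnx")
ops = Counter(n.op_type for n in model.graph.node)
for op, count in sorted(ops.items()):
    print(f"{op}: {count}")  # ops like Mod or GridSampler would show up here
```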

frankvp11 commented 2 years ago

When I try it with the latest detectron2 and --export-method tracing, I get an unknown scalar error.

thiagocrepaldi commented 2 years ago

Intriguing. I would expect that from scripting, not tracing. A repro and backtrace would be great

frankvp11 commented 2 years ago

Give me one second to get that for you

frankvp11 commented 2 years ago
[08/25 18:48:21 detectron2]: Command line arguments: Namespace(config_file='/content/output.yaml', export_method='tracing', format='onnx', opts=['MODEL.DEVICE', 'cuda', 'MODEL.WEIGHTS', '/content/output/model_final.pth'], output='/content/model.onnx', run_eval=False, sample_image='/content/new.jpg')
[08/25 18:48:27 d2.checkpoint.c2_model_loading]: Following weights matched with model:
| Names in Model                                  | Names in Checkpoint                                                                                  | Shapes                                          |
|:------------------------------------------------|:-----------------------------------------------------------------------------------------------------|:------------------------------------------------|
| backbone.bottom_up.res2.0.conv1.*               | backbone.bottom_up.res2.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (64,) (64,) (64,) (64,) (64,64,1,1)             |
| backbone.bottom_up.res2.0.conv2.*               | backbone.bottom_up.res2.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (64,) (64,) (64,) (64,) (64,64,3,3)             |
| backbone.bottom_up.res2.0.conv3.*               | backbone.bottom_up.res2.0.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,64,1,1)        |
| backbone.bottom_up.res2.0.shortcut.*            | backbone.bottom_up.res2.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (256,) (256,) (256,) (256,) (256,64,1,1)        |
| backbone.bottom_up.res2.1.conv1.*               | backbone.bottom_up.res2.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (64,) (64,) (64,) (64,) (64,256,1,1)            |
| backbone.bottom_up.res2.1.conv2.*               | backbone.bottom_up.res2.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (64,) (64,) (64,) (64,) (64,64,3,3)             |
| backbone.bottom_up.res2.1.conv3.*               | backbone.bottom_up.res2.1.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,64,1,1)        |
| backbone.bottom_up.res2.2.conv1.*               | backbone.bottom_up.res2.2.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (64,) (64,) (64,) (64,) (64,256,1,1)            |
| backbone.bottom_up.res2.2.conv2.*               | backbone.bottom_up.res2.2.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (64,) (64,) (64,) (64,) (64,64,3,3)             |
| backbone.bottom_up.res2.2.conv3.*               | backbone.bottom_up.res2.2.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,64,1,1)        |
| backbone.bottom_up.res3.0.conv1.*               | backbone.bottom_up.res3.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,256,1,1)       |
| backbone.bottom_up.res3.0.conv2.*               | backbone.bottom_up.res3.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,128,3,3)       |
| backbone.bottom_up.res3.0.conv3.*               | backbone.bottom_up.res3.0.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,128,1,1)       |
| backbone.bottom_up.res3.0.shortcut.*            | backbone.bottom_up.res3.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (512,) (512,) (512,) (512,) (512,256,1,1)       |
| backbone.bottom_up.res3.1.conv1.*               | backbone.bottom_up.res3.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,512,1,1)       |
| backbone.bottom_up.res3.1.conv2.*               | backbone.bottom_up.res3.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,128,3,3)       |
| backbone.bottom_up.res3.1.conv3.*               | backbone.bottom_up.res3.1.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,128,1,1)       |
| backbone.bottom_up.res3.2.conv1.*               | backbone.bottom_up.res3.2.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,512,1,1)       |
| backbone.bottom_up.res3.2.conv2.*               | backbone.bottom_up.res3.2.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,128,3,3)       |
| backbone.bottom_up.res3.2.conv3.*               | backbone.bottom_up.res3.2.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,128,1,1)       |
| backbone.bottom_up.res3.3.conv1.*               | backbone.bottom_up.res3.3.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,512,1,1)       |
| backbone.bottom_up.res3.3.conv2.*               | backbone.bottom_up.res3.3.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (128,) (128,) (128,) (128,) (128,128,3,3)       |
| backbone.bottom_up.res3.3.conv3.*               | backbone.bottom_up.res3.3.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,128,1,1)       |
| backbone.bottom_up.res4.0.conv1.*               | backbone.bottom_up.res4.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,512,1,1)       |
| backbone.bottom_up.res4.0.conv2.*               | backbone.bottom_up.res4.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,256,3,3)       |
| backbone.bottom_up.res4.0.conv3.*               | backbone.bottom_up.res4.0.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
| backbone.bottom_up.res4.0.shortcut.*            | backbone.bottom_up.res4.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (1024,) (1024,) (1024,) (1024,) (1024,512,1,1)  |
| backbone.bottom_up.res4.1.conv1.*               | backbone.bottom_up.res4.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
| backbone.bottom_up.res4.1.conv2.*               | backbone.bottom_up.res4.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,256,3,3)       |
| backbone.bottom_up.res4.1.conv3.*               | backbone.bottom_up.res4.1.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
| backbone.bottom_up.res4.2.conv1.*               | backbone.bottom_up.res4.2.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
| backbone.bottom_up.res4.2.conv2.*               | backbone.bottom_up.res4.2.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,256,3,3)       |
| backbone.bottom_up.res4.2.conv3.*               | backbone.bottom_up.res4.2.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
| backbone.bottom_up.res4.3.conv1.*               | backbone.bottom_up.res4.3.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
| backbone.bottom_up.res4.3.conv2.*               | backbone.bottom_up.res4.3.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,256,3,3)       |
| backbone.bottom_up.res4.3.conv3.*               | backbone.bottom_up.res4.3.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
| backbone.bottom_up.res4.4.conv1.*               | backbone.bottom_up.res4.4.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
| backbone.bottom_up.res4.4.conv2.*               | backbone.bottom_up.res4.4.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,256,3,3)       |
| backbone.bottom_up.res4.4.conv3.*               | backbone.bottom_up.res4.4.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
| backbone.bottom_up.res4.5.conv1.*               | backbone.bottom_up.res4.5.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,1024,1,1)      |
| backbone.bottom_up.res4.5.conv2.*               | backbone.bottom_up.res4.5.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (256,) (256,) (256,) (256,) (256,256,3,3)       |
| backbone.bottom_up.res4.5.conv3.*               | backbone.bottom_up.res4.5.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)  |
| backbone.bottom_up.res5.0.conv1.*               | backbone.bottom_up.res5.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,1024,1,1)      |
| backbone.bottom_up.res5.0.conv2.*               | backbone.bottom_up.res5.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,512,3,3)       |
| backbone.bottom_up.res5.0.conv3.*               | backbone.bottom_up.res5.0.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)  |
| backbone.bottom_up.res5.0.shortcut.*            | backbone.bottom_up.res5.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (2048,) (2048,) (2048,) (2048,) (2048,1024,1,1) |
| backbone.bottom_up.res5.1.conv1.*               | backbone.bottom_up.res5.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,2048,1,1)      |
| backbone.bottom_up.res5.1.conv2.*               | backbone.bottom_up.res5.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,512,3,3)       |
| backbone.bottom_up.res5.1.conv3.*               | backbone.bottom_up.res5.1.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)  |
| backbone.bottom_up.res5.2.conv1.*               | backbone.bottom_up.res5.2.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,2048,1,1)      |
| backbone.bottom_up.res5.2.conv2.*               | backbone.bottom_up.res5.2.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (512,) (512,) (512,) (512,) (512,512,3,3)       |
| backbone.bottom_up.res5.2.conv3.*               | backbone.bottom_up.res5.2.conv3.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}    | (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)  |
| backbone.bottom_up.stem.conv1.*                 | backbone.bottom_up.stem.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight}      | (64,) (64,) (64,) (64,) (64,3,7,7)              |
| backbone.fpn_lateral2.*                         | backbone.fpn_lateral2.{bias,weight}                                                                  | (256,) (256,256,1,1)                            |
| backbone.fpn_lateral3.*                         | backbone.fpn_lateral3.{bias,weight}                                                                  | (256,) (256,512,1,1)                            |
| backbone.fpn_lateral4.*                         | backbone.fpn_lateral4.{bias,weight}                                                                  | (256,) (256,1024,1,1)                           |
| backbone.fpn_lateral5.*                         | backbone.fpn_lateral5.{bias,weight}                                                                  | (256,) (256,2048,1,1)                           |
| backbone.fpn_output2.*                          | backbone.fpn_output2.{bias,weight}                                                                   | (256,) (256,256,3,3)                            |
| backbone.fpn_output3.*                          | backbone.fpn_output3.{bias,weight}                                                                   | (256,) (256,256,3,3)                            |
| backbone.fpn_output4.*                          | backbone.fpn_output4.{bias,weight}                                                                   | (256,) (256,256,3,3)                            |
| backbone.fpn_output5.*                          | backbone.fpn_output5.{bias,weight}                                                                   | (256,) (256,256,3,3)                            |
| proposal_generator.rpn_head.anchor_deltas.*     | proposal_generator.rpn_head.anchor_deltas.{bias,weight}                                              | (12,) (12,256,1,1)                              |
| proposal_generator.rpn_head.conv.*              | proposal_generator.rpn_head.conv.{bias,weight}                                                       | (256,) (256,256,3,3)                            |
| proposal_generator.rpn_head.objectness_logits.* | proposal_generator.rpn_head.objectness_logits.{bias,weight}                                          | (3,) (3,256,1,1)                                |
| roi_heads.box_head.fc1.*                        | roi_heads.box_head.fc1.{bias,weight}                                                                 | (1024,) (1024,12544)                            |
| roi_heads.box_head.fc2.*                        | roi_heads.box_head.fc2.{bias,weight}                                                                 | (1024,) (1024,1024)                             |
| roi_heads.box_predictor.bbox_pred.*             | roi_heads.box_predictor.bbox_pred.{bias,weight}                                                      | (4,) (4,1024)                                   |
| roi_heads.box_predictor.cls_score.*             | roi_heads.box_predictor.cls_score.{bias,weight}                                                      | (2,) (2,1024)                                   |
| roi_heads.mask_head.deconv.*                    | roi_heads.mask_head.deconv.{bias,weight}                                                             | (256,) (256,256,2,2)                            |
| roi_heads.mask_head.mask_fcn1.*                 | roi_heads.mask_head.mask_fcn1.{bias,weight}                                                          | (256,) (256,256,3,3)                            |
| roi_heads.mask_head.mask_fcn2.*                 | roi_heads.mask_head.mask_fcn2.{bias,weight}                                                          | (256,) (256,256,3,3)                            |
| roi_heads.mask_head.mask_fcn3.*                 | roi_heads.mask_head.mask_fcn3.{bias,weight}                                                          | (256,) (256,256,3,3)                            |
| roi_heads.mask_head.mask_fcn4.*                 | roi_heads.mask_head.mask_fcn4.{bias,weight}                                                          | (256,) (256,256,3,3)                            |
| roi_heads.mask_head.predictor.*                 | roi_heads.mask_head.predictor.{bias,weight}                                                          | (1,) (1,256,1,1)                                |
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/image_list.py:85: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert t.shape[:-2] == tensors[0].shape[:-2], t.shape
/usr/local/lib/python3.7/dist-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:155: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert tensor.dim() == 2 and tensor.size(-1) == 4, tensor.size()
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/modeling/proposal_generator/proposal_utils.py:79: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1)
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:155: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert tensor.dim() == 2 and tensor.size(-1) == 4, tensor.size()
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/modeling/proposal_generator/proposal_utils.py:106: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not valid_mask.all():
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:191: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert torch.isfinite(self.tensor).all(), "Box tensor contains infinite or NaN!"
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:192: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  h, w = box_size
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/layers/nms.py:15: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert boxes.shape[-1] == 4
/usr/local/lib/python3.7/dist-packages/torch/__init__.py:676: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert condition, message
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/layers/roi_align.py:55: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert rois.dim() == 2 and rois.size(1) == 5
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/modeling/roi_heads/fast_rcnn.py:138: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not valid_mask.all():
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/modeling/roi_heads/fast_rcnn.py:143: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  num_bbox_reg_classes = boxes.shape[1] // 4
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:155: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert tensor.dim() == 2 and tensor.size(-1) == 4, tensor.size()
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:191: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert torch.isfinite(self.tensor).all(), "Box tensor contains infinite or NaN!"
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/structures/boxes.py:192: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  h, w = box_size
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/modeling/roi_heads/fast_rcnn.py:155: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if num_bbox_reg_classes == 1:
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/layers/nms.py:15: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert boxes.shape[-1] == 4
/usr/local/lib/python3.7/dist-packages/torch/__init__.py:676: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert condition, message
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/layers/roi_align.py:55: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert rois.dim() == 2 and rois.size(1) == 5
/usr/local/lib/python3.7/dist-packages/detectron2-0.6-py3.7-linux-x86_64.egg/detectron2/modeling/roi_heads/mask_head.py:139: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if cls_agnostic_mask:
Traceback (most recent call last):
  File "/content/detectron2/tools/deploy/export_model.py", line 226, in <module>
    exported_model = export_tracing(torch_model, sample_inputs)
  File "/content/detectron2/tools/deploy/export_model.py", line 132, in export_tracing
    torch.onnx.export(traceable_model, (image,), f, opset_version=STABLE_ONNX_OPSET_VERSION)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py", line 320, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 111, in export
    custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 729, in _export
    dynamic_axes=dynamic_axes)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 501, in _model_to_graph
    module=module)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 216, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py", line 373, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py", line 1032, in _run_symbolic_function
    return symbolic_fn(g, *inputs, **attrs)
  File "/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py", line 1996, in to
    return g.op("Cast", self, to_i=sym_help.cast_pytorch_to_onnx[dtype])
KeyError: 'UNKNOWN_SCALAR'

That was the output, and this was the command:

```
%cd /content/detectron2/
!git checkout 330dd329fc43da11a4ad64148c75a891fd068ae1
!python3 setup.py install
```

Use MODEL.WEIGHTS with the path to your weights if you are using custom ones (not working right now):

```
!python /content/detectron2/tools/deploy/export_model.py --config-file /content/output.yaml --output /content/model.onnx --format onnx --sample-image /content/new.jpg --export-method tracing MODEL.DEVICE cuda MODEL.WEIGHTS /content/output/model_final.pth
```

Same notebook btw

frankvp11 commented 2 years ago

The output looks weird, sorry about that. I can try editing it if you want @thiagocrepaldi

thiagocrepaldi commented 2 years ago

It is fine for now. I will look into https://github.com/facebookresearch/detectron2/issues/4354 now and come back to this one next.

If this model is not the same as @vitorbds's, please create a new issue and let's close this one. Otherwise, this issue will become a catch-all for every ONNX issue there is lol

frankvp11 commented 2 years ago

Alright no problem. You can close

thiagocrepaldi commented 2 years ago

@frankvp11 I have a fix/workaround for https://github.com/facebookresearch/detectron2/issues/4354 and could look into your issue. Did you create a new issue with a straightforward repro? Preferably a Python file that I can just run and see the problem.

If you are using an old torch, I suggest trying the master branch or maybe 1.12.1. Also, change your detectron2 code to use the latest ONNX opset (aka 16) in this file: https://github.com/facebookresearch/detectron2/blob/main/detectron2/export/__init__.py#L19

```python
STABLE_ONNX_OPSET_VERSION = 16  # default was 11
```
njaouen commented 2 years ago

Hi. I am actually trying to achieve the same type of export for a mask_rcnn (detectron2 -> onnx) or (detectron2 -> pytorch). Did you figure it out? I am going to try to extract the necessary part from the commit link you provided @thiagocrepaldi As mentioned, it would be amazing if you could find time to work on the deployment doc ;)

Could you file an issue and send it my way? This issue is meant for something else, and we can probably close it as resolved, since @vitorbds already confirmed that the model he asked about works as expected.

Hi. Thanks for your reply. I actually figured it out. It works well with both tracing and scripting, with PyTorch inference. It took a bit of work to reshape the 28x28 matrix, but it's working perfectly now.

njaouen commented 2 years ago

https://colab.research.google.com/drive/1ZFdkdIAjD0ldhJ9TEhzTL1bndLyqm2Rd?usp=sharing That's the Colab file. Note that I used the balloon dataset, so you might need to change it to your preferences. Read through the code as well before you use it.

Thank you very much for sharing. Got me on the right track !!

thiagocrepaldi commented 2 years ago

@vitorbds @ppwwyyxx do you mind closing this one? I believe the original issue was fixed and the others are being tracked in separate issues.

codermaninturkey commented 1 year ago

Hi, I have a model trained with faster_rcnn_R_101_FPN_3x and I want to convert it to ONNX. The conversion succeeds, but when I run inference with onnxruntime, I get the following output:

```
['output', 'value.11', 'value.7', 'onnx::Split_1073'] ['images']

outputs is [array([], shape=(0, 4), dtype=float32), array([], dtype=int64), array([], dtype=float32), array([ 800, 1067], dtype=int64)]
```

Can you help me?

frankvp11 commented 1 year ago

I think I can try. Let's establish some things first. First things first: make sure the picture you are using is valid, i.e. it has the right size, it contains objects you can detect, and so on. Then let's talk about the output you are receiving. It's been a while since I've used numpy, and I haven't used onnxruntime, so I might be totally useless. However, at first glance the numpy array sizes/shapes seem correct, and potentially the output isn't. My suggestion would be to re-export a new model: start from scratch, check whether the error reproduces, then test again with the newly exported model. You can also use a tool like Netron to make sure the model "shape" is correct, i.e. the right layers in the right positions. If that isn't correct, your conversion process is most likely broken. Otherwise, I'd look into other ways of making predictions with onnxruntime (maybe you are doing it wrong, I don't know). You can also check out my article on Medium, it might help: https://medium.com/@frankvanpaassen3/how-to-optimize-custom-detectron2-models-with-tensorrt-2cd710954ad3
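
For reference, here is roughly how I'd sanity-check the exported model with onnxruntime (a sketch; the single CHW/BGR input and its size are assumptions based on the tracing export, so check your own model in Netron first):

```python
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
print("model expects:", inp.name, inp.shape)

# The tracing export usually takes one CHW float32 tensor, BGR order,
# at the size the model was traced with (assumption; verify with Netron).
img = cv2.imread("test.jpg")                     # HWC, BGR, uint8
img = cv2.resize(img, (1067, 800))               # cv2 takes (width, height)
chw = img.transpose(2, 0, 1).astype(np.float32)  # HWC -> CHW
outputs = sess.run(None, {inp.name: chw})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)
```

If every output comes back empty, the preprocessing (scale or channel order) is the usual suspect.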

If all this fails, I'd suggest opening a new issue, and getting help from actual Detectron2 people.

codermaninturkey commented 1 year ago

Thanks frankvp11, I'll try what you said, and I'll read your article.

frankvp11 commented 1 year ago

Did you end up getting it to work?

satishjasthi commented 1 year ago

Hi @frankvp11, I'm trying to export a RetinaNet model trained with detectron2 to the ONNX format; it would be great if you could help me with this export. Should I use caffe2_tracing or just tracing as the export method?

frankvp11 commented 1 year ago

In my article I said caffe2_tracing, but I just checked /samples/python/detectron2 and it says Caffe2 is deprecated, so my guess would be regular tracing. I suppose if you have the time and willpower you could try both; just make sure you have the right versions of everything (specifically on the Caffe2 side).

htlbayytq commented 1 year ago

Hi Githubers, has anyone successfully figured this out?

I have been trying to convert a Detectron2 model to ONNX for deployment for a long time, but still haven't succeeded.

I followed this workflow: TensorRT/samples/python/detectron2 at main · NVIDIA/TensorRT · GitHub

But I always get an error: KeyError: 'UNKNOWN_SCALAR'

It would be great that someone could share some helpful information ^^

frankvp11 commented 1 year ago

Hi @htlbayytq, I was able to convert the R-CNN R50 as outlined in the TensorRT demo. From what I remember, panoptic_fpn_R_50_1x is not yet supported. That page seems relevant, so we'll go from there. I don't know if you've read my article, but in it I outline how I got this to work; it might also be worth checking that all the installations are correct and that you have tried everything suggested by the actual Detectron2 staff in this issue. If all that still fails, you might have to wait for a facebookresearch rep to come help.

codermaninturkey commented 1 year ago

Did you end up getting it to work?

Hi Frank, sorry, it didn't work. I don't think the reason is ONNX itself: detectron2 already says that it does not officially support ONNX export. So I changed approach and asked how I can speed up my Detectron model instead; I've made some progress on that, and for now it seems enough for me. Although detectron2 models can be exported to ONNX, I could never get them to run with ONNX Runtime. I think the detectron team should look into this and provide proper support; otherwise, I don't think there is a clear solution.

codermaninturkey commented 1 year ago

Also, why do we need ONNX for detection in the first place? Speed? Smaller file size? In my tests and my own work, I have seen that detectron models can be sped up and shrunk if desired. Just playing around with the ROI layer and the image size quickly gives satisfactory results.

frankvp11 commented 1 year ago

Since you are unable to get them to run with onnxruntime, perhaps you could try other optimization tools (TensorRT). I also agree that the support for ONNX conversion is severely lacking, but there's nothing I can do about that. As for why we use ONNX: for me it was required to build a TensorRT engine. It basically converts the model to a different language (think C++ vs. Python) so that engine-building tools like onnxruntime and TensorRT can do what you did (play with sizes, etc.). I'm not an expert, though, so don't quote me on that. As far as I remember, the things in my article worked ~6 months ago; perhaps you could go back to those versions if you need to.

bouachalazhar commented 1 year ago

I would like to export a detectron2 model to OpenVINO, because I have Intel Iris Xe graphics and an NCS 2. How can I do it?

frankvp11 commented 1 year ago

Hmm. I think if you follow the instructions posted in the export directory it should work. However, I'm assuming you're here because you got an error while trying? I'm not familiar with either OpenVINO or the NCS 2, so if that's the issue you might need to go to their forums (if they exist). As for converting to OpenVINO: if detectron2 doesn't have a file for it (or OpenVINO doesn't provide a "translator" file), I don't think it's possible. It would only be possible if OpenVINO consumed a pre-established model format that detectron2 provides (Caffe2, ONNX, TorchScript) or created their own.

bouachalazhar commented 1 year ago

Hmm. I think if you follow the instructions posted in the export directory it should work. However, I'm assuming you're here because you got an error while trying? I'm not familiar with either OpenVINO or the NCS 2, so if that's the issue you might need to go to their forums (if they exist). As for converting to OpenVINO: if detectron2 doesn't have a file for it (or OpenVINO doesn't provide a "translator" file), I don't think it's possible. It would only be possible if OpenVINO consumed a pre-established model format that detectron2 provides (Caffe2, ONNX, TorchScript) or created their own.

You think it isn't possible. For detectron2 I need a model with a .pt/.yaml/.pth extension and weights in .pt/.pkl/.pth, but OpenVINO wants model.xml and weights.bin. I can already export a plain PyTorch model to OpenVINO IR and use my Intel GPU, but I can't do it with detectron2 yet.
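
If the detectron2 -> ONNX export from this thread works, the remaining step would just be ONNX -> IR. A sketch, assuming OpenVINO's Python conversion API (openvino.tools.mo.convert_model, available in the 2022.x releases):

```python
# Hypothetical paths; "model.onnx" is the file produced by export_model.py.
from openvino.runtime import Core, serialize
from openvino.tools.mo import convert_model

ov_model = convert_model("model.onnx")            # ONNX -> ov.Model
serialize(ov_model, "model.xml")                  # writes model.xml + model.bin
compiled = Core().compile_model(ov_model, "GPU")  # "GPU" = Intel iGPU plugin
```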

datinje commented 1 year ago

Hi, I have a model trained with faster_rcnn_R_101_FPN_3x and I want to convert it to ONNX. The conversion succeeds, but when I run inference with onnxruntime, I get the following output:

```
['output', 'value.11', 'value.7', 'onnx::Split_1073'] ['images']

outputs is [array([], shape=(0, 4), dtype=float32), array([], dtype=int64), array([], dtype=float32), array([ 800, 1067], dtype=int64)]
```

Can you help me?

In case this helps: I found something weird in detectron2 with the operator onnx::Split. Despite the fix to split for variable sizes, there is a problem when the size is 1: I had to avoid calling split in that case and just return the original structure before the split, wrapped in []. For example, see the sketch below.
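
A sketch of that workaround, applied to FastRCNNOutputLayers.predict_probs in detectron2/modeling/roi_heads/fast_rcnn.py (the guard condition is my reading of the fix; only the two return lines come from the report below):

```python
from torch.nn import functional as F

def predict_probs(self, predictions, proposals):
    scores, _ = predictions
    num_inst_per_image = [len(p) for p in proposals]
    probs = F.softmax(scores, dim=-1)
    if len(num_inst_per_image) == 1:
        # a size-1 split exported a misbehaving onnx::Split node, so
        # return the tensor wrapped in a list instead of calling split
        return [probs]
    return probs.split(num_inst_per_image, dim=0)
```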

So now I can infer with the onnxruntime CPU and CUDA EPs, but not the TensorRT EP. With the TensorRT EP I am now hitting an issue with the operator onnx::ReduceMax; see https://github.com/pytorch/pytorch/issues/97344

bouachalazhar commented 1 year ago

Hi, I have a model trained with faster_rcnn_R_101_FPN_3x and I want to convert it to ONNX. The conversion succeeds, but when I run inference with onnxruntime, I get the following output:

```
['output', 'value.11', 'value.7', 'onnx::Split_1073'] ['images']

outputs is [array([], shape=(0, 4), dtype=float32), array([], dtype=int64), array([], dtype=float32), array([ 800, 1067], dtype=int64)]
```

Can you help me?

In case this helps: I found something weird in detectron2 with the operator onnx::Split. Despite the fix to split for variable sizes, there is a problem when the size is 1: I had to avoid calling split in that case and just return the original structure before the split, wrapped in []. For example:

  • before: `return probs.split(num_inst_per_image, dim=0)`

  • after: `return [probs]`

I did not understand why, but it fixed my problem.

Note: better to update to the latest detectron2 (0.6) and latest PyTorch (1.13 and 2.0 both work).

So now I can infer with the onnxruntime CPU and CUDA EPs, but not the TensorRT EP. With the TensorRT EP I am now hitting an issue with the operator onnx::ReduceMax; see https://github.com/pytorch/pytorch/issues/97344

Do you get the same output with detectron2 and ORT? And can you run inference on CPU/GPU?

datinje commented 1 year ago

Yes, the results are the same (to 10e-3 or better) on CPU, which validates the model. I don't know yet for the GPU, as the TensorRT EP does not work.
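
The validation itself can be as simple as comparing the eager and onnxruntime outputs with a tolerance (a sketch; torch_outs/ort_outs are placeholders for your two result lists, in the same order):

```python
import numpy as np

def check_close(torch_outs, ort_outs, rtol=1e-3, atol=1e-3):
    """torch_outs: tensors from the eager detectron2 model (flattened);
    ort_outs: arrays from InferenceSession.run, in the same order."""
    for t, o in zip(torch_outs, ort_outs):
        np.testing.assert_allclose(t.detach().cpu().numpy(), o,
                                   rtol=rtol, atol=atol)
```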

bouachalazhar commented 1 year ago

Yes, the results are the same (to 10e-3 or better) on CPU, which validates the model. I don't know yet for the GPU, as the TensorRT EP does not work.

Can you share your code? I would like to test it.

ani-mal commented 1 year ago

@thiagocrepaldi thank you for your work on detectron2-to-ONNX. I was able to run this locally and create an ONNX file for COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x.yaml. However, I am not sure which output is the mask. I was able to identify which outputs are pred_classes, pred_boxes, and scores. Looking at the ONNX model, it seems to have a mask_prob_pred, but I am confused by its shape: the ONNX model returns Nx1x14x14 (N = number of classes) for this value.7 output, while the pred_masks output in PyTorch for the same model returns an NxWxH tensor with true/false per pixel. So I am not sure whether the ONNX model is returning the mask, and if it is value.7, I don't understand why the shape differs from the PyTorch one. Do you have any insights? Thank you :)


thiagocrepaldi commented 1 year ago

Ouch, now you got me. One way to identify which output is which is the output order: the first output in PyTorch will be the first output in ONNX. So if the mask is the third output in PyTorch, then value.7 is indeed your mask.

When detectron2 models are exported to ONNX, however, they can have more outputs than in PyTorch. One reason is that detectron2 serializes some types, such as boxes and whatnot, into a format that includes not only the original data but also metadata used to deserialize it before returning it to the user. This metadata "leaks" into the ONNX representation, because the export captures exactly what it sees (the serialized data plus metadata). It is safe to ignore any extra outputs, though.

Regarding the difference in shape: there is no 1:1 mapping between PyTorch operators and ONNX operators. PyTorch ops are defined as Meta wants, while ONNX tries to provide a more generic version that fits not only PyTorch but also TensorFlow, MXNet, Caffe2, CNTK, etc. So it is possible that for an operator that outputs NxWxH in torch, ONNX outputs NxCxWxH (with C == 1 for grayscale images).
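
For instance, listing the output names and (symbolic) shapes with onnxruntime is a quick way to see what the exported graph actually returns; names like value.7 vary per export:

```python
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
# Output order matches the order of the flattened pytorch outputs.
for out in sess.get_outputs():
    print(out.name, out.shape, out.type)
```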

Does it help?

ps: IIRC, the detectron2 config files specify the inputs/outputs of the models, so they might serve as a reference for which input/output is what. Just don't quote me on that lol

ani-mal commented 1 year ago

@thiagocrepaldi thank you so much for your speedy and thorough response. I think I figured it out. The NxCxWxH output is not coming from PyTorch but rather from a custom detectron2 type, so my assumption that the PyTorch model returned a tensor of that shape was wrong. I also found a code reference in the detectron2 postprocessor showing that the expected mask output is indeed Nx1xMxM:

https://github.com/facebookresearch/detectron2/blob/d779ea63faa54fe42b9b4c280365eaafccb280d6/detectron2/modeling/postprocessing.py#L60-L68

So I just need to implement the postprocessor on my side! Yes, your response helps and makes me less worried about those additional outputs. :)
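
For anyone following along, here is a rough standalone version of that pasting step (simplified from the linked postprocessing code; no detectron2 dependency, and without its GPU/chunking logic):

```python
import numpy as np
import torch
import torch.nn.functional as F

def paste_mask(mask_small, box, img_h, img_w, thresh=0.5):
    """Paste one MxM mask probability map (e.g. value.7[i, 0]) into the image.

    mask_small: (M, M) float np.ndarray from the ONNX mask output;
    box: (x0, y0, x1, y1) from the box output, in image coordinates.
    """
    x0, y0, x1, y1 = (int(round(float(v))) for v in box)
    w, h = max(x1 - x0, 1), max(y1 - y0, 1)
    m = torch.from_numpy(mask_small)[None, None].float()       # (1,1,M,M)
    m = F.interpolate(m, size=(h, w), mode="bilinear",
                      align_corners=False)[0, 0].numpy() >= thresh
    full = np.zeros((img_h, img_w), dtype=bool)
    # clip the box to the image bounds before writing the resized mask
    x0c, y0c = max(x0, 0), max(y0, 0)
    x1c, y1c = min(x0 + w, img_w), min(y0 + h, img_h)
    full[y0c:y1c, x0c:x1c] = m[y0c - y0:y1c - y0, x0c - x0:x1c - x0]
    return full
```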

Aaronponceuv commented 11 months ago

Hi. Thanks for your reply. I actually figured it out. It works well with both tracing and scripting, with PyTorch inference. It took a bit of work to reshape the 28x28 matrix, but it's working perfectly now.

Hi @njaouen! I am having trouble changing the output from 28x28. It would help me a lot to know how you did it; thank you in advance.