aim-uofa / AdelaiDet

AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.
https://git.io/AdelaiDet

convert onnx or TensorRT model #31

Open y200504040u opened 4 years ago

y200504040u commented 4 years ago

Hi~ @tianzhi0549

I trained FCOS with a VoVNet-39 backbone on the CrowdHuman dataset and got a decent result. Now I want to convert my PyTorch model to an ONNX model or a TensorRT model. I read the detectron2 documentation: https://detectron2.readthedocs.io/tutorials/deployment.html

However, it seems that detectron2 only provides export support for three meta-architectures (GeneralizedRCNN, PanopticFPN, RetinaNet). Have you done similar work before? Could you provide some guidance please?

tianzhi0549 commented 4 years ago

@y200504040u We are working on it. Before we release our code, I would suggest following detectron2's guidance for exporting RetinaNet, which should be very similar.
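For reference, a minimal sketch of that RetinaNet export path. The API names (Caffe2Tracer, export_onnx) come from the detectron2 deployment tutorial of that era and should be verified against your installed version; an FCOS model would need a similar but adapted pipeline, since it is not one of the supported meta-architectures.

```python
# Minimal sketch of detectron2's documented RetinaNet ONNX export.
# Caffe2Tracer / export_onnx are the names used in the deployment tutorial of that
# era; check your detectron2 version. Paths and the sample image are placeholders.
import onnx
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.data.detection_utils import read_image
from detectron2.export import Caffe2Tracer
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/retinanet_R_50_FPN_3x.yaml")

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# One sample input in detectron2's standard list-of-dict format (CHW tensor).
img = read_image("sample.jpg", format="BGR")
inputs = [{"image": torch.as_tensor(img.transpose(2, 0, 1).astype("float32"))}]

onnx_model = Caffe2Tracer(cfg, model, inputs).export_onnx()
onnx.save(onnx_model, "retinanet_r50.onnx")
```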

y200504040u commented 4 years ago

> @y200504040u We are working on it. Before we release our code, I would suggest following detectron2's guidance for exporting RetinaNet, which should be very similar.

OK. Thank you for your advice.

lucasjinreal commented 4 years ago

@tianzhi0549 Where is the documentation for converting RetinaNet to ONNX?

tianzhi0549 commented 4 years ago

@jinfagang https://detectron2.readthedocs.io/tutorials/deployment.html.

blueardour commented 4 years ago

Hi, I submitted several patches/demos for converting models to ONNX/Caffe/NCNN. Verification code (onnxruntime and ncnn) should be available in the newest repo. The revisions are mainly in the onnx subfolder. Feel free to give feedback. Thanks.

BTW, whether it is RetinaNet, FCOS, or another model, it is advisable to remove the group norm in the head, because GN is still unsupported (or poorly supported) in many frameworks. Besides, when I replaced the GN with separate BN (no BN sharing across levels), the performance improved consistently on various kinds of models.
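A rough sketch of what such a swap could look like as a plain PyTorch module replacement. This is not an AdelaiDet API, and the head attribute name is only a guess; after the swap the BN statistics are uninitialized, so the model would need fine-tuning (or retraining with a BN head from the start, as suggested above).

```python
# Generic sketch: replace every GroupNorm in a (sub)module with BatchNorm2d.
# Not part of AdelaiDet; BN layers created here start with fresh statistics and
# therefore require fine-tuning or retraining before the model is usable.
import torch.nn as nn

def replace_gn_with_bn(module: nn.Module) -> nn.Module:
    for name, child in module.named_children():
        if isinstance(child, nn.GroupNorm):
            setattr(module, name, nn.BatchNorm2d(child.num_channels))
        else:
            replace_gn_with_bn(child)
    return module

# Example: only touch the detection head so the backbone norms stay as trained.
# The attribute path below is illustrative and may differ in your build:
# replace_gn_with_bn(model.proposal_generator.fcos_head)
```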

lucasjinreal commented 4 years ago

@blueardour Did you convert BlendMask to ONNX?

blueardour commented 4 years ago

Not yet; I need a BN-head model before the conversion.

My own trained BN-head model is based on the author's original repo, which is slightly different from this one. I currently have no time to port the files from the old repo to this one.

See https://github.com/aim-uofa/AdelaiDet/issues/43

For RetinaNet, which has no normalization (group norm) in the head, the scripts in the onnx folder might be able to handle it directly.

blueardour commented 4 years ago

New test scripts submitted. Refer to https://github.com/aim-uofa/AdelaiDet/issues/43 for the history.

tengerye commented 4 years ago

@blueardour @jinfagang @tianzhi0549 Hi guys, sorry to interrupt. The current ONNX code (export_model_to_onnx.py and test_onnxruntime.py) suggests that extra, heavy pre-processing and post-processing are required for ONNX inference. Am I right?

blueardour commented 4 years ago

@tengerye The extra operations only include subtracting the mean value in pre-processing and NMS in post-processing. The former can be avoided if you train the model without that operation. As for NMS, it seems to be a common step in object detection algorithms.

There has been an effort to add an NMS operator to ONNX (https://github.com/onnx/onnx/pull/2193). However, I am not sure whether TensorRT has integrated it. If not, we may have to leave it as a post-processing step.
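To make the two extra steps concrete, here is a hedged sketch of what they amount to around an onnxruntime session. The pixel-mean values are detectron2's defaults, and the output layout (boxes first, scores second) is only an assumption; check your config and what export_model_to_onnx.py actually produces.

```python
# Sketch of the two extra steps discussed above: mean subtraction before the ONNX
# session and NMS after it. Pixel mean and the (boxes, scores) output layout are
# assumptions; adjust to your exported model and config.
import cv2
import numpy as np
import onnxruntime as ort
import torch
from torchvision.ops import nms

PIXEL_MEAN = np.array([103.530, 116.280, 123.675], dtype=np.float32)  # BGR, detectron2 default

def preprocess(img_bgr: np.ndarray) -> np.ndarray:
    x = img_bgr.astype(np.float32) - PIXEL_MEAN          # mean subtraction
    return x.transpose(2, 0, 1)[None]                    # HWC -> NCHW

image = cv2.imread("sample.jpg")                         # BGR uint8 HxWx3, placeholder path
sess = ort.InferenceSession("fcos_vovnet39.onnx")        # placeholder model path
outputs = sess.run(None, {sess.get_inputs()[0].name: preprocess(image)})
boxes, scores = outputs[0], outputs[1]                   # assumed output order

# Post-processing: class-agnostic NMS as a stand-in for the full decoding step.
keep = nms(torch.from_numpy(boxes).float(), torch.from_numpy(scores).float(), iou_threshold=0.6)
final_boxes, final_scores = boxes[keep.numpy()], scores[keep.numpy()]
```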

tengerye commented 4 years ago

@blueardour Hi, thank you for your kind reply. But the original inference includes operations like ResizeShortestEdge; I am not sure whether that is already included in the ONNX code?

blueardour commented 4 years ago

In the ONNX code, the resolution is fixed (by specifying the width and height options). If the resolution of the target image is known in advance, just generate the ONNX model for that width/height; in that case, no ResizeShortestEdge is required. Otherwise, if you want to process images of different sizes with the same model, you will need to add image resizing as a pre-processing step.
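A small sketch of that resize step for a fixed-resolution model. The 1088x800 values are only placeholders for whatever width/height options were used at export time.

```python
# Sketch: resize an arbitrary image to the fixed resolution the ONNX model was
# exported with. EXPORT_W / EXPORT_H are placeholders; remember to rescale the
# predicted boxes by the inverse factors afterwards.
import cv2

EXPORT_W, EXPORT_H = 1088, 800  # placeholder export resolution

def resize_to_export_size(img_bgr):
    h, w = img_bgr.shape[:2]
    scale_x, scale_y = EXPORT_W / w, EXPORT_H / h
    resized = cv2.resize(img_bgr, (EXPORT_W, EXPORT_H))
    return resized, (scale_x, scale_y)
```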

tengerye commented 4 years ago

Hi @blueardour, thank you for your kind reply again. The model assumes the width is larger than the height. What if the width is smaller than the height in the input image? Should I keep the aspect ratio and pad, or just resize the input image?

blueardour commented 4 years ago

The ONNX conversion itself does not seem to impose that assumption. If the limitation comes from the original project, it is better to solve it from the model-design perspective, for example by adding the corresponding operation to the original model.
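For the "keep the ratio and pad" option asked about above, a purely illustrative sketch (not part of the repo's onnx scripts); the zero padding and top-left placement are arbitrary choices, and the export resolution is a placeholder.

```python
# Illustrative letterbox-style pre-processing: keep the aspect ratio, then pad the
# image up to the fixed export resolution. Not part of AdelaiDet's onnx scripts.
import cv2
import numpy as np

def letterbox(img_bgr, export_w=1088, export_h=800):
    h, w = img_bgr.shape[:2]
    scale = min(export_w / w, export_h / h)            # keep aspect ratio
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(img_bgr, (new_w, new_h))
    canvas = np.zeros((export_h, export_w, 3), dtype=img_bgr.dtype)
    canvas[:new_h, :new_w] = resized                   # pad bottom/right with zeros
    return canvas, scale
```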