Open shuangyichen opened 4 years ago
@blueardour Hi,
Can onnx/export_model_to_onnx.py work for ABCNet? Looking forward to your reply. Thank you!
Thanks for reporting the requirement. I will put some effort into the conversion.
@blueardour Hi, I am trying to do the conversion myself because my assignment is very urgent, and I am following your work. Could you please give me some hints or explain your pipeline for exporting ONNX? Thank you!
Hi, @shuangyichen
For the ONNX converting script, one might consider the following aspects:
Example (model printout, truncated):

```
OneStageDetector(
  (backbone): FPN(
    (fpn_lateral3): Conv2d(
      512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
      (norm): NaiveSyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (quant_activation): quantization()
      (quant_weight): quantization()
    )
    (fpn_output3): Conv2d(
      256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
      (norm): NaiveSyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (quant_activation): quantization()
      (quant_weight): quantization()
    )
    (fpn_lateral4): Conv2d(
      1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
      (norm): NaiveSyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (quant_activation): quantization()
      (quant_weight): quantization()
    )
    (fpn_output4): Conv2d(
      256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
      (norm): NaiveSyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
```
Just an update on some progress.
As GroupNorm is not well supported in ONNX, I'm training a full-BN version of ABCNet.
In the meantime, I am developing the ONNX conversion code based on early checkpoints saved from that training.
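The GroupNorm-to-BatchNorm swap described above can be sketched as a recursive module replacement. This is only an illustrative example (the helper name and the toy model are not from ABCNet), and note that the swapped model needs retraining, which is why a full-BN version is being trained from scratch:

```python
# Sketch: recursively replace every nn.GroupNorm with nn.BatchNorm2d so the
# network only contains ONNX-friendly normalization ops. The resulting model
# must be retrained (or fine-tuned); the BN statistics start uninitialized.
import torch.nn as nn

def replace_groupnorm_with_bn(module: nn.Module) -> None:
    for name, child in module.named_children():
        if isinstance(child, nn.GroupNorm):
            # GroupNorm normalizes over channel groups; BatchNorm2d normalizes
            # per channel over the batch, so num_channels maps to num_features.
            setattr(module, name, nn.BatchNorm2d(child.num_channels))
        else:
            replace_groupnorm_with_bn(child)

# Toy model standing in for the real network:
model = nn.Sequential(nn.Conv2d(3, 32, 3), nn.GroupNorm(8, 32), nn.ReLU())
replace_groupnorm_with_bn(model)
```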
Thank you for your contribution!
Hello @blueardour, do you know how to handle BezierAlign before torch.onnx.export? Looking forward to your reply.
Hi, @shuangyichen
From my perspective, I suggest not exporting BezierAlign to ONNX. Instead, export only the parts built from standard operators such as conv/fc/bn/relu/interp. When running inference on an embedded platform or TensorRT, one can re-implement BezierAlign on the dedicated platform.
@blueardour So we just use BezierAlign as the boundary dividing the network into two parts, simply skip BezierAlign, and convert the two parts to ONNX separately?
Yes. Though it involves some extra effort, it provides a feasible way.
@shuangyichen Hello, did you manage to write the script to convert the ABCNet model to ONNX?
@blueardour Hello, did you work on the script to convert the ABCNet model to ONNX?
@blueardour Were you able to get the conversion working? Would really love to see how well it works after deployment. Thanks! :)
Hi @blueardour, I'm in the same situation. Do you have any feedback about converting the model to ONNX?
@shuangyichen @blueardour Could you share the current progress on converting ABCNet to ONNX?
Is there any way to export the model to a Caffe2 model, e.g. using detectron2's caffe2_converter.py tool?