Hi Pals,
I tried to convert the trained .pth model to ONNX format, but encountered several issues. The code segment is as follows:
model = Trainer.build_model(cfg)
state = torch.load(cfg.MODEL.WEIGHTS, map_location=lambda storage, loc: storage)
model.load_state_dict(state['model'])
model.eval()
model.cuda()
# the input size must be divisible by 32
dummy_input = torch.randn(1, 3, 448, 448).to("cuda")
torch.onnx.export(model, dummy_input, "model.onnx", verbose=True,
                  input_names=['image'], output_names=['pred'])
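As a side note on the 32-divisibility constraint, here is a small helper (my own, not part of detectron2) that rounds a spatial dimension up to the nearest valid size before building the dummy input:

```python
def round_up(size, divisor=32):
    """Round `size` up to the nearest multiple of `divisor`.

    Useful for picking a dummy-input resolution that satisfies the
    32-divisibility requirement of the FPN backbone.
    """
    return ((size + divisor - 1) // divisor) * divisor
```

For example, `round_up(448)` stays 448 (already a multiple of 32), while `round_up(450)` becomes 480.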
The config file used is centermask_lite_V_19_slim_dw_eSE_FPN_ms_4x.yaml.
a. The input to the network, whose meta-architecture is an R-CNN, is a list of dictionaries, each composed of the input image data and some other attributes. There seems to be no way to pass a similar input when exporting to ONNX, so I added a member function in rcnn.py (detectron2/modeling/meta_arch/rcnn.py) that accepts such an input to solve this issue.
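An alternative to modifying rcnn.py is a thin wrapper module that adapts the plain tensor `torch.onnx.export` provides into the list-of-dicts input the meta-architecture expects. This is a hedged sketch under my own assumptions about the input format (the dict keys and the wrapped model are hypothetical, not taken from the post):

```python
import torch


class ExportWrapper(torch.nn.Module):
    """Hypothetical adapter for ONNX export.

    Wraps a model that expects a list of dicts (as detectron2's
    GeneralizedRCNN does) so that it can be called with a single
    batched image tensor, which is what torch.onnx.export passes.
    """

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, image):
        # Rebuild the list-of-dicts input from the plain NCHW tensor.
        inputs = [{"image": image[0],
                   "height": image.shape[2],
                   "width": image.shape[3]}]
        return self.model(inputs)
```

One could then call `torch.onnx.export(ExportWrapper(model), dummy_input, ...)` without touching the library source, assuming the traced control flow inside the model cooperates.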
b. After the above change, the following error is raised during conversion:
RuntimeError: Failed to export an ONNX attribute 'onnx::Sub', since it's not constant, please try to make things (e.g., kernel size) static if possible
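For what it's worth, this class of error typically appears when a value that ONNX requires to be a constant attribute (a kernel size, padding amount, etc.) is still a traced Tensor at export time. A common workaround pattern is to cast the offending shape values to Python ints, which makes the attribute static (at the cost of baking in the input size). A minimal sketch, assuming the dynamic value comes from a shape-dependent padding computation (the function and its name are mine, not from CenterMask):

```python
import torch
import torch.nn.functional as F


def pad_to_multiple(x, divisor=32):
    """Pad an NCHW tensor so H and W are multiples of `divisor`.

    Casting the shape values to Python ints below keeps the padding
    amounts constant during tracing; leaving them as Tensors is the
    kind of thing that can trigger "Failed to export an ONNX attribute
    ..., since it's not constant" during export.
    """
    h, w = int(x.shape[2]), int(x.shape[3])
    pad_h = (divisor - h % divisor) % divisor
    pad_w = (divisor - w % divisor) % divisor
    # F.pad's last-two-dims order is (left, right, top, bottom).
    return F.pad(x, (0, pad_w, 0, pad_h))
```

Hunting for similar shape-derived Tensors in the model's forward path and making them static ints is one plausible direction, though I cannot confirm that is the exact cause here.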
I cannot find any clues to this error by googling. Could anyone kindly help? Thanks.