When I try to run the standard caffe2 export script, I get an error:
(detectron_env_2) sal9000@sal9000-XPS-13-9370:~/Sources/detectron2/tools/deploy$ ./caffe2_converter_guitars.py --config-file /home/sal9000/Sources/detectron2/projects/vovnet-detectron2/checkpoints/MRCN-V2-19-FPNLite-3x/config.yaml --output ./caffe2_model_guitars_lite --run-eval MODEL.WEIGHTS /home/sal9000/Sources/detectron2/projects/vovnet-detectron2/checkpoints/MRCN-V2-19-FPNLite-3x/model_final.pth MODEL.DEVICE cpu
[05/17 15:15:55 detectron2]: Command line arguments: Namespace(config_file='/home/sal9000/Sources/detectron2/projects/vovnet-detectron2/checkpoints/MRCN-V2-19-FPNLite-3x/config.yaml', format='caffe2', opts=['MODEL.WEIGHTS', '/home/sal9000/Sources/detectron2/projects/vovnet-detectron2/checkpoints/MRCN-V2-19-FPNLite-3x/model_final.pth', 'MODEL.DEVICE', 'cpu'], output='./caffe2_model_guitars_lite', run_eval=True)
Traceback (most recent call last):
  File "./caffe2_converter_guitars.py", line 81, in <module>
    torch_model = build_model(cfg)
  File "/home/sal9000/Sources/detectron2/detectron2/modeling/meta_arch/build.py", line 21, in build_model
    model = META_ARCH_REGISTRY.get(meta_arch)(cfg)
  File "/home/sal9000/Sources/detectron2/detectron2/modeling/meta_arch/rcnn.py", line 32, in __init__
    self.backbone = build_backbone(cfg)
  File "/home/sal9000/Sources/detectron2/detectron2/modeling/backbone/build.py", line 31, in build_backbone
    backbone = BACKBONE_REGISTRY.get(backbone_name)(cfg, input_shape)
  File "/home/sal9000/virtualenvs/detectron_env_2/lib/python3.6/site-packages/fvcore/common/registry.py", line 70, in get
    "No object named '{}' found in '{}' registry!".format(name, self._name)
KeyError: "No object named 'build_vovnet_fpn_backbone' found in 'BACKBONE' registry!"
I had already added a call to add_vovnet_config(cfg), which fixed an earlier error, but I'm not sure how to resolve this missing-backbone error.
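If it helps, here is my understanding of why this error happens, as a minimal pure-Python sketch of the registry pattern (this is illustrative only, not detectron2's or fvcore's actual code): the @BACKBONE_REGISTRY.register() decorator only runs when the module defining the backbone is imported, so I suspect the converter script never imports the vovnet module.

```python
# Minimal sketch of the registry pattern used by detectron2/fvcore
# (illustrative only, not the real implementation).
class Registry:
    def __init__(self, name):
        self._name = name
        self._obj_map = {}

    def register(self):
        # Used as a decorator: @REGISTRY.register()
        # It runs at import time of the module that defines the function.
        def deco(fn):
            self._obj_map[fn.__name__] = fn
            return fn
        return deco

    def get(self, name):
        if name not in self._obj_map:
            raise KeyError(
                "No object named '{}' found in '{}' registry!".format(name, self._name)
            )
        return self._obj_map[name]


BACKBONE_REGISTRY = Registry("BACKBONE")

# Looking up the backbone before the defining module has been imported fails,
# which matches the KeyError in my traceback:
try:
    BACKBONE_REGISTRY.get("build_vovnet_fpn_backbone")
except KeyError as e:
    print(e)

# Once the defining module is imported, the decorator populates the registry.
# A fake stand-in backbone builder, to simulate what importing vovnet would do:
@BACKBONE_REGISTRY.register()
def build_vovnet_fpn_backbone(cfg, input_shape):
    return "fake-backbone"

print(BACKBONE_REGISTRY.get("build_vovnet_fpn_backbone")("cfg", None))
```

So my guess is that, in addition to calling add_vovnet_config(cfg), the converter script also needs an import of the module that defines build_vovnet_fpn_backbone so the decorator actually executes — but I'm not sure which import that should be for the vovnet-detectron2 project.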
P.S. Which is the fastest backbone for CPU inference? Eventually I'd like to try running this model on a mobile device.