Open SyGoing opened 3 years ago
when converting the CenterNet-FPN_R50_1x:

[03/20 10:56:30 d2.data.common]: Serializing 5000 elements to byte tensors and concatenating them all ...
[03/20 10:56:30 d2.data.common]: Serialized dataset takes 19.10 MiB
Traceback (most recent call last):
  File "E:/Machine_learning/CenterNet2/projects/CenterNet2/export_model.py", line 188, in <module>
    exported_model = export_caffe2_tracing(cfg, torch_model, first_batch)
  File "E:/Machine_learning/CenterNet2/projects/CenterNet2/export_model.py", line 58, in export_caffe2_tracing
    tracer = Caffe2Tracer(cfg, torch_model, inputs)
  File "e:\machine_learning\centernet2\detectron2\export\api.py", line 87, in __init__
    C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[cfg.MODEL.META_ARCHITECTURE]
KeyError: 'CenterNetDetector'
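For context on why this fails: Caffe2Tracer resolves the model's META_ARCHITECTURE through a fixed dictionary that only covers detectron2's built-in architectures, so a custom one like "CenterNetDetector" raises a KeyError before tracing even starts. Below is a minimal standalone sketch of that lookup. The names mirror detectron2's export module, but the dictionary contents here are illustrative placeholders, not the real library code:

```python
# Hypothetical stand-in for detectron2's export-type mapping; the real map
# holds wrapper classes for the built-in meta architectures only.
META_ARCH_CAFFE2_EXPORT_TYPE_MAP = {
    "GeneralizedRCNN": "Caffe2GeneralizedRCNN",
    "RetinaNet": "Caffe2RetinaNet",
}

def lookup_export_type(meta_architecture: str) -> str:
    """Mimic the mapping lookup that Caffe2Tracer performs on construction."""
    try:
        return META_ARCH_CAFFE2_EXPORT_TYPE_MAP[meta_architecture]
    except KeyError:
        # A custom meta architecture (e.g. "CenterNetDetector") has no
        # Caffe2 export wrapper registered, so the lookup fails here.
        raise KeyError(
            f"{meta_architecture!r} has no Caffe2 export wrapper; "
            "Caffe2Tracer only supports detectron2's built-in architectures"
        )
```

This is why pointing Caffe2Tracer at a CenterNet2 config cannot work as-is: the fix would have to register an export wrapper for the custom architecture or use a different export path entirely.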
@SyGoing Hi, have you successfully converted CenterNet2 to ONNX? If yes, how? Thanks!
@SyGoing @lucky-xu-1994 Check this out it could be useful: https://github.com/xingyizhou/CenterNet2/tree/73ff02f2967a87ab4877898e7d6207cde439a8c5/detectron2/export
I am also trying to export a CenterNet2 model to do inference with TensorRT
That link leads to a blank page.
Since the model is trained to be deployed in an application, is there any difference between detectron2's default model export and that of CenterNet2 or CenterNet*?