AILab-CVC / YOLO-World

[CVPR 2024] Real-Time Open-Vocabulary Object Detection
https://www.yoloworld.cc
GNU General Public License v3.0

ONNX export issue #161

Open 997897336 opened 7 months ago

997897336 commented 7 months ago

root@zmj:/build/YOLO-World# python deploy/export_onnx.py configs/pretrain/yolo_world_x_dual_vlpan_l2norm_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py pretrained_models/yolo_world_x_clip_base_dual_vlpan_2e-3adamw_32xb16_100e_o365_goldg_cc3mlite_train_pretrained-8cf6b025.pth --custom-text data/texts/obj365v1_class_texts.json
Export ONNX with bbox decoder and NMS ...
{'type': 'ImagePoolingAttentionModule', 'embed_channels': 256, 'num_heads': 8, 'image_channels': [320, 640, 640], 'text_channels': 512, 'num_feats': 3}
Loads checkpoint by local backend from path: pretrained_models/yolo_world_x_clip_base_dual_vlpan_2e-3adamw_32xb16_100e_o365_goldg_cc3mlite_train_pretrained-8cf6b025.pth
Traceback (most recent call last):
  File "deploy/export_onnx.py", line 169, in <module>
    main()
  File "deploy/export_onnx.py", line 135, in main
    torch.onnx.export(
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/__init__.py", line 305, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 118, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 719, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 503, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 232, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/__init__.py", line 354, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1057, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1011, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/symbolic_registry.py", line 129, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator einsum to ONNX opset version 11 is not supported. Support for this operator was added in version 12, try exporting with this version.
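
For context on the root cause: torch.einsum only has an ONNX symbolic from opset 12 onward, so exporting at the default opset 11 fails on the einsum used by the attention module. Below is a minimal standalone sketch (hypothetical, not from the repo; EinsumAttention is a made-up stand-in) that reproduces the same behavior under the PyTorch 1.x shown in the traceback:

# Minimal repro sketch: einsum has no ONNX symbolic at opset 11, only at opset >= 12.
import io
import torch
import torch.nn as nn

class EinsumAttention(nn.Module):
    # Hypothetical stand-in for the einsum-based attention used during export.
    def forward(self, q, k):
        return torch.einsum('bnc,bmc->bnm', q, k)

model = EinsumAttention().eval()
q = torch.randn(1, 4, 8)
k = torch.randn(1, 6, 8)

for opset in (11, 12):
    try:
        # Export to an in-memory buffer; only the opset version changes.
        torch.onnx.export(model, (q, k), io.BytesIO(), opset_version=opset)
        print(f'opset {opset}: export succeeded')
    except RuntimeError as e:
        print(f'opset {opset}: {e}')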

taofuyu commented 7 months ago

Isn't the error message clear enough? 囧 Use --opset 12.
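
In other words, re-run the same export command with the flag suggested above appended (assuming --opset is the script's argument for the ONNX opset version, as implied in this thread):

python deploy/export_onnx.py \
    configs/pretrain/yolo_world_x_dual_vlpan_l2norm_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py \
    pretrained_models/yolo_world_x_clip_base_dual_vlpan_2e-3adamw_32xb16_100e_o365_goldg_cc3mlite_train_pretrained-8cf6b025.pth \
    --custom-text data/texts/obj365v1_class_texts.json \
    --opset 12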

wondervictor commented 7 months ago

Please check out docs/deploy.