open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Mask2Former instance segmentation model fails to convert to ONNX #2569

Closed oracle0101 closed 6 months ago

oracle0101 commented 7 months ago


Describe the bug

Hello. The following error occurred when I converted a trained Mask2Former checkpoint (.pth) to ONNX. I would like to know how to solve it.

Reproduction

python tools/deploy.py \
    configs/mmdet/instance-seg/instance-seg_onnxruntime_dynamic.py \
    /root/mmdetection/configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco.py \
    /root/autodl-tmp/model_0019999.pth \
    /root/autodl-tmp/code/mmdeploy/demo/resources/det.jpg \
    --work-dir mmdeploy_models/mmdet/ort \
    --device cuda:0 \
    --show

Environment

Python                    3.8.10
torch                     1.10.0+cu113             
torchvision               0.11.1+cu113
mmdeploy                  1.3.0
MMdetection               3.1.0
onnx                      1.15.0                   
onnxruntime               1.8.1                    
opencv-python             4.8.0.74

Error traceback

(base) root@autodl-container-f87d1190ac-b0373101:~/autodl-tmp/code/mmdeploy# python tools/deploy.py \
> configs/mmdet/instance-seg/instance-seg_onnxruntime_dynamic.py \
> /root/mmdetection/configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco.py \
> /root/autodl-tmp/model_0019999.pth \
> /root/autodl-tmp/code/mmdeploy/demo/resources/det.jpg \
> --work-dir mmdeploy_models/mmdet/ort \
> --device cuda:0 \
> --show 
11/25 16:53:26 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
11/25 16:53:27 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
11/25 16:53:27 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: /root/autodl-tmp/model_0019999.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: model, trainer, iteration

missing keys in source state_dict: backbone.conv1.weight, backbone.bn1.weight, backbone.bn1.bias, backbone.bn1.running_mean, backbone.bn1.running_var, backbone.layer1.0.conv1.weight, backbone.layer1.0.bn1.weight, backbone.layer1.0.bn1.bias, backbone.layer1.0.bn1.running_mean, backbone.layer1.0.bn1.running_var, backbone.layer1.0.conv2.weight, backbone.layer1.0.bn2.weight, backbone.layer1.0.bn2.bias, backbone.layer1.0.bn2.running_mean, backbone.layer1.0.bn2.running_var, backbone.layer1.0.conv3.weight, backbone.layer1.0.bn3.weight, backbone.layer1.0.bn3.bias, backbone.layer1.0.bn3.running_mean, backbone.layer1.0.bn3.running_var, backbone.layer1.0.downsample.0.weight, backbone.layer1.0.downsample.1.weight, backbone.layer1.0.downsample.1.bias, backbone.layer1.0.downsample.1.running_mean, backbone.layer1.0.downsample.1.running_var, backbone.layer1.1.conv1.weight, backbone.layer1.1.bn1.weight, backbone.layer1.1.bn1.bias, backbone.layer1.1.bn1.running_mean, backbone.layer1.1.bn1.running_var, backbone.layer1.1.conv2.weight, backbone.layer1.1.bn2.weight, backbone.layer1.1.bn2.bias, backbone.layer1.1.bn2.running_mean, backbone.layer1.1.bn2.running_var, backbone.layer1.1.conv3.weight, backbone.layer1.1.bn3.weight, backbone.layer1.1.bn3.bias, backbone.layer1.1.bn3.running_mean, backbone.layer1.1.bn3.running_var, backbone.layer1.2.conv1.weight, backbone.layer1.2.bn1.weight, backbone.layer1.2.bn1.bias, backbone.layer1.2.bn1.running_mean, backbone.layer1.2.bn1.running_var, backbone.layer1.2.conv2.weight, backbone.layer1.2.bn2.weight, backbone.layer1.2.bn2.bias, backbone.layer1.2.bn2.running_mean, backbone.layer1.2.bn2.running_var, backbone.layer1.2.conv3.weight, backbone.layer1.2.bn3.weight, backbone.layer1.2.bn3.bias, backbone.layer1.2.bn3.running_mean, backbone.layer1.2.bn3.running_var, backbone.layer2.0.conv1.weight, backbone.layer2.0.bn1.weight, backbone.layer2.0.bn1.bias, backbone.layer2.0.bn1.running_mean, backbone.layer2.0.bn1.running_var, backbone.layer2.0.conv2.weight, 
backbone.layer2.0.bn2.weight, backbone.layer2.0.bn2.bias, backbone.layer2.0.bn2.running_mean, backbone.layer2.0.bn2.running_var, backbone.layer2.0.conv3.weight, backbone.layer2.0.bn3.weight, backbone.layer2.0.bn3.bias, backbone.layer2.0.bn3.running_mean, backbone.layer2.0.bn3.running_var, backbone.layer2.0.downsample.0.weight, backbone.layer2.0.downsample.1.weight, backbone.layer2.0.downsample.1.bias, backbone.layer2.0.downsample.1.running_mean, backbone.layer2.0.downsample.1.running_var, backbone.layer2.1.conv1.weight, backbone.layer2.1.bn1.weight, backbone.layer2.1.bn1.bias, backbone.layer2.1.bn1.running_mean, backbone.layer2.1.bn1.running_var, backbone.layer2.1.conv2.weight, backbone.layer2.1.bn2.weight, backbone.layer2.1.bn2.bias, backbone.layer2.1.bn2.running_mean, backbone.layer2.1.bn2.running_var, backbone.layer2.1.conv3.weight, backbone.layer2.1.bn3.weight, backbone.layer2.1.bn3.bias, backbone.layer2.1.bn3.running_mean, backbone.layer2.1.bn3.running_var, backbone.layer2.2.conv1.weight, backbone.layer2.2.bn1.weight, backbone.layer2.2.bn1.bias, backbone.layer2.2.bn1.running_mean, backbone.layer2.2.bn1.running_var, backbone.layer2.2.conv2.weight, backbone.layer2.2.bn2.weight, backbone.layer2.2.bn2.bias, backbone.layer2.2.bn2.running_mean, backbone.layer2.2.bn2.running_var, backbone.layer2.2.conv3.weight, backbone.layer2.2.bn3.weight, backbone.layer2.2.bn3.bias, backbone.layer2.2.bn3.running_mean, backbone.layer2.2.bn3.running_var, backbone.layer2.3.conv1.weight, backbone.layer2.3.bn1.weight, backbone.layer2.3.bn1.bias, backbone.layer2.3.bn1.running_mean, backbone.layer2.3.bn1.running_var, backbone.layer2.3.conv2.weight, backbone.layer2.3.bn2.weight, backbone.layer2.3.bn2.bias, backbone.layer2.3.bn2.running_mean, backbone.layer2.3.bn2.running_var, backbone.layer2.3.conv3.weight, backbone.layer2.3.bn3.weight, backbone.layer2.3.bn3.bias, backbone.layer2.3.bn3.running_mean, backbone.layer2.3.bn3.running_var, backbone.layer3.0.conv1.weight, 
backbone.layer3.0.bn1.weight, backbone.layer3.0.bn1.bias, backbone.layer3.0.bn1.running_mean, backbone.layer3.0.bn1.running_var, backbone.layer3.0.conv2.weight, backbone.layer3.0.bn2.weight, backbone.layer3.0.bn2.bias, backbone.layer3.0.bn2.running_mean, backbone.layer3.0.bn2.running_var, backbone.layer3.0.conv3.weight, backbone.layer3.0.bn3.weight, backbone.layer3.0.bn3.bias, backbone.layer3.0.bn3.running_mean, backbone.layer3.0.bn3.running_var, backbone.layer3.0.downsample.0.weight, backbone.layer3.0.downsample.1.weight, backbone.layer3.0.downsample.1.bias, backbone.layer3.0.downsample.1.running_mean, backbone.layer3.0.downsample.1.running_var, backbone.layer3.1.conv1.weight, backbone.layer3.1.bn1.weight, backbone.layer3.1.bn1.bias, backbone.layer3.1.bn1.running_mean, backbone.layer3.1.bn1.running_var, backbone.layer3.1.conv2.weight, backbone.layer3.1.bn2.weight, backbone.layer3.1.bn2.bias, backbone.layer3.1.bn2.running_mean, backbone.layer3.1.bn2.running_var, backbone.layer3.1.conv3.weight, backbone.layer3.1.bn3.weight, backbone.layer3.1.bn3.bias, backbone.layer3.1.bn3.running_mean, backbone.layer3.1.bn3.running_var, backbone.layer3.2.conv1.weight, backbone.layer3.2.bn1.weight, backbone.layer3.2.bn1.bias, backbone.layer3.2.bn1.running_mean, backbone.layer3.2.bn1.running_var, backbone.layer3.2.conv2.weight, backbone.layer3.2.bn2.weight, backbone.layer3.2.bn2.bias, backbone.layer3.2.bn2.running_mean, backbone.layer3.2.bn2.running_var, backbone.layer3.2.conv3.weight, backbone.layer3.2.bn3.weight, backbone.layer3.2.bn3.bias, backbone.layer3.2.bn3.running_mean, backbone.layer3.2.bn3.running_var, backbone.layer3.3.conv1.weight, backbone.layer3.3.bn1.weight, backbone.layer3.3.bn1.bias, backbone.layer3.3.bn1.running_mean, backbone.layer3.3.bn1.running_var, backbone.layer3.3.conv2.weight, backbone.layer3.3.bn2.weight, backbone.layer3.3.bn2.bias, backbone.layer3.3.bn2.running_mean, backbone.layer3.3.bn2.running_var, backbone.layer3.3.conv3.weight, 
backbone.layer3.3.bn3.weight, backbone.layer3.3.bn3.bias, backbone.layer3.3.bn3.running_mean, backbone.layer3.3.bn3.running_var, backbone.layer3.4.conv1.weight, backbone.layer3.4.bn1.weight, backbone.layer3.4.bn1.bias, backbone.layer3.4.bn1.running_mean, backbone.layer3.4.bn1.running_var, backbone.layer3.4.conv2.weight, backbone.layer3.4.bn2.weight, backbone.layer3.4.bn2.bias, backbone.layer3.4.bn2.running_mean, backbone.layer3.4.bn2.running_var, backbone.layer3.4.conv3.weight, backbone.layer3.4.bn3.weight, backbone.layer3.4.bn3.bias, backbone.layer3.4.bn3.running_mean, backbone.layer3.4.bn3.running_var, backbone.layer3.5.conv1.weight, backbone.layer3.5.bn1.weight, backbone.layer3.5.bn1.bias, backbone.layer3.5.bn1.running_mean, backbone.layer3.5.bn1.running_var, backbone.layer3.5.conv2.weight, backbone.layer3.5.bn2.weight, backbone.layer3.5.bn2.bias, backbone.layer3.5.bn2.running_mean, backbone.layer3.5.bn2.running_var, backbone.layer3.5.conv3.weight, backbone.layer3.5.bn3.weight, backbone.layer3.5.bn3.bias, backbone.layer3.5.bn3.running_mean, backbone.layer3.5.bn3.running_var, backbone.layer4.0.conv1.weight, backbone.layer4.0.bn1.weight, backbone.layer4.0.bn1.bias, backbone.layer4.0.bn1.running_mean, backbone.layer4.0.bn1.running_var, backbone.layer4.0.conv2.weight, backbone.layer4.0.bn2.weight, backbone.layer4.0.bn2.bias, backbone.layer4.0.bn2.running_mean, backbone.layer4.0.bn2.running_var, backbone.layer4.0.conv3.weight, backbone.layer4.0.bn3.weight, backbone.layer4.0.bn3.bias, backbone.layer4.0.bn3.running_mean, backbone.layer4.0.bn3.running_var, backbone.layer4.0.downsample.0.weight, backbone.layer4.0.downsample.1.weight, backbone.layer4.0.downsample.1.bias, backbone.layer4.0.downsample.1.running_mean, backbone.layer4.0.downsample.1.running_var, backbone.layer4.1.conv1.weight, backbone.layer4.1.bn1.weight, backbone.layer4.1.bn1.bias, backbone.layer4.1.bn1.running_mean, backbone.layer4.1.bn1.running_var, backbone.layer4.1.conv2.weight, 
backbone.layer4.1.bn2.weight, backbone.layer4.1.bn2.bias, backbone.layer4.1.bn2.running_mean, backbone.layer4.1.bn2.running_var, backbone.layer4.1.conv3.weight, backbone.layer4.1.bn3.weight, backbone.layer4.1.bn3.bias, backbone.layer4.1.bn3.running_mean, backbone.layer4.1.bn3.running_var, backbone.layer4.2.conv1.weight, backbone.layer4.2.bn1.weight, backbone.layer4.2.bn1.bias, backbone.layer4.2.bn1.running_mean, backbone.layer4.2.bn1.running_var, backbone.layer4.2.conv2.weight, backbone.layer4.2.bn2.weight, backbone.layer4.2.bn2.bias, backbone.layer4.2.bn2.running_mean, backbone.layer4.2.bn2.running_var, backbone.layer4.2.conv3.weight, backbone.layer4.2.bn3.weight, backbone.layer4.2.bn3.bias, backbone.layer4.2.bn3.running_mean, backbone.layer4.2.bn3.running_var, panoptic_head.pixel_decoder.input_convs.0.conv.weight, panoptic_head.pixel_decoder.input_convs.0.conv.bias, panoptic_head.pixel_decoder.input_convs.0.gn.weight, panoptic_head.pixel_decoder.input_convs.0.gn.bias, panoptic_head.pixel_decoder.input_convs.1.conv.weight, panoptic_head.pixel_decoder.input_convs.1.conv.bias, panoptic_head.pixel_decoder.input_convs.1.gn.weight, panoptic_head.pixel_decoder.input_convs.1.gn.bias, panoptic_head.pixel_decoder.input_convs.2.conv.weight, panoptic_head.pixel_decoder.input_convs.2.conv.bias, panoptic_head.pixel_decoder.input_convs.2.gn.weight, panoptic_head.pixel_decoder.input_convs.2.gn.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.output_proj.weight, 
panoptic_head.pixel_decoder.encoder.layers.0.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.0.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.0.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.0.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.0.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.1.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.1.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.1.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.1.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.attention_weights.weight, 
panoptic_head.pixel_decoder.encoder.layers.2.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.2.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.2.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.2.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.2.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.3.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.3.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.3.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.3.norms.1.bias, 
panoptic_head.pixel_decoder.encoder.layers.4.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.4.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.4.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.4.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.4.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.1.bias, 
panoptic_head.pixel_decoder.encoder.layers.5.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.5.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.5.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.5.norms.1.bias, panoptic_head.pixel_decoder.level_encoding.weight, panoptic_head.pixel_decoder.lateral_convs.0.conv.weight, panoptic_head.pixel_decoder.lateral_convs.0.gn.weight, panoptic_head.pixel_decoder.lateral_convs.0.gn.bias, panoptic_head.pixel_decoder.output_convs.0.conv.weight, panoptic_head.pixel_decoder.output_convs.0.gn.weight, panoptic_head.pixel_decoder.output_convs.0.gn.bias, panoptic_head.pixel_decoder.mask_feature.weight, panoptic_head.pixel_decoder.mask_feature.bias, panoptic_head.transformer_decoder.layers.0.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.0.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.0.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.0.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.0.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.0.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.0.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.0.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.0.norms.0.weight, panoptic_head.transformer_decoder.layers.0.norms.0.bias, panoptic_head.transformer_decoder.layers.0.norms.1.weight, panoptic_head.transformer_decoder.layers.0.norms.1.bias, panoptic_head.transformer_decoder.layers.0.norms.2.weight, panoptic_head.transformer_decoder.layers.0.norms.2.bias, panoptic_head.transformer_decoder.layers.1.self_attn.attn.in_proj_weight, 
panoptic_head.transformer_decoder.layers.1.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.1.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.1.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.1.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.1.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.1.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.1.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.1.norms.0.weight, panoptic_head.transformer_decoder.layers.1.norms.0.bias, panoptic_head.transformer_decoder.layers.1.norms.1.weight, panoptic_head.transformer_decoder.layers.1.norms.1.bias, panoptic_head.transformer_decoder.layers.1.norms.2.weight, panoptic_head.transformer_decoder.layers.1.norms.2.bias, panoptic_head.transformer_decoder.layers.2.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.2.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.2.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.2.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.2.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.2.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.2.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.2.ffn.layers.1.bias, 
panoptic_head.transformer_decoder.layers.2.norms.0.weight, panoptic_head.transformer_decoder.layers.2.norms.0.bias, panoptic_head.transformer_decoder.layers.2.norms.1.weight, panoptic_head.transformer_decoder.layers.2.norms.1.bias, panoptic_head.transformer_decoder.layers.2.norms.2.weight, panoptic_head.transformer_decoder.layers.2.norms.2.bias, panoptic_head.transformer_decoder.layers.3.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.3.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.3.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.3.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.3.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.3.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.3.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.3.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.3.norms.0.weight, panoptic_head.transformer_decoder.layers.3.norms.0.bias, panoptic_head.transformer_decoder.layers.3.norms.1.weight, panoptic_head.transformer_decoder.layers.3.norms.1.bias, panoptic_head.transformer_decoder.layers.3.norms.2.weight, panoptic_head.transformer_decoder.layers.3.norms.2.bias, panoptic_head.transformer_decoder.layers.4.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.4.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.4.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.4.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.4.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.4.cross_attn.attn.in_proj_bias, 
panoptic_head.transformer_decoder.layers.4.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.4.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.4.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.4.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.4.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.4.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.4.norms.0.weight, panoptic_head.transformer_decoder.layers.4.norms.0.bias, panoptic_head.transformer_decoder.layers.4.norms.1.weight, panoptic_head.transformer_decoder.layers.4.norms.1.bias, panoptic_head.transformer_decoder.layers.4.norms.2.weight, panoptic_head.transformer_decoder.layers.4.norms.2.bias, panoptic_head.transformer_decoder.layers.5.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.5.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.5.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.5.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.5.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.5.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.5.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.5.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.5.norms.0.weight, panoptic_head.transformer_decoder.layers.5.norms.0.bias, panoptic_head.transformer_decoder.layers.5.norms.1.weight, panoptic_head.transformer_decoder.layers.5.norms.1.bias, panoptic_head.transformer_decoder.layers.5.norms.2.weight, panoptic_head.transformer_decoder.layers.5.norms.2.bias, 
panoptic_head.transformer_decoder.layers.6.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.6.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.6.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.6.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.6.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.6.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.6.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.6.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.6.norms.0.weight, panoptic_head.transformer_decoder.layers.6.norms.0.bias, panoptic_head.transformer_decoder.layers.6.norms.1.weight, panoptic_head.transformer_decoder.layers.6.norms.1.bias, panoptic_head.transformer_decoder.layers.6.norms.2.weight, panoptic_head.transformer_decoder.layers.6.norms.2.bias, panoptic_head.transformer_decoder.layers.7.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.7.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.7.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.7.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.7.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.7.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.7.ffn.layers.1.weight, 
panoptic_head.transformer_decoder.layers.7.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.7.norms.0.weight, panoptic_head.transformer_decoder.layers.7.norms.0.bias, panoptic_head.transformer_decoder.layers.7.norms.1.weight, panoptic_head.transformer_decoder.layers.7.norms.1.bias, panoptic_head.transformer_decoder.layers.7.norms.2.weight, panoptic_head.transformer_decoder.layers.7.norms.2.bias, panoptic_head.transformer_decoder.layers.8.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.8.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.8.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.8.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.8.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.8.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.8.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.8.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.8.norms.0.weight, panoptic_head.transformer_decoder.layers.8.norms.0.bias, panoptic_head.transformer_decoder.layers.8.norms.1.weight, panoptic_head.transformer_decoder.layers.8.norms.1.bias, panoptic_head.transformer_decoder.layers.8.norms.2.weight, panoptic_head.transformer_decoder.layers.8.norms.2.bias, panoptic_head.transformer_decoder.post_norm.weight, panoptic_head.transformer_decoder.post_norm.bias, panoptic_head.query_embed.weight, panoptic_head.query_feat.weight, panoptic_head.level_embed.weight, panoptic_head.cls_embed.weight, panoptic_head.cls_embed.bias, panoptic_head.mask_embed.0.weight, panoptic_head.mask_embed.0.bias, panoptic_head.mask_embed.2.weight, 
panoptic_head.mask_embed.2.bias, panoptic_head.mask_embed.4.weight, panoptic_head.mask_embed.4.bias

11/25 16:53:32 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future. 
11/25 16:53:32 - mmengine - INFO - Export PyTorch model to ONNX: mmdeploy_models/mmdet/ort/end2end.onnx.
11/25 16:53:32 - mmengine - WARNING - Can not find torch.nn.functional.scaled_dot_product_attention, function rewrite will not be applied
11/25 16:53:32 - mmengine - WARNING - Can not find torch._C._jit_pass_onnx_autograd_function_process, function rewrite will not be applied
/root/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py:2359: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:]))
/root/mmdetection/mmdet/models/layers/positional_encoding.py:84: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats)
/root/miniconda3/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/root/mmdetection/mmdet/models/layers/msdeformattn_pixel_decoder.py:180: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  factor = feat.new_tensor([[w, h]]) * self.strides[level_idx]
/root/mmdetection/mmdet/models/layers/msdeformattn_pixel_decoder.py:203: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  spatial_shapes = torch.as_tensor(
/root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:335: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value
/root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:351: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if reference_points.shape[-1] == 2:
/root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:136: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes],
/root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:140: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  for level, (H_, W_) in enumerate(value_spatial_shapes):
/root/mmdetection/mmdet/models/layers/msdeformattn_pixel_decoder.py:226: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  num_queries_per_level = [e[0] * e[1] for e in spatial_shapes]
WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
Process Process-2:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/root/miniconda3/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
    export(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/onnx/export.py", line 138, in export
    torch.onnx.export(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/onnx/__init__.py", line 316, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/root/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 107, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/root/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 724, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 27, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 532, in _model_to_graph
    _set_input_and_output_names(graph, input_names, output_names)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 806, in _set_input_and_output_names
    set_names(list(graph.outputs()), output_names, "output")
  File "/root/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 799, in set_names
    raise RuntimeError(
RuntimeError: number of output names provided (3) exceeded number of outputs (2)
11/25 16:54:42 - mmengine - ERROR - /root/autodl-tmp/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.
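The `RuntimeError` above comes from `torch.onnx.export` being handed more `output_names` than the traced graph actually produces: the instance-seg deploy config names three outputs, while Mask2Former's exported graph yields only two. A minimal sketch that triggers the same check (the toy module and names are illustrative, not mmdeploy code):

```python
import io

import torch


class TwoOutputs(torch.nn.Module):
    """Toy module whose traced graph has exactly two outputs, like
    Mask2Former's (class logits, mask logits) pair."""

    def forward(self, x):
        return x + 1, x * 2


# Asking the exporter to name three outputs on a two-output graph raises
# the same "number of output names provided (3) exceeded number of
# outputs (2)" RuntimeError seen in the traceback above.
try:
    torch.onnx.export(TwoOutputs(), torch.randn(1, 3), io.BytesIO(),
                      output_names=['dets', 'labels', 'masks'])
except RuntimeError as err:
    print(err)
```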
RunningLeon commented 7 months ago

Hi, Mask2Former is a special case; you should use this deploy config instead: https://github.com/open-mmlab/mmdeploy/blob/8b1958640408397d0cf98f202defa58878baf05b/tests/regression/mmdet.yml#L411
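The difference between the two deploy configs is the `output_names` list passed to the exporter: the instance-seg configs declare three outputs, while Mask2Former's exported graph produces two. A rough sketch of that difference (the exact field values are recalled from the mmdeploy configs and may not match them verbatim):

```python
# Instance-seg deploy configs name three outputs (boxes, labels, masks),
# which Mask2Former's exported graph cannot provide.
instance_seg_onnx_config = dict(output_names=['dets', 'labels', 'masks'])

# The maskformer-specific configs name only the two tensors the graph
# really produces, so the exporter's output-name check passes.
maskformer_onnx_config = dict(output_names=['cls_logits', 'mask_logits'])
```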

oracle0101 commented 7 months ago

Hi, Mask2Former is a special case; you should use this deploy config instead:

https://github.com/open-mmlab/mmdeploy/blob/8b1958640408397d0cf98f202defa58878baf05b/tests/regression/mmdet.yml#L411

First of all, thank you very much for your reply. Following your suggestion, I modified the configuration and ran the following command:

python tools/deploy.py \
    configs/mmdet/panoptic-seg/panoptic-seg_maskformer_onnxruntime_dynamic.py \
    /root/mmdetection/configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py \
    /root/autodl-tmp/model_0019999.pth \
    /root/autodl-tmp/code/mmdeploy/demo/resources/det.jpg \
    --work-dir mmdeploy_models/mmdet/ort \
    --device cuda:0 \
    --show

But a new error appears, and I don't know how to fix it:

11/30 16:04:37 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
11/30 16:04:39 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
11/30 16:04:39 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: /root/autodl-tmp/model_0019999.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: model, trainer, iteration
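The unexpected keys `model`, `trainer`, `iteration` are the top-level entries of a Detectron2-style checkpoint (the layout the original Mask2Former repo uses for its `model_*.pth` files), whereas mmengine expects the weights at the top level or under `state_dict`. A hypothetical helper to unwrap such a file before loading (the function name is illustrative); note that even after unwrapping, Detectron2 parameter names generally do not match mmdetection's, so a checkpoint trained outside mmdetection usually still needs key remapping or retraining:

```python
def unwrap_checkpoint(ckpt: dict) -> dict:
    """Return a dict mmengine can load, with weights under 'state_dict'.

    Hypothetical helper: a Detectron2-style file nests the weights under
    'model' next to bookkeeping keys like 'trainer' and 'iteration'.
    """
    if 'state_dict' in ckpt:          # already mmengine-style
        return ckpt
    if 'model' in ckpt:               # Detectron2-style: weights under 'model'
        return {'state_dict': ckpt['model']}
    return {'state_dict': ckpt}       # bare state dict

# Example with a fake Detectron2-style checkpoint dict; in practice the
# dict would come from torch.load('model_0019999.pth', map_location='cpu').
d2_ckpt = {'model': {'backbone.conv1.weight': 0.0},
           'trainer': {}, 'iteration': 19999}
print(sorted(unwrap_checkpoint(d2_ckpt)['state_dict']))  # ['backbone.conv1.weight']
```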

missing keys in source state_dict: backbone.conv1.weight, backbone.bn1.weight, backbone.bn1.bias, backbone.bn1.running_mean, backbone.bn1.running_var, backbone.layer1.0.conv1.weight, backbone.layer1.0.bn1.weight, backbone.layer1.0.bn1.bias, backbone.layer1.0.bn1.running_mean, backbone.layer1.0.bn1.running_var, backbone.layer1.0.conv2.weight, backbone.layer1.0.bn2.weight, backbone.layer1.0.bn2.bias, backbone.layer1.0.bn2.running_mean, backbone.layer1.0.bn2.running_var, backbone.layer1.0.conv3.weight, backbone.layer1.0.bn3.weight, backbone.layer1.0.bn3.bias, backbone.layer1.0.bn3.running_mean, backbone.layer1.0.bn3.running_var, backbone.layer1.0.downsample.0.weight, backbone.layer1.0.downsample.1.weight, backbone.layer1.0.downsample.1.bias, backbone.layer1.0.downsample.1.running_mean, backbone.layer1.0.downsample.1.running_var, backbone.layer1.1.conv1.weight, backbone.layer1.1.bn1.weight, backbone.layer1.1.bn1.bias, backbone.layer1.1.bn1.running_mean, backbone.layer1.1.bn1.running_var, backbone.layer1.1.conv2.weight, backbone.layer1.1.bn2.weight, backbone.layer1.1.bn2.bias, backbone.layer1.1.bn2.running_mean, backbone.layer1.1.bn2.running_var, backbone.layer1.1.conv3.weight, backbone.layer1.1.bn3.weight, backbone.layer1.1.bn3.bias, backbone.layer1.1.bn3.running_mean, backbone.layer1.1.bn3.running_var, backbone.layer1.2.conv1.weight, backbone.layer1.2.bn1.weight, backbone.layer1.2.bn1.bias, backbone.layer1.2.bn1.running_mean, backbone.layer1.2.bn1.running_var, backbone.layer1.2.conv2.weight, backbone.layer1.2.bn2.weight, backbone.layer1.2.bn2.bias, backbone.layer1.2.bn2.running_mean, backbone.layer1.2.bn2.running_var, backbone.layer1.2.conv3.weight, backbone.layer1.2.bn3.weight, backbone.layer1.2.bn3.bias, backbone.layer1.2.bn3.running_mean, backbone.layer1.2.bn3.running_var, backbone.layer2.0.conv1.weight, backbone.layer2.0.bn1.weight, backbone.layer2.0.bn1.bias, backbone.layer2.0.bn1.running_mean, backbone.layer2.0.bn1.running_var, backbone.layer2.0.conv2.weight, 
backbone.layer2.0.bn2.weight, backbone.layer2.0.bn2.bias, backbone.layer2.0.bn2.running_mean, backbone.layer2.0.bn2.running_var, backbone.layer2.0.conv3.weight, backbone.layer2.0.bn3.weight, backbone.layer2.0.bn3.bias, backbone.layer2.0.bn3.running_mean, backbone.layer2.0.bn3.running_var, backbone.layer2.0.downsample.0.weight, backbone.layer2.0.downsample.1.weight, backbone.layer2.0.downsample.1.bias, backbone.layer2.0.downsample.1.running_mean, backbone.layer2.0.downsample.1.running_var, backbone.layer2.1.conv1.weight, backbone.layer2.1.bn1.weight, backbone.layer2.1.bn1.bias, backbone.layer2.1.bn1.running_mean, backbone.layer2.1.bn1.running_var, backbone.layer2.1.conv2.weight, backbone.layer2.1.bn2.weight, backbone.layer2.1.bn2.bias, backbone.layer2.1.bn2.running_mean, backbone.layer2.1.bn2.running_var, backbone.layer2.1.conv3.weight, backbone.layer2.1.bn3.weight, backbone.layer2.1.bn3.bias, backbone.layer2.1.bn3.running_mean, backbone.layer2.1.bn3.running_var, backbone.layer2.2.conv1.weight, backbone.layer2.2.bn1.weight, backbone.layer2.2.bn1.bias, backbone.layer2.2.bn1.running_mean, backbone.layer2.2.bn1.running_var, backbone.layer2.2.conv2.weight, backbone.layer2.2.bn2.weight, backbone.layer2.2.bn2.bias, backbone.layer2.2.bn2.running_mean, backbone.layer2.2.bn2.running_var, backbone.layer2.2.conv3.weight, backbone.layer2.2.bn3.weight, backbone.layer2.2.bn3.bias, backbone.layer2.2.bn3.running_mean, backbone.layer2.2.bn3.running_var, backbone.layer2.3.conv1.weight, backbone.layer2.3.bn1.weight, backbone.layer2.3.bn1.bias, backbone.layer2.3.bn1.running_mean, backbone.layer2.3.bn1.running_var, backbone.layer2.3.conv2.weight, backbone.layer2.3.bn2.weight, backbone.layer2.3.bn2.bias, backbone.layer2.3.bn2.running_mean, backbone.layer2.3.bn2.running_var, backbone.layer2.3.conv3.weight, backbone.layer2.3.bn3.weight, backbone.layer2.3.bn3.bias, backbone.layer2.3.bn3.running_mean, backbone.layer2.3.bn3.running_var, backbone.layer3.0.conv1.weight, 
backbone.layer3.0.bn1.weight, backbone.layer3.0.bn1.bias, backbone.layer3.0.bn1.running_mean, backbone.layer3.0.bn1.running_var, backbone.layer3.0.conv2.weight, backbone.layer3.0.bn2.weight, backbone.layer3.0.bn2.bias, backbone.layer3.0.bn2.running_mean, backbone.layer3.0.bn2.running_var, backbone.layer3.0.conv3.weight, backbone.layer3.0.bn3.weight, backbone.layer3.0.bn3.bias, backbone.layer3.0.bn3.running_mean, backbone.layer3.0.bn3.running_var, backbone.layer3.0.downsample.0.weight, backbone.layer3.0.downsample.1.weight, backbone.layer3.0.downsample.1.bias, backbone.layer3.0.downsample.1.running_mean, backbone.layer3.0.downsample.1.running_var, backbone.layer3.1.conv1.weight, backbone.layer3.1.bn1.weight, backbone.layer3.1.bn1.bias, backbone.layer3.1.bn1.running_mean, backbone.layer3.1.bn1.running_var, backbone.layer3.1.conv2.weight, backbone.layer3.1.bn2.weight, backbone.layer3.1.bn2.bias, backbone.layer3.1.bn2.running_mean, backbone.layer3.1.bn2.running_var, backbone.layer3.1.conv3.weight, backbone.layer3.1.bn3.weight, backbone.layer3.1.bn3.bias, backbone.layer3.1.bn3.running_mean, backbone.layer3.1.bn3.running_var, backbone.layer3.2.conv1.weight, backbone.layer3.2.bn1.weight, backbone.layer3.2.bn1.bias, backbone.layer3.2.bn1.running_mean, backbone.layer3.2.bn1.running_var, backbone.layer3.2.conv2.weight, backbone.layer3.2.bn2.weight, backbone.layer3.2.bn2.bias, backbone.layer3.2.bn2.running_mean, backbone.layer3.2.bn2.running_var, backbone.layer3.2.conv3.weight, backbone.layer3.2.bn3.weight, backbone.layer3.2.bn3.bias, backbone.layer3.2.bn3.running_mean, backbone.layer3.2.bn3.running_var, backbone.layer3.3.conv1.weight, backbone.layer3.3.bn1.weight, backbone.layer3.3.bn1.bias, backbone.layer3.3.bn1.running_mean, backbone.layer3.3.bn1.running_var, backbone.layer3.3.conv2.weight, backbone.layer3.3.bn2.weight, backbone.layer3.3.bn2.bias, backbone.layer3.3.bn2.running_mean, backbone.layer3.3.bn2.running_var, backbone.layer3.3.conv3.weight, 
backbone.layer3.3.bn3.weight, backbone.layer3.3.bn3.bias, backbone.layer3.3.bn3.running_mean, backbone.layer3.3.bn3.running_var, backbone.layer3.4.conv1.weight, backbone.layer3.4.bn1.weight, backbone.layer3.4.bn1.bias, backbone.layer3.4.bn1.running_mean, backbone.layer3.4.bn1.running_var, backbone.layer3.4.conv2.weight, backbone.layer3.4.bn2.weight, backbone.layer3.4.bn2.bias, backbone.layer3.4.bn2.running_mean, backbone.layer3.4.bn2.running_var, backbone.layer3.4.conv3.weight, backbone.layer3.4.bn3.weight, backbone.layer3.4.bn3.bias, backbone.layer3.4.bn3.running_mean, backbone.layer3.4.bn3.running_var, backbone.layer3.5.conv1.weight, backbone.layer3.5.bn1.weight, backbone.layer3.5.bn1.bias, backbone.layer3.5.bn1.running_mean, backbone.layer3.5.bn1.running_var, backbone.layer3.5.conv2.weight, backbone.layer3.5.bn2.weight, backbone.layer3.5.bn2.bias, backbone.layer3.5.bn2.running_mean, backbone.layer3.5.bn2.running_var, backbone.layer3.5.conv3.weight, backbone.layer3.5.bn3.weight, backbone.layer3.5.bn3.bias, backbone.layer3.5.bn3.running_mean, backbone.layer3.5.bn3.running_var, backbone.layer4.0.conv1.weight, backbone.layer4.0.bn1.weight, backbone.layer4.0.bn1.bias, backbone.layer4.0.bn1.running_mean, backbone.layer4.0.bn1.running_var, backbone.layer4.0.conv2.weight, backbone.layer4.0.bn2.weight, backbone.layer4.0.bn2.bias, backbone.layer4.0.bn2.running_mean, backbone.layer4.0.bn2.running_var, backbone.layer4.0.conv3.weight, backbone.layer4.0.bn3.weight, backbone.layer4.0.bn3.bias, backbone.layer4.0.bn3.running_mean, backbone.layer4.0.bn3.running_var, backbone.layer4.0.downsample.0.weight, backbone.layer4.0.downsample.1.weight, backbone.layer4.0.downsample.1.bias, backbone.layer4.0.downsample.1.running_mean, backbone.layer4.0.downsample.1.running_var, backbone.layer4.1.conv1.weight, backbone.layer4.1.bn1.weight, backbone.layer4.1.bn1.bias, backbone.layer4.1.bn1.running_mean, backbone.layer4.1.bn1.running_var, backbone.layer4.1.conv2.weight, 
backbone.layer4.1.bn2.weight, backbone.layer4.1.bn2.bias, backbone.layer4.1.bn2.running_mean, backbone.layer4.1.bn2.running_var, backbone.layer4.1.conv3.weight, backbone.layer4.1.bn3.weight, backbone.layer4.1.bn3.bias, backbone.layer4.1.bn3.running_mean, backbone.layer4.1.bn3.running_var, backbone.layer4.2.conv1.weight, backbone.layer4.2.bn1.weight, backbone.layer4.2.bn1.bias, backbone.layer4.2.bn1.running_mean, backbone.layer4.2.bn1.running_var, backbone.layer4.2.conv2.weight, backbone.layer4.2.bn2.weight, backbone.layer4.2.bn2.bias, backbone.layer4.2.bn2.running_mean, backbone.layer4.2.bn2.running_var, backbone.layer4.2.conv3.weight, backbone.layer4.2.bn3.weight, backbone.layer4.2.bn3.bias, backbone.layer4.2.bn3.running_mean, backbone.layer4.2.bn3.running_var, panoptic_head.pixel_decoder.input_convs.0.conv.weight, panoptic_head.pixel_decoder.input_convs.0.conv.bias, panoptic_head.pixel_decoder.input_convs.0.gn.weight, panoptic_head.pixel_decoder.input_convs.0.gn.bias, panoptic_head.pixel_decoder.input_convs.1.conv.weight, panoptic_head.pixel_decoder.input_convs.1.conv.bias, panoptic_head.pixel_decoder.input_convs.1.gn.weight, panoptic_head.pixel_decoder.input_convs.1.gn.bias, panoptic_head.pixel_decoder.input_convs.2.conv.weight, panoptic_head.pixel_decoder.input_convs.2.conv.bias, panoptic_head.pixel_decoder.input_convs.2.gn.weight, panoptic_head.pixel_decoder.input_convs.2.gn.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.0.self_attn.output_proj.weight, 
panoptic_head.pixel_decoder.encoder.layers.0.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.0.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.0.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.0.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.0.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.0.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.1.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.1.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.1.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.1.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.1.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.1.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.attention_weights.weight, 
panoptic_head.pixel_decoder.encoder.layers.2.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.2.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.2.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.2.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.2.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.2.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.2.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.3.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.3.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.3.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.3.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.3.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.3.norms.1.bias, 
panoptic_head.pixel_decoder.encoder.layers.4.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.4.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.4.ffn.layers.1.bias, panoptic_head.pixel_decoder.encoder.layers.4.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.4.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.4.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.4.norms.1.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.sampling_offsets.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.sampling_offsets.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.attention_weights.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.attention_weights.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.value_proj.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.value_proj.bias, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.output_proj.weight, panoptic_head.pixel_decoder.encoder.layers.5.self_attn.output_proj.bias, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.0.0.weight, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.0.0.bias, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.1.weight, panoptic_head.pixel_decoder.encoder.layers.5.ffn.layers.1.bias, 
panoptic_head.pixel_decoder.encoder.layers.5.norms.0.weight, panoptic_head.pixel_decoder.encoder.layers.5.norms.0.bias, panoptic_head.pixel_decoder.encoder.layers.5.norms.1.weight, panoptic_head.pixel_decoder.encoder.layers.5.norms.1.bias, panoptic_head.pixel_decoder.level_encoding.weight, panoptic_head.pixel_decoder.lateral_convs.0.conv.weight, panoptic_head.pixel_decoder.lateral_convs.0.gn.weight, panoptic_head.pixel_decoder.lateral_convs.0.gn.bias, panoptic_head.pixel_decoder.output_convs.0.conv.weight, panoptic_head.pixel_decoder.output_convs.0.gn.weight, panoptic_head.pixel_decoder.output_convs.0.gn.bias, panoptic_head.pixel_decoder.mask_feature.weight, panoptic_head.pixel_decoder.mask_feature.bias, panoptic_head.transformer_decoder.layers.0.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.0.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.0.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.0.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.0.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.0.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.0.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.0.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.0.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.0.norms.0.weight, panoptic_head.transformer_decoder.layers.0.norms.0.bias, panoptic_head.transformer_decoder.layers.0.norms.1.weight, panoptic_head.transformer_decoder.layers.0.norms.1.bias, panoptic_head.transformer_decoder.layers.0.norms.2.weight, panoptic_head.transformer_decoder.layers.0.norms.2.bias, panoptic_head.transformer_decoder.layers.1.self_attn.attn.in_proj_weight, 
panoptic_head.transformer_decoder.layers.1.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.1.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.1.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.1.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.1.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.1.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.1.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.1.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.1.norms.0.weight, panoptic_head.transformer_decoder.layers.1.norms.0.bias, panoptic_head.transformer_decoder.layers.1.norms.1.weight, panoptic_head.transformer_decoder.layers.1.norms.1.bias, panoptic_head.transformer_decoder.layers.1.norms.2.weight, panoptic_head.transformer_decoder.layers.1.norms.2.bias, panoptic_head.transformer_decoder.layers.2.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.2.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.2.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.2.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.2.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.2.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.2.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.2.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.2.ffn.layers.1.bias, 
panoptic_head.transformer_decoder.layers.2.norms.0.weight, panoptic_head.transformer_decoder.layers.2.norms.0.bias, panoptic_head.transformer_decoder.layers.2.norms.1.weight, panoptic_head.transformer_decoder.layers.2.norms.1.bias, panoptic_head.transformer_decoder.layers.2.norms.2.weight, panoptic_head.transformer_decoder.layers.2.norms.2.bias, panoptic_head.transformer_decoder.layers.3.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.3.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.3.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.3.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.3.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.3.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.3.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.3.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.3.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.3.norms.0.weight, panoptic_head.transformer_decoder.layers.3.norms.0.bias, panoptic_head.transformer_decoder.layers.3.norms.1.weight, panoptic_head.transformer_decoder.layers.3.norms.1.bias, panoptic_head.transformer_decoder.layers.3.norms.2.weight, panoptic_head.transformer_decoder.layers.3.norms.2.bias, panoptic_head.transformer_decoder.layers.4.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.4.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.4.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.4.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.4.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.4.cross_attn.attn.in_proj_bias, 
panoptic_head.transformer_decoder.layers.4.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.4.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.4.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.4.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.4.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.4.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.4.norms.0.weight, panoptic_head.transformer_decoder.layers.4.norms.0.bias, panoptic_head.transformer_decoder.layers.4.norms.1.weight, panoptic_head.transformer_decoder.layers.4.norms.1.bias, panoptic_head.transformer_decoder.layers.4.norms.2.weight, panoptic_head.transformer_decoder.layers.4.norms.2.bias, panoptic_head.transformer_decoder.layers.5.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.5.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.5.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.5.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.5.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.5.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.5.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.5.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.5.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.5.norms.0.weight, panoptic_head.transformer_decoder.layers.5.norms.0.bias, panoptic_head.transformer_decoder.layers.5.norms.1.weight, panoptic_head.transformer_decoder.layers.5.norms.1.bias, panoptic_head.transformer_decoder.layers.5.norms.2.weight, panoptic_head.transformer_decoder.layers.5.norms.2.bias, 
panoptic_head.transformer_decoder.layers.6.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.6.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.6.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.6.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.6.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.6.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.6.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.6.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.6.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.6.norms.0.weight, panoptic_head.transformer_decoder.layers.6.norms.0.bias, panoptic_head.transformer_decoder.layers.6.norms.1.weight, panoptic_head.transformer_decoder.layers.6.norms.1.bias, panoptic_head.transformer_decoder.layers.6.norms.2.weight, panoptic_head.transformer_decoder.layers.6.norms.2.bias, panoptic_head.transformer_decoder.layers.7.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.7.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.7.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.7.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.7.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.7.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.7.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.7.ffn.layers.1.weight, 
panoptic_head.transformer_decoder.layers.7.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.7.norms.0.weight, panoptic_head.transformer_decoder.layers.7.norms.0.bias, panoptic_head.transformer_decoder.layers.7.norms.1.weight, panoptic_head.transformer_decoder.layers.7.norms.1.bias, panoptic_head.transformer_decoder.layers.7.norms.2.weight, panoptic_head.transformer_decoder.layers.7.norms.2.bias, panoptic_head.transformer_decoder.layers.8.self_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.8.self_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.8.self_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.8.self_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.in_proj_weight, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.in_proj_bias, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.out_proj.weight, panoptic_head.transformer_decoder.layers.8.cross_attn.attn.out_proj.bias, panoptic_head.transformer_decoder.layers.8.ffn.layers.0.0.weight, panoptic_head.transformer_decoder.layers.8.ffn.layers.0.0.bias, panoptic_head.transformer_decoder.layers.8.ffn.layers.1.weight, panoptic_head.transformer_decoder.layers.8.ffn.layers.1.bias, panoptic_head.transformer_decoder.layers.8.norms.0.weight, panoptic_head.transformer_decoder.layers.8.norms.0.bias, panoptic_head.transformer_decoder.layers.8.norms.1.weight, panoptic_head.transformer_decoder.layers.8.norms.1.bias, panoptic_head.transformer_decoder.layers.8.norms.2.weight, panoptic_head.transformer_decoder.layers.8.norms.2.bias, panoptic_head.transformer_decoder.post_norm.weight, panoptic_head.transformer_decoder.post_norm.bias, panoptic_head.query_embed.weight, panoptic_head.query_feat.weight, panoptic_head.level_embed.weight, panoptic_head.cls_embed.weight, panoptic_head.cls_embed.bias, panoptic_head.mask_embed.0.weight, panoptic_head.mask_embed.0.bias, panoptic_head.mask_embed.2.weight, 
panoptic_head.mask_embed.2.bias, panoptic_head.mask_embed.4.weight, panoptic_head.mask_embed.4.bias
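The parameter names above come from the checkpoint-loading warning, which suggests the keys in `model_0019999.pth` do not line up with what the mmdet model expects. A quick way to diagnose this kind of mismatch is to diff the two key sets; below is a minimal sketch (in practice the key lists would come from `torch.load(ckpt_path)['state_dict'].keys()` and `model.state_dict().keys()`; the keys shown here are hypothetical):

```python
def diff_state_dict_keys(ckpt_keys, model_keys):
    """Return (missing_in_ckpt, unexpected_in_ckpt) as sorted lists."""
    ckpt, model = set(ckpt_keys), set(model_keys)
    return sorted(model - ckpt), sorted(ckpt - model)

# Toy illustration with made-up keys:
missing, unexpected = diff_state_dict_keys(
    ["backbone.conv1.weight", "roi_head.fc.weight"],              # keys found in the checkpoint
    ["backbone.conv1.weight", "panoptic_head.cls_embed.weight"],  # keys the model expects
)
print(missing)     # keys the model expects but the checkpoint lacks
print(unexpected)  # keys in the checkpoint the model does not use
```

If every `panoptic_head.*` key shows up as missing, the checkpoint was likely saved in a different naming scheme (e.g. trained outside mmdetection) and needs to be converted before export.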

11/30 16:04:45 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future. 11/30 16:04:45 - mmengine - INFO - Export PyTorch model to ONNX: mmdeploy_models/mmdet/ort/end2end.onnx. 11/30 16:04:45 - mmengine - WARNING - Can not find torch.nn.functional.scaled_dot_product_attention, function rewrite will not be applied 11/30 16:04:45 - mmengine - WARNING - Can not find torch._C._jit_pass_onnx_autograd_function_process, function rewrite will not be applied /root/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py:2359: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:])) /root/mmdetection/mmdet/models/layers/positional_encoding.py:84: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). dim_t = self.temperature(2 * (dim_t // 2) / self.num_feats) /root/miniconda3/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.) 
return _VF.meshgrid(tensors, *kwargs) # type: ignore[attr-defined] /root/mmdetection/mmdet/models/layers/msdeformattn_pixel_decoder.py:180: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! factor = feat.new_tensor([[w, h]]) self.strides[level_idx] /root/mmdetection/mmdet/models/layers/msdeformattn_pixel_decoder.py:203: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. spatial_shapes = torch.as_tensor( /root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:335: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert (spatial_shapes[:, 0] spatial_shapes[:, 1]).sum() == num_value /root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:351: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if reference_points.shape[-1] == 2: /root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:136: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results). 
valuelist = value.split([H W for H, W_ in value_spatial_shapes], /root/miniconda3/lib/python3.8/site-packages/mmcv/ops/multi_scale_deformattn.py:140: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results). for level, (H, W_) in enumerate(value_spatial_shapes): /root/mmdetection/mmdet/models/layers/msdeformattn_pixel_decoder.py:226: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results). num_queries_per_level = [e[0] * e[1] for e in spatial_shapes] WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. 
WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. 
WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. 
Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. 
WARNING: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
[the warning above repeats several more times]
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
[the warning above repeats many more times]
11/30 16:06:20 - mmengine - INFO - Execute onnx optimize passes.
11/30 16:06:22 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
11/30 16:06:23 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
11/30 16:06:23 - mmengine - INFO - Finish pipeline mmdeploy.apis.utils.utils.to_backend
11/30 16:06:23 - mmengine - INFO - visualize onnxruntime model start.
11/30 16:06:28 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
11/30 16:06:28 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
11/30 16:06:28 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "backend_detectors" registry tree. As a workaround, the current "backend_detectors" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
11/30 16:06:28 - mmengine - WARNING - The library of onnxruntime custom ops doesnot exist:
/root/miniconda3/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:53: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
  warnings.warn("Specified provider '{}' is not in available provider names."
2023-11-30:16:06:29 - root - ERROR - [ONNXRuntimeError] : 1 : FAIL : Load model from mmdeploy_models/mmdet/ort/end2end.onnx failed:Fatal error: grid_sampler is not a registered function/op
Traceback (most recent call last):
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/utils/utils.py", line 41, in target_wrapper
    result = target(*args, **kwargs)
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/apis/visualize.py", line 65, in visualize_model
    model = task_processor.build_backend_model(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection.py", line 159, in build_backend_model
    model = build_object_detection_model(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 1111, in build_object_detection_model
    backend_detector = BACKEND_MODEL.build(
  File "/root/miniconda3/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/root/miniconda3/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 340, in __init__
    super(PanOpticEnd2EndModel, self).__init__(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 56, in __init__
    self._init_wrapper(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 70, in _init_wrapper
    self.wrapper = BaseBackendModel._build_wrapper(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/codebase/base/backend_model.py", line 65, in _build_wrapper
    return backend_mgr.build_wrapper(backend_files, device, input_names,
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/backend/onnxruntime/backend_manager.py", line 35, in build_wrapper
    return ORTWrapper(
  File "/root/autodl-tmp/code/mmdeploy/mmdeploy/backend/onnxruntime/wrapper.py", line 63, in __init__
    sess = ort.InferenceSession(
  File "/root/miniconda3/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/root/miniconda3/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 310, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mmdeploy_models/mmdet/ort/end2end.onnx failed:Fatal error: grid_sampler is not a registered function/op
11/30 16:06:29 - mmengine - ERROR - tools/deploy.py - create_process - 82 - visualize onnxruntime model failed.

oracle0101 commented 7 months ago

I want to convert the instance segmentation weights from my Mask2Former training to ONNX format, and I wonder whether using the panoptic segmentation configuration will affect this.

github-actions[bot] commented 7 months ago

This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.

github-actions[bot] commented 6 months ago

This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.

chuzhixing commented 6 months ago

I encountered the same problem. In the mmdeploy output log, the pth-to-onnx conversion succeeded, but the visualization step failed. The error it reported is as follows.

error log: "onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from mask2former_swin-s-p4-w7-224_8xb2-lsj-50e_coco_dy/end2end.onnx failed:Fatal error: mmdeploy:grid_sampler(-1) is not a registered function/op"

According to [1], "grid_sampler is a custom op for onnxruntime and you have to load the built custom lib mmdeploy/lib/libmmdeploy_onnxruntime_ops.so in your onnxruntime inference code".
So, following [2] and [3], I compiled mmdeploy to obtain libmmdeploy_onnxruntime_ops.so and then used tools/deploy.py to export the ONNX model. The visualization step no longer reported any errors, and the inference result image appears in the output directory.
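For reference, a minimal sketch of those build steps, assuming `ONNXRUNTIME_DIR` points at an extracted onnxruntime release and mmdeploy is checked out locally; see [2] and [3] for the authoritative instructions:

```shell
# Sketch only -- consult [2]/[3] for the full, authoritative steps.
# Assumes ONNXRUNTIME_DIR points at an extracted onnxruntime release.
cd mmdeploy
mkdir -p build && cd build
cmake .. -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}
make -j"$(nproc)"
# On success this produces mmdeploy/lib/libmmdeploy_onnxruntime_ops.so
```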

After compiling mmdeploy to obtain libmmdeploy_onnxruntime_ops.so, you can check whether the exported ONNX model loads correctly with the following script.

import onnxruntime as ort

# Custom-ops library built from mmdeploy source (see [2]/[3])
path = "libmmdeploy_onnxruntime_ops.so"
model_path = "mask2former_swin-s-p4-w7-224_8xb2-lsj-50e_coco_dy_v03/end2end.onnx"

# Register the custom ops (e.g. mmdeploy::grid_sampler) before creating the session
session_options = ort.SessionOptions()
session_options.register_custom_ops_library(path)
ort_session = ort.InferenceSession(model_path, session_options)
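To see why a plain `ort.InferenceSession(model_path)` call fails, it can also help to list the operator domains used in the exported graph: any node in the `mmdeploy` domain (such as `grid_sampler`) needs the custom-ops library registered first. A small sketch of that check; the helper name `nonstandard_domains` is my own, not an mmdeploy or onnxruntime API:

```python
def nonstandard_domains(node_domains):
    """Return the non-standard ONNX op domains found in a graph.

    node_domains: the `domain` attribute of each node in
    onnx.load(model_path).graph.node; standard ops use '' or 'ai.onnx'.
    """
    standard = {"", "ai.onnx", "ai.onnx.ml"}
    return {d for d in node_domains if d not in standard}

# A Mask2Former export typically contains nodes like mmdeploy::grid_sampler:
domains = ["", "", "mmdeploy", "ai.onnx"]
print(nonstandard_domains(domains))  # -> {'mmdeploy'}
```

If the result contains 'mmdeploy', register libmmdeploy_onnxruntime_ops.so via `SessionOptions.register_custom_ops_library` before creating the session, as in the script above.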

Reference:
[1] https://github.com/open-mmlab/mmdeploy/issues/2377 ([Potential Bug] FAIL : Load model from end2end.onnx failed:Fatal error: mmdeploy:grid_sampler(-1) is not a registered function/op · Issue #2377 · open-mmlab/mmdeploy)
[2] https://mmdeploy.readthedocs.io/zh-cn/latest/01-how-to-build/build_from_source.html (Build from Source — mmdeploy 1.3.1 documentation, in Chinese)
[3] https://mmdeploy.readthedocs.io/zh-cn/latest/01-how-to-build/linux-x86_64.html (Build for Linux-x86_64 — mmdeploy 1.3.1 documentation, in Chinese)

Here is my export output log.

(openmmlab_3_p38) root@xxxx:/home/mmdeploy_1.3.1# python tools/deploy.py \
>     configs/mmdet/panoptic-seg/panoptic-seg_maskformer_onnxruntime_dynamic.py \
>     /home/mmdetection_3.2.0/configs/mask2former/mask2former_swin-s-p4-w7-224_8xb2-lsj-50e_coco.py \
>     /home/mmdetection_3.2.0/checkpoints/mask2former_swin-t-p4-w7-224_8xb2-lsj-50e_coco_20220508_091649-01b0f990.pth \
>     /home/mmdetection_3.2.0/demo/demo.jpg \
>     --work-dir mask2former_swin-s-p4-w7-224_8xb2-lsj-50e_coco_dy_v04 \
>     --device cpu \
>     --log-level INFO \
>     --show \
>     --dump-info
01/04 23:26:50 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:26:50 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:26:52 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
01/04 23:26:53 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:26:53 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: /home/mmdetection_3.2.0/checkpoints/mask2former_swin-t-p4-w7-224_8xb2-lsj-50e_coco_20220508_091649-01b0f990.pth
The model and loaded state dict do not match exactly

missing keys in source state_dict: backbone.stages.2.blocks.6.norm1.weight, backbone.stages.2.blocks.6.norm1.bias, backbone.stages.2.blocks.6.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.6.attn.w_msa.relative_position_index, backbone.stages.2.blocks.6.attn.w_msa.qkv.weight, backbone.stages.2.blocks.6.attn.w_msa.qkv.bias, backbone.stages.2.blocks.6.attn.w_msa.proj.weight, backbone.stages.2.blocks.6.attn.w_msa.proj.bias, backbone.stages.2.blocks.6.norm2.weight, backbone.stages.2.blocks.6.norm2.bias, backbone.stages.2.blocks.6.ffn.layers.0.0.weight, backbone.stages.2.blocks.6.ffn.layers.0.0.bias, backbone.stages.2.blocks.6.ffn.layers.1.weight, backbone.stages.2.blocks.6.ffn.layers.1.bias, backbone.stages.2.blocks.7.norm1.weight, backbone.stages.2.blocks.7.norm1.bias, backbone.stages.2.blocks.7.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.7.attn.w_msa.relative_position_index, backbone.stages.2.blocks.7.attn.w_msa.qkv.weight, backbone.stages.2.blocks.7.attn.w_msa.qkv.bias, backbone.stages.2.blocks.7.attn.w_msa.proj.weight, backbone.stages.2.blocks.7.attn.w_msa.proj.bias, backbone.stages.2.blocks.7.norm2.weight, backbone.stages.2.blocks.7.norm2.bias, backbone.stages.2.blocks.7.ffn.layers.0.0.weight, backbone.stages.2.blocks.7.ffn.layers.0.0.bias, backbone.stages.2.blocks.7.ffn.layers.1.weight, backbone.stages.2.blocks.7.ffn.layers.1.bias, backbone.stages.2.blocks.8.norm1.weight, backbone.stages.2.blocks.8.norm1.bias, backbone.stages.2.blocks.8.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.8.attn.w_msa.relative_position_index, backbone.stages.2.blocks.8.attn.w_msa.qkv.weight, backbone.stages.2.blocks.8.attn.w_msa.qkv.bias, backbone.stages.2.blocks.8.attn.w_msa.proj.weight, backbone.stages.2.blocks.8.attn.w_msa.proj.bias, backbone.stages.2.blocks.8.norm2.weight, backbone.stages.2.blocks.8.norm2.bias, backbone.stages.2.blocks.8.ffn.layers.0.0.weight, backbone.stages.2.blocks.8.ffn.layers.0.0.bias, 
backbone.stages.2.blocks.8.ffn.layers.1.weight, backbone.stages.2.blocks.8.ffn.layers.1.bias, backbone.stages.2.blocks.9.norm1.weight, backbone.stages.2.blocks.9.norm1.bias, backbone.stages.2.blocks.9.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.9.attn.w_msa.relative_position_index, backbone.stages.2.blocks.9.attn.w_msa.qkv.weight, backbone.stages.2.blocks.9.attn.w_msa.qkv.bias, backbone.stages.2.blocks.9.attn.w_msa.proj.weight, backbone.stages.2.blocks.9.attn.w_msa.proj.bias, backbone.stages.2.blocks.9.norm2.weight, backbone.stages.2.blocks.9.norm2.bias, backbone.stages.2.blocks.9.ffn.layers.0.0.weight, backbone.stages.2.blocks.9.ffn.layers.0.0.bias, backbone.stages.2.blocks.9.ffn.layers.1.weight, backbone.stages.2.blocks.9.ffn.layers.1.bias, backbone.stages.2.blocks.10.norm1.weight, backbone.stages.2.blocks.10.norm1.bias, backbone.stages.2.blocks.10.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.10.attn.w_msa.relative_position_index, backbone.stages.2.blocks.10.attn.w_msa.qkv.weight, backbone.stages.2.blocks.10.attn.w_msa.qkv.bias, backbone.stages.2.blocks.10.attn.w_msa.proj.weight, backbone.stages.2.blocks.10.attn.w_msa.proj.bias, backbone.stages.2.blocks.10.norm2.weight, backbone.stages.2.blocks.10.norm2.bias, backbone.stages.2.blocks.10.ffn.layers.0.0.weight, backbone.stages.2.blocks.10.ffn.layers.0.0.bias, backbone.stages.2.blocks.10.ffn.layers.1.weight, backbone.stages.2.blocks.10.ffn.layers.1.bias, backbone.stages.2.blocks.11.norm1.weight, backbone.stages.2.blocks.11.norm1.bias, backbone.stages.2.blocks.11.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.11.attn.w_msa.relative_position_index, backbone.stages.2.blocks.11.attn.w_msa.qkv.weight, backbone.stages.2.blocks.11.attn.w_msa.qkv.bias, backbone.stages.2.blocks.11.attn.w_msa.proj.weight, backbone.stages.2.blocks.11.attn.w_msa.proj.bias, backbone.stages.2.blocks.11.norm2.weight, backbone.stages.2.blocks.11.norm2.bias, 
backbone.stages.2.blocks.11.ffn.layers.0.0.weight, backbone.stages.2.blocks.11.ffn.layers.0.0.bias, backbone.stages.2.blocks.11.ffn.layers.1.weight, backbone.stages.2.blocks.11.ffn.layers.1.bias, backbone.stages.2.blocks.12.norm1.weight, backbone.stages.2.blocks.12.norm1.bias, backbone.stages.2.blocks.12.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.12.attn.w_msa.relative_position_index, backbone.stages.2.blocks.12.attn.w_msa.qkv.weight, backbone.stages.2.blocks.12.attn.w_msa.qkv.bias, backbone.stages.2.blocks.12.attn.w_msa.proj.weight, backbone.stages.2.blocks.12.attn.w_msa.proj.bias, backbone.stages.2.blocks.12.norm2.weight, backbone.stages.2.blocks.12.norm2.bias, backbone.stages.2.blocks.12.ffn.layers.0.0.weight, backbone.stages.2.blocks.12.ffn.layers.0.0.bias, backbone.stages.2.blocks.12.ffn.layers.1.weight, backbone.stages.2.blocks.12.ffn.layers.1.bias, backbone.stages.2.blocks.13.norm1.weight, backbone.stages.2.blocks.13.norm1.bias, backbone.stages.2.blocks.13.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.13.attn.w_msa.relative_position_index, backbone.stages.2.blocks.13.attn.w_msa.qkv.weight, backbone.stages.2.blocks.13.attn.w_msa.qkv.bias, backbone.stages.2.blocks.13.attn.w_msa.proj.weight, backbone.stages.2.blocks.13.attn.w_msa.proj.bias, backbone.stages.2.blocks.13.norm2.weight, backbone.stages.2.blocks.13.norm2.bias, backbone.stages.2.blocks.13.ffn.layers.0.0.weight, backbone.stages.2.blocks.13.ffn.layers.0.0.bias, backbone.stages.2.blocks.13.ffn.layers.1.weight, backbone.stages.2.blocks.13.ffn.layers.1.bias, backbone.stages.2.blocks.14.norm1.weight, backbone.stages.2.blocks.14.norm1.bias, backbone.stages.2.blocks.14.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.14.attn.w_msa.relative_position_index, backbone.stages.2.blocks.14.attn.w_msa.qkv.weight, backbone.stages.2.blocks.14.attn.w_msa.qkv.bias, backbone.stages.2.blocks.14.attn.w_msa.proj.weight, 
backbone.stages.2.blocks.14.attn.w_msa.proj.bias, backbone.stages.2.blocks.14.norm2.weight, backbone.stages.2.blocks.14.norm2.bias, backbone.stages.2.blocks.14.ffn.layers.0.0.weight, backbone.stages.2.blocks.14.ffn.layers.0.0.bias, backbone.stages.2.blocks.14.ffn.layers.1.weight, backbone.stages.2.blocks.14.ffn.layers.1.bias, backbone.stages.2.blocks.15.norm1.weight, backbone.stages.2.blocks.15.norm1.bias, backbone.stages.2.blocks.15.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.15.attn.w_msa.relative_position_index, backbone.stages.2.blocks.15.attn.w_msa.qkv.weight, backbone.stages.2.blocks.15.attn.w_msa.qkv.bias, backbone.stages.2.blocks.15.attn.w_msa.proj.weight, backbone.stages.2.blocks.15.attn.w_msa.proj.bias, backbone.stages.2.blocks.15.norm2.weight, backbone.stages.2.blocks.15.norm2.bias, backbone.stages.2.blocks.15.ffn.layers.0.0.weight, backbone.stages.2.blocks.15.ffn.layers.0.0.bias, backbone.stages.2.blocks.15.ffn.layers.1.weight, backbone.stages.2.blocks.15.ffn.layers.1.bias, backbone.stages.2.blocks.16.norm1.weight, backbone.stages.2.blocks.16.norm1.bias, backbone.stages.2.blocks.16.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.16.attn.w_msa.relative_position_index, backbone.stages.2.blocks.16.attn.w_msa.qkv.weight, backbone.stages.2.blocks.16.attn.w_msa.qkv.bias, backbone.stages.2.blocks.16.attn.w_msa.proj.weight, backbone.stages.2.blocks.16.attn.w_msa.proj.bias, backbone.stages.2.blocks.16.norm2.weight, backbone.stages.2.blocks.16.norm2.bias, backbone.stages.2.blocks.16.ffn.layers.0.0.weight, backbone.stages.2.blocks.16.ffn.layers.0.0.bias, backbone.stages.2.blocks.16.ffn.layers.1.weight, backbone.stages.2.blocks.16.ffn.layers.1.bias, backbone.stages.2.blocks.17.norm1.weight, backbone.stages.2.blocks.17.norm1.bias, backbone.stages.2.blocks.17.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.17.attn.w_msa.relative_position_index, backbone.stages.2.blocks.17.attn.w_msa.qkv.weight, 
backbone.stages.2.blocks.17.attn.w_msa.qkv.bias, backbone.stages.2.blocks.17.attn.w_msa.proj.weight, backbone.stages.2.blocks.17.attn.w_msa.proj.bias, backbone.stages.2.blocks.17.norm2.weight, backbone.stages.2.blocks.17.norm2.bias, backbone.stages.2.blocks.17.ffn.layers.0.0.weight, backbone.stages.2.blocks.17.ffn.layers.0.0.bias, backbone.stages.2.blocks.17.ffn.layers.1.weight, backbone.stages.2.blocks.17.ffn.layers.1.bias

01/04 23:26:55 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future. 
01/04 23:26:55 - mmengine - INFO - Export PyTorch model to ONNX: mask2former_swin-s-p4-w7-224_8xb2-lsj-50e_coco_dy_v04/end2end.onnx.
01/04 23:26:55 - mmengine - WARNING - Can not find torch.nn.functional.scaled_dot_product_attention, function rewrite will not be applied
/home/mmdetection_3.2.0/mmdet/models/layers/transformer/utils.py:167: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  output_h = math.ceil(input_h / stride_h)
/home/mmdetection_3.2.0/mmdet/models/layers/transformer/utils.py:168: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  output_w = math.ceil(input_w / stride_w)
/home/mmdetection_3.2.0/mmdet/models/layers/transformer/utils.py:169: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  pad_h = max((output_h - 1) * stride_h +
/home/mmdetection_3.2.0/mmdet/models/layers/transformer/utils.py:171: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  pad_w = max((output_w - 1) * stride_w +
/home/mmdetection_3.2.0/mmdet/models/layers/transformer/utils.py:177: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if pad_h > 0 or pad_w > 0:
/home/mmdeploy_1.3.1/mmdeploy/codebase/mmdet/models/backbones.py:189: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert L == H * W, 'input feature has wrong size'
/home/mmdetection_3.2.0/mmdet/models/backbones/swin.py:267: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  B = int(windows.shape[0] / (H * W / window_size / window_size))
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/mmcv/cnn/bricks/wrappers.py:167: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)):
/home/mmdetection_3.2.0/mmdet/models/layers/transformer/utils.py:414: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert L == H * W, 'input feature has wrong size'
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:335: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:351: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if reference_points.shape[-1] == 2:
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:136: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes],
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/mmcv/ops/multi_scale_deform_attn.py:140: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  for level, (H_, W_) in enumerate(value_spatial_shapes):
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/mmcv/cnn/bricks/wrappers.py:44: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
  _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/onnx/_internal/jit_utils.py:258: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
  _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
  _C._jit_pass_onnx_graph_shape_type_inference(
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/onnx/utils.py:687: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
  _C._jit_pass_onnx_graph_shape_type_inference(
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of mmdeploy::grid_sampler type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
  _C._jit_pass_onnx_graph_shape_type_inference(
/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
  _C._jit_pass_onnx_graph_shape_type_inference(
01/04 23:28:34 - mmengine - INFO - Execute onnx optimize passes.
01/04 23:28:34 - mmengine - WARNING - Can not optimize model, please build torchscipt extension.
More details: https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/experimental/onnx_optimizer.md
01/04 23:28:37 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
01/04 23:28:39 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
01/04 23:28:39 - mmengine - INFO - Finish pipeline mmdeploy.apis.utils.utils.to_backend
01/04 23:28:39 - mmengine - INFO - visualize onnxruntime model start.
01/04 23:28:42 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:28:42 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:28:42 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "backend_detectors" registry tree. As a workaround, the current "backend_detectors" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:28:42 - mmengine - INFO - Successfully loaded onnxruntime custom ops from /home/mmdeploy_1.3.1/mmdeploy/lib/libmmdeploy_onnxruntime_ops.so
01/04 23:28:56 - mmengine - WARNING - render and display result skipped for headless device, exception No module named 'tkinter'
01/04 23:28:57 - mmengine - INFO - visualize onnxruntime model success.
01/04 23:28:57 - mmengine - INFO - visualize pytorch model start.
01/04 23:29:00 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
01/04 23:29:00 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: /home/mmdetection_3.2.0/checkpoints/mask2former_swin-t-p4-w7-224_8xb2-lsj-50e_coco_20220508_091649-01b0f990.pth
The model and loaded state dict do not match exactly

missing keys in source state_dict: [same list of backbone.stages.2.blocks.* keys as in the first checkpoint load above]
backbone.stages.2.blocks.14.attn.w_msa.proj.bias, backbone.stages.2.blocks.14.norm2.weight, backbone.stages.2.blocks.14.norm2.bias, backbone.stages.2.blocks.14.ffn.layers.0.0.weight, backbone.stages.2.blocks.14.ffn.layers.0.0.bias, backbone.stages.2.blocks.14.ffn.layers.1.weight, backbone.stages.2.blocks.14.ffn.layers.1.bias, backbone.stages.2.blocks.15.norm1.weight, backbone.stages.2.blocks.15.norm1.bias, backbone.stages.2.blocks.15.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.15.attn.w_msa.relative_position_index, backbone.stages.2.blocks.15.attn.w_msa.qkv.weight, backbone.stages.2.blocks.15.attn.w_msa.qkv.bias, backbone.stages.2.blocks.15.attn.w_msa.proj.weight, backbone.stages.2.blocks.15.attn.w_msa.proj.bias, backbone.stages.2.blocks.15.norm2.weight, backbone.stages.2.blocks.15.norm2.bias, backbone.stages.2.blocks.15.ffn.layers.0.0.weight, backbone.stages.2.blocks.15.ffn.layers.0.0.bias, backbone.stages.2.blocks.15.ffn.layers.1.weight, backbone.stages.2.blocks.15.ffn.layers.1.bias, backbone.stages.2.blocks.16.norm1.weight, backbone.stages.2.blocks.16.norm1.bias, backbone.stages.2.blocks.16.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.16.attn.w_msa.relative_position_index, backbone.stages.2.blocks.16.attn.w_msa.qkv.weight, backbone.stages.2.blocks.16.attn.w_msa.qkv.bias, backbone.stages.2.blocks.16.attn.w_msa.proj.weight, backbone.stages.2.blocks.16.attn.w_msa.proj.bias, backbone.stages.2.blocks.16.norm2.weight, backbone.stages.2.blocks.16.norm2.bias, backbone.stages.2.blocks.16.ffn.layers.0.0.weight, backbone.stages.2.blocks.16.ffn.layers.0.0.bias, backbone.stages.2.blocks.16.ffn.layers.1.weight, backbone.stages.2.blocks.16.ffn.layers.1.bias, backbone.stages.2.blocks.17.norm1.weight, backbone.stages.2.blocks.17.norm1.bias, backbone.stages.2.blocks.17.attn.w_msa.relative_position_bias_table, backbone.stages.2.blocks.17.attn.w_msa.relative_position_index, backbone.stages.2.blocks.17.attn.w_msa.qkv.weight, 
backbone.stages.2.blocks.17.attn.w_msa.qkv.bias, backbone.stages.2.blocks.17.attn.w_msa.proj.weight, backbone.stages.2.blocks.17.attn.w_msa.proj.bias, backbone.stages.2.blocks.17.norm2.weight, backbone.stages.2.blocks.17.norm2.bias, backbone.stages.2.blocks.17.ffn.layers.0.0.weight, backbone.stages.2.blocks.17.ffn.layers.0.0.bias, backbone.stages.2.blocks.17.ffn.layers.1.weight, backbone.stages.2.blocks.17.ffn.layers.1.bias

/root/.virtualenvs/openmmlab_3_p38/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
01/04 23:29:09 - mmengine - WARNING - render and display result skipped for headless device, exception No module named 'tkinter'
01/04 23:29:10 - mmengine - INFO - visualize pytorch model success.
01/04 23:29:10 - mmengine - INFO - All process success.
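The "model and loaded state dict do not match exactly" warning above means the checkpoint is missing weights the built model expects; the missing keys contain `attn.w_msa` (Swin windowed attention), which suggests the checkpoint and the `mask2former_r50` config may not belong to the same backbone. A minimal sketch for inspecting this yourself (the helper name and the toy usage are illustrative, not part of mmdeploy):

```python
# Sketch: diff the keys stored in a checkpoint against the keys the built
# model expects, to see whether the config and the weights actually match.
import torch


def diff_state_dicts(ckpt_keys, model_keys):
    """Return (missing, unexpected):
    - missing: keys the model expects but the checkpoint lacks
    - unexpected: keys the checkpoint has but the model does not
    """
    ckpt_keys, model_keys = set(ckpt_keys), set(model_keys)
    missing = sorted(model_keys - ckpt_keys)
    unexpected = sorted(ckpt_keys - model_keys)
    return missing, unexpected


if __name__ == "__main__":
    # In practice, load the real checkpoint and model, e.g.:
    #   ckpt = torch.load("/root/autodl-tmp/model_0019999.pth",
    #                     map_location="cpu")
    #   ckpt_keys = ckpt.get("state_dict", ckpt).keys()
    #   model_keys = model.state_dict().keys()
    # Here, toy key sets stand in for both:
    missing, unexpected = diff_state_dicts(
        ["backbone.conv1.weight"],
        ["backbone.conv1.weight",
         "backbone.stages.2.blocks.6.attn.w_msa.qkv.weight"])
    print(missing)
```

If the missing keys all mention `w_msa` while the config file name says `r50`, the checkpoint was likely trained with a Swin backbone and the matching Swin config should be passed to `tools/deploy.py` instead.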
shiyongde commented 4 months ago

Looking into this...