open-mmlab / mmrazor

OpenMMLab Model Compression Toolbox and Benchmark.
https://mmrazor.readthedocs.io/en/latest/
Apache License 2.0

[Bug] use mmrazor/main/tools/visualizations/feature_diff_visualization.py error: inference_mot() missing 2 required positional arguments: 'frame_id' and 'video_len' #624

Open zhanghay opened 9 months ago

zhanghay commented 9 months ago

Describe the bug

A clear and concise description of what the bug is.

When using mmrazor/main/tools/visualizations/feature_diff_visualization.py to visualize faster-rcnn from mmdetection, the script complains that `frame_id` and `video_len` are required. The cause is that line 93 of the file refers to a generic 'inference' entry point, while mmdetection's API provides both 'inference_detector' and 'inference_mot'; the script apparently resolves to 'inference_mot' by default, which requires the extra `frame_id` and `video_len` arguments even when visualizing a single image.

The error:

Traceback (most recent call last):
  File "tools/visualizations/feature_diff_visualization.py", line 171, in <module>
    main(args)
  File "tools/visualizations/feature_diff_visualization.py", line 122, in main
    _ = inference_model(model1, args.img)
TypeError: inference_mot() missing 2 required positional arguments: 'frame_id' and 'video_len'
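The failure mode can be illustrated without mmdet installed: if a generic wrapper resolves the backend inference function by name and picks one with a different signature, a single-image call raises exactly this kind of TypeError. Below is a minimal sketch with hypothetical stand-ins (`inference_detector` and `inference_mot` here are plain Python mocks, not the real mmdet APIs):

```python
# Minimal reproduction of the dispatch problem, using mock functions.
# These are hypothetical stand-ins, NOT the real mmdet APIs.

def inference_detector(model, img):
    # Single-image detection: needs only the model and the image.
    return f"det({model}, {img})"

def inference_mot(model, img, frame_id, video_len):
    # Multi-object tracking: additionally needs frame_id and video_len.
    return f"mot({model}, {img}, {frame_id}/{video_len})"

def inference_model(model, img, task="mot"):
    # A wrapper that resolves the backend function by task name. If it
    # resolves to 'mot' for a plain detection config, the two-argument
    # call below fails with the TypeError from the traceback.
    fn = {"det": inference_detector, "mot": inference_mot}[task]
    return fn(model, img)  # only two positional args are forwarded

try:
    inference_model("faster-rcnn", "demo.jpg")  # wrong function resolved
except TypeError as e:
    print(e)  # inference_mot() missing 2 required positional arguments ...

print(inference_model("faster-rcnn", "demo.jpg", task="det"))  # correct dispatch
```

This mirrors the report: the wrapper forwards only `(model, img)`, so everything works when the detection function is resolved and fails when the tracking function is.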

To Reproduce

The command you executed.

python tools/visualizations/feature_diff_visualization.py \
    tools/visualizations/demo.jpg \
    /home/z/code/mmrazor/mmdetection/mmdet/.mim/configs/faster_rcnn/faster-rcnn_r101_fpn_2x_coco.py \
    /home/z/code/mmrazor/mmdetection/mmdet/.mim/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    tools/visualizations/vis_configs/fpn_feature_diff_visualization.py \
    faster_rcnn_r101_fpn_2x_coco_bbox_mAP-0.398_20200504_210455-1d2dac9c.pth \
    ./work_dirs/cwd_fpn_frcnn_r101_frcnn_r50_1x_coco/epoch_12.pth \
    --out-file ./output/1.jpg \
    --device cpu \
    --repo mmdet \
    --channel-reduction pixel_wise_max \
    --resize-shape 320 320

Post related information

  1. The output of pip list | grep "mmcv\|mmrazor\|^torch"

    mmcv                   2.0.1
    mmrazor                1.0.0        
    torch                  1.12.0+cu113
    torchvision            0.13.0+cu113
  2. Your config file if you modified it or created a new one.

  3. Your train log file if you meet the problem during training. no

  4. Other code you modified in the mmrazor folder. No, just `ln -s mmdetection ./mmrazor/mmdetection` (soft-linked the mmdetection library into mmrazor).

Additional context

Add any other context about the problem here.


XiaoSiJi-GCY commented 2 weeks ago

Same problem here. Have you solved it?

tori-hotaru commented 4 days ago

I also faced this problem in my program. As you said, the mmdet API provides two inference functions, 'inference_detector' and 'inference_mot', and the script uses the wrong one (inference_mot instead of inference_detector). So I directly changed feature_visualization.py from

```python
with recorder_manager:
    # test a single image
    result = inference_model(model, args.img)
```

into

```python
with recorder_manager:
    from mmdet.apis import inference_detector
    result = inference_detector(model, args.img)
```

It works. I hope this can help you. @XiaoSiJi-GCY @zhanghay
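A more defensive variant of the same idea is to forward only the arguments the resolved function actually declares, so both signatures work without hard-coding one of them. A sketch in plain Python (the two functions below are hypothetical mocks, not the real mmdet APIs):

```python
import inspect

# Hypothetical stand-ins for the two mmdet inference signatures.
def inference_detector(model, img):
    return "detector-result"

def inference_mot(model, img, frame_id, video_len):
    return f"mot-result-{frame_id}/{video_len}"

def call_with_supported_args(fn, model, img, **optional):
    """Forward only the keyword arguments that fn actually declares."""
    accepted = inspect.signature(fn).parameters
    kwargs = {k: v for k, v in optional.items() if k in accepted}
    return fn(model, img, **kwargs)

# Extra args are dropped for the detector and supplied to the tracker.
print(call_with_supported_args(inference_detector, "m", "demo.jpg",
                               frame_id=0, video_len=1))  # detector-result
print(call_with_supported_args(inference_mot, "m", "demo.jpg",
                               frame_id=0, video_len=1))  # mot-result-0/1
```

This is only a sketch of the dispatch pattern; the one-line import swap above is the simpler fix for the visualization script itself.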

XiaoSiJi-GCY commented 3 days ago


Thank you very much. I was able to run feature_visualization.py properly by following the solution you described.

Wishing you success in your research! @tori-hotaru