open-mmlab / mmtracking

OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
https://mmtracking.readthedocs.io/en/latest/
Apache License 2.0

KeyError 'detections' received when running Tracktor #542

Closed sparshgarg23 closed 2 years ago

sparshgarg23 commented 2 years ago

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.

**Describe the bug**
KeyError 'detections' received when testing Tracktor on the demo video.

**Reproduction**

  1. What command or script did you run? First, train the detection model on MOT20 using the command:
    !python tools/train.py /content/mmtracking/configs/det/faster-rcnn_r50_fpn_8e_mot20-half.py 

    Then, once training is finished, change the checkpoint path in tracktor_faster-rcnn_r50_fpn_8e_mot20-public-half.py (one way to do this is sketched after this list) and execute the following test script:

    
    # run mot demo
    import mmcv
    import tempfile
    from mmtrack.apis import inference_mot, init_model
    mot_config = '/content/mmtracking/configs/mot/tracktor/tracktor_faster-rcnn_r50_fpn_8e_mot20-public-half.py'
    input_video = './demo/demo.mp4'
    imgs = mmcv.VideoReader(input_video)
    # build the model from a config file
    mot_model = init_model(mot_config, device='cuda:0')
    prog_bar = mmcv.ProgressBar(len(imgs))
    out_dir = tempfile.TemporaryDirectory()
    out_path = out_dir.name
    # test and show/save the images
    for i, img in enumerate(imgs):
        result = inference_mot(mot_model, img, frame_id=i)
        mot_model.show_result(
            img,
            result,
            show=False,
            wait_time=int(1000. / imgs.fps),
            out_file=f'{out_path}/{i:06d}.jpg')
        prog_bar.update()

    output = './demo/mot.mp4'
    print(f'\n making the output video at {output} with a FPS of {imgs.fps}')
    mmcv.frames2video(out_path, output, fps=imgs.fps, fourcc='mp4v')
    out_dir.cleanup()

2. Did you make any modifications on the code or config? Did you understand what you have modified?
Yes. I trained Tracktor on MOT20: I first trained Faster R-CNN on mot20-half for 8 epochs. The changes included modifying train.py and the config files.
3. What dataset did you use and what task did you run?
MOT20
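As referenced in step 1 above, here is a minimal sketch of one way to point the Tracktor config at the newly trained detector checkpoint without editing the config file by hand. The attribute path `model.detector.init_cfg` is an assumption inferred from the `init_cfg` printed in the error traceback below; the file paths are the ones used in this issue, and the output file name is hypothetical.

```python
# Sketch only: assumes the detector checkpoint of the Tracktor config is set
# via model.detector.init_cfg (this matches the init_cfg printed in the log
# below, but verify it against your own config before relying on it).
from mmcv import Config

cfg = Config.fromfile(
    '/content/mmtracking/configs/mot/tracktor/'
    'tracktor_faster-rcnn_r50_fpn_8e_mot20-public-half.py')
cfg.model.detector.init_cfg = dict(
    type='Pretrained',
    checkpoint='/content/mmtracking/work_dirs/'
               'faster-rcnn_r50_fpn_8e_mot20-half/epoch_8.pth')
cfg.dump('tracktor_mot20_my_detector.py')  # hypothetical output file name
```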
**Environment**

1. Please run `python mmtrack/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
    - How you installed PyTorch [e.g., pip, conda, source]
    - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
Environment

/content/mmtracking/mmtrack/core/utils/misc.py:25: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  f'Setting OMP_NUM_THREADS environment variable for each process '
/content/mmtracking/mmtrack/core/utils/misc.py:35: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  f'Setting MKL_NUM_THREADS environment variable for each process '
2022-05-03 05:37:48,339 - mmtrack - INFO - Environment info:

sys.platform: linux
Python: 3.7.13 (default, Apr 24 2022, 01:04:09) [GCC 7.5.0]
CUDA available: True
GPU 0: Tesla P100-PCIE-16GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.1, V11.1.105
GCC: x86_64-linux-gnu-gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.11.0+cu113
PyTorch compiling details: PyTorch built with:

TorchVision: 0.12.0+cu113
OpenCV: 4.1.2
MMCV: 1.5.0
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.1
MMTracking: 0.13.0+88f92dd

**Error traceback**
If applicable, paste the error traceback here.

2022-05-03 13:38:53,432 - mmtrack - INFO - initialize FasterRCNN with init_cfg {'type': 'Pretrained', 'checkpoint': '/content/mmtracking/work_dirs/faster-rcnn_r50_fpn_8e_mot20-half/epoch_8.pth'}
2022-05-03 13:38:53,434 - mmcv - INFO - load model from: /content/mmtracking/work_dirs/faster-rcnn_r50_fpn_8e_mot20-half/epoch_8.pth
2022-05-03 13:38:53,435 - mmcv - INFO - load checkpoint from local path: /content/mmtracking/work_dirs/faster-rcnn_r50_fpn_8e_mot20-half/epoch_8.pth
2022-05-03 13:38:53,785 - mmtrack - INFO - initialize BaseReID with init_cfg {'type': 'Pretrained', 'checkpoint': 'https://download.openmmlab.com/mmtracking/mot/reid/reid_r50_6e_mot20_20210803_212426-c83b1c01.pth'}
2022-05-03 13:38:53,789 - mmcv - INFO - load model from: https://download.openmmlab.com/mmtracking/mot/reid/reid_r50_6e_mot20_20210803_212426-c83b1c01.pth
2022-05-03 13:38:53,792 - mmcv - INFO - load checkpoint from http path: https://download.openmmlab.com/mmtracking/mot/reid/reid_r50_6e_mot20_20210803_212426-c83b1c01.pth
Warning: The model doesn't have classes
[                              ] 0/8, elapsed: 0s, ETA:

KeyError                                  Traceback (most recent call last)
in ()
     13 # test and show/save the images
     14 for i, img in enumerate(imgs):
---> 15     result = inference_mot(mot_model, img, frame_id=i)
     16     mot_model.show_result(
     17         img,

2 frames
/content/mmtracking/mmtrack/datasets/pipelines/loading.py in __call__(self, results)
     99
    100     def __call__(self, results):
--> 101         outs_det = results2outs(bbox_results=results['detections'])
    102         bboxes = outs_det['bboxes']
    103         labels = outs_det['labels']

KeyError: 'detections'
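The traceback points at the detection-loading step of the test pipeline in `mmtrack/datasets/pipelines/loading.py`, which reads pre-computed public detections from `results['detections']`. The `-public-half` Tracktor config is built around such public detections, and `inference_mot` on a raw video never supplies that key, which would explain the `KeyError`. A quick check, sketched below under the assumption that the test pipeline lives at `cfg.data.test.pipeline` (the usual mmtracking 0.x layout), is to print the pipeline steps; if a detection-loading step is listed, switching to the corresponding `-private` Tracktor config (where the trained detector runs on every frame), if your mmtracking version ships one, should avoid the error for raw-video demos.

```python
# Sketch: inspect the test pipeline of the public-detection config.
# Assumes the usual mmtracking 0.x layout (cfg.data.test.pipeline); the exact
# step names may differ between versions.
from mmcv import Config

cfg = Config.fromfile(
    '/content/mmtracking/configs/mot/tracktor/'
    'tracktor_faster-rcnn_r50_fpn_8e_mot20-public-half.py')
print([step['type'] for step in cfg.data.test.pipeline])
# A detection-loading step here means the pipeline expects
# results['detections'] to be filled from the dataset's public detection
# files, which inference_mot on a plain video never provides.
```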


**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
No, I haven't figured out a resolution yet.
Any ideas why this is happening? When I run test.py on Tracktor with my own detection model, the results seem fine, as shown below:

Evaluating 1 tracker(s) on 4 sequence(s) for 1 class(es) on MotChallenge2DBox dataset using the following metrics: HOTA, Count

Evaluating track

1 eval_sequence(MOT20-01, track)     0.4318 sec
2 eval_sequence(MOT20-02, track)     4.3789 sec
3 eval_sequence(MOT20-03, track)     8.8665 sec
4 eval_sequence(MOT20-05, track)    14.7021 sec

All sequences for track finished in 28.38 seconds

HOTA: track-pedestrian  HOTA    DetA    AssA    DetRe   DetPr   AssRe   AssPr   LocA    RHOTA   HOTA(0)  LocA(0)  HOTALocA(0)
MOT20-01                56.012  52.744  59.658  54.667  84.361  63.836  79.329  85.402  57.094  67.043   82.843   55.54
MOT20-02                42.266  48.732  36.937  50.817  82.331  40.644  72.46   84.362  43.265  51.07    81.173   41.455
MOT20-03                59.641  61.751  57.676  64.46   81.912  61.172  79.379  83.479  60.967  74.253   80.664   59.895
MOT20-05                52.799  57.758  48.393  59.857  83.721  50.865  80.449  84.564  53.815  63.809   82.116   52.397
COMBINED                53.767  57.57   50.341  59.863  82.933  53.341  79.157  84.184  54.889  65.682   81.51    53.538

Count: track-pedestrian Dets GT_Dets IDs GT_IDs
MOT20-01 7005 10810 72 70
MOT20-02 56695 91855 322 174
MOT20-03 152204 193410 599 517
MOT20-05 228115 319062 1079 657
COMBINED 444019 615137 2072 1418
          IDF1   IDP    IDR    Rcll   Prcn   GT    MT   PT   ML   FP    FN      IDs   FM    MOTA   MOTP   IDt  IDa  IDm  HOTA
MOT20-05  63.7%  76.4%  54.6%  71.2%  99.5%  657   266  290  101  1083  91965   738   1178  70.6%  0.174  254  533  49   0.527989
MOT20-02  51.0%  66.7%  41.3%  61.0%  98.5%  174   54   104  16   836   35840   361   499   59.7%  0.177  208  204  51   0.422656
MOT20-01  69.0%  87.7%  56.8%  64.3%  99.2%  70    30   28   12   56    3861    41    59    63.4%  0.164  35   23   17   0.560116
MOT20-03  75.8%  85.9%  67.7%  77.9%  98.9%  517   262  179  76   1701  42661   275   509   76.9%  0.188  128  175  28   0.596410
OVERALL   65.9%  78.6%  56.8%  71.7%  99.2%  1418  612  601  205  3676  174327  1415  2245  70.8%  0.179  625  935  145  0.537670

{'IDF1': 0.659, 'IDP': 0.786, 'IDR': 0.568, 'Rcll': 0.717, 'Prcn': 0.992, 'GT': 1418, 'MT': 612, 'PT': 601, 'ML': 205, 'FP': 3676, 'FN': 174327, 'IDs': 1415, 'FM': 2245, 'MOTA': 0.708, 'MOTP': 0.179, 'IDt': 625, 'IDa': 935, 'IDm': 145, 'HOTA': 0.538}

dyhBUPT commented 2 years ago

Maybe you can try running the script demo/demo_mot_vis.py first to verify your config file. You can also refer to the example for a detailed description of the MOT config.
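For reference, a typical invocation looks roughly like the following; the positional config argument and the `--input`/`--output` flags are assumptions based on the usual mmtracking demo scripts, so confirm them with `python demo/demo_mot_vis.py --help` for your version.

```shell
# Flag names are assumptions; confirm with `python demo/demo_mot_vis.py --help`.
python demo/demo_mot_vis.py ${CONFIG_FILE} \
    --input demo/demo.mp4 \
    --output mot.mp4
```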

sparshgarg23 commented 2 years ago

On running demo_mot_vis.py, I was able to reproduce the result and obtain output similar to the demo. However, when I pass --show and --backend plt, I end up receiving the error

Image size of 576000x324000 pixels is too large. It must be less than 2^16 in each direction.

Any ideas on how to fix this?
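For what it's worth, the reported figure size looks like the frame size scaled by a constant factor: assuming `demo.mp4` frames are 1920x1080, the arithmetic below gives a dpi-like factor of 300, which overflows matplotlib's 2^16-pixel-per-axis limit when the `plt` backend renders the frame. This is an observation, not a confirmed diagnosis; if it holds, trying `--backend cv2` (if your version of the script accepts it) or dropping `--show` may sidestep the limit, though as suggested below it is best tracked in a separate issue.

```python
# Assumption: demo.mp4 frames are 1920x1080. The reported figure is then
# exactly 300x the frame size in each direction, well past matplotlib's
# 2**16 (= 65536) pixel limit per axis.
print(576000 / 1920, 324000 / 1080)    # 300.0 300.0
print(576000 < 2**16, 324000 < 2**16)  # False False
```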

dyhBUPT commented 2 years ago

Glad to hear that you have solved the previous error.

If the problem of "Image size is too large" persists, please consider creating a new issue and following the error report template to describe the problem.