open-mmlab / mmtracking

OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), and Video Instance Segmentation (VIS) with a unified framework.
https://mmtracking.readthedocs.io/en/latest/
Apache License 2.0

Recall less than 0.001 by using ground truth as detection #595


NikoRohr commented 2 years ago

Hi, I just want to run SORT on a custom dataset, using the ground truth data as public detection results. So I created a train_det.pkl file filled with the ground truth boxes, without the track_id and with confidence 1.0 (a sketch of how such a file can be generated is at the end of this comment, after the config). When I run test.py with

python3 ../mmtracking/tools/test.py configs/penguin_sort_gt_public.py --work-dir exps/penguins --eval track --out exps/penguins/result.pkl --show-dir exps/penguins

no errors occur, but the evaluation results are:

{"config": "configs/penguin_sort_gt_public.py", "mode": "test", "epoch": 1, "IDF1": 0.001, "IDP": 0.414, "IDR": 0.0, "Rcll": 0.001, "Prcn": 1.0, "GT": 23, "MT": 0, "PT": 0, "ML": 23, "FP": 0, "FN": 487570, "IDs": 34, "FM": 34, "MOTA": 0.001, "MOTP": 0.36, "IDt": 0, "IDa": 34, "IDm": 0, "HOTA": 0.001}

I think that is strange, because there shouldn't be 487570 false negatives. I would really appreciate some help fixing this problem. Thanks in advance, and for completeness here is my config:

model = dict(
    detector=dict(
        type='FasterRCNN',
        backbone=dict(
            type='ResNet',
            depth=50,
            num_stages=4,
            out_indices=(0, 1, 2, 3),
            frozen_stages=1,
            norm_cfg=dict(type='BN', requires_grad=True),
            norm_eval=True,
            style='pytorch',
            init_cfg=dict(
                type='Pretrained', checkpoint='torchvision://resnet50')),
        neck=dict(
            type='FPN',
            in_channels=[256, 512, 1024, 2048],
            out_channels=256,
            num_outs=5),
        rpn_head=dict(
            type='RPNHead',
            in_channels=256,
            feat_channels=256,
            anchor_generator=dict(
                type='AnchorGenerator',
                scales=[8],
                ratios=[0.5, 1.0, 2.0],
                strides=[4, 8, 16, 32, 64]),
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0.0, 0.0, 0.0, 0.0],
                target_stds=[1.0, 1.0, 1.0, 1.0],
                clip_border=False),
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
            loss_bbox=dict(
                type='SmoothL1Loss', beta=0.1111111111111111,
                loss_weight=1.0)),
        roi_head=dict(
            type='StandardRoIHead',
            bbox_roi_extractor=dict(
                type='SingleRoIExtractor',
                roi_layer=dict(
                    type='RoIAlign', output_size=7, sampling_ratio=0),
                out_channels=256,
                featmap_strides=[4, 8, 16, 32]),
            bbox_head=dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=1,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.1, 0.1, 0.2, 0.2],
                    clip_border=False),
                reg_class_agnostic=False,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', loss_weight=1.0))),
        train_cfg=dict(
            rpn=dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.3,
                    min_pos_iou=0.3,
                    match_low_quality=True,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=256,
                    pos_fraction=0.5,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=False),
                allowed_border=-1,
                pos_weight=-1,
                debug=False),
            rpn_proposal=dict(
                nms_pre=2000,
                max_per_img=1000,
                nms=dict(type='nms', iou_threshold=0.7),
                min_bbox_size=0),
            rcnn=dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False)),
        test_cfg=dict(
            rpn=dict(
                nms_pre=1000,
                max_per_img=1000,
                nms=dict(type='nms', iou_threshold=0.7),
                min_bbox_size=0),
            rcnn=dict(
                score_thr=0.05,
                nms=dict(type='nms', iou_threshold=0.5),
                max_per_img=100)),
        init_cfg=dict(
            type='Pretrained',
            checkpoint=
            'https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-ffa52ae7.pth'
        )),
    type='DeepSORT',
    motion=dict(type='KalmanFilter', center_only=False),
    tracker=dict(
        type='SortTracker',
        obj_score_thr=0.5,
        match_iou_thr=0.5,
        reid=None,
        num_tentatives=3,
        num_frames_retain=10))
dataset_type = 'MOTChallengeDataset'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadMultiImagesFromFile', to_float32=True),
    dict(type='SeqLoadAnnotations', with_bbox=True, with_track=True),
    dict(
        type='SeqResize',
        img_scale=(1088, 1088),
        share_params=True,
        ratio_range=(0.8, 1.2),
        keep_ratio=True,
        bbox_clip_border=False),
    dict(type='SeqPhotoMetricDistortion', share_params=True),
    dict(
        type='SeqRandomCrop',
        share_params=False,
        crop_size=(1088, 1088),
        bbox_clip_border=False),
    dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.5),
    dict(
        type='SeqNormalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='SeqPad', size_divisor=32),
    dict(type='MatchInstances', skip_nomatch=True),
    dict(
        type='VideoCollect',
        keys=[
            'img', 'gt_bboxes', 'gt_labels', 'gt_match_indices',
            'gt_instance_ids'
        ]),
    dict(type='SeqDefaultFormatBundle', ref_prefix='ref')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadDetections'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1088, 1088),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='VideoCollect', keys=['img', 'public_bboxes'])
        ])
]
data_root = '/home/sim/data/penguins/'
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='MOTChallengeDataset',
        visibility_thr=-1,
        ann_file='/home/sim/data/penguins/annotations/train_cocoformat.json',
        img_prefix='/home/sim/data/penguins/train',
        ref_img_sampler=dict(
            num_ref_imgs=1,
            frame_range=10,
            filter_key_img=True,
            method='uniform'),
        pipeline=[
            dict(type='LoadMultiImagesFromFile', to_float32=True),
            dict(type='SeqLoadAnnotations', with_bbox=True, with_track=True),
            dict(
                type='SeqResize',
                img_scale=(1088, 1088),
                share_params=True,
                ratio_range=(0.8, 1.2),
                keep_ratio=True,
                bbox_clip_border=False),
            dict(type='SeqPhotoMetricDistortion', share_params=True),
            dict(
                type='SeqRandomCrop',
                share_params=False,
                crop_size=(1088, 1088),
                bbox_clip_border=False),
            dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.5),
            dict(
                type='SeqNormalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='SeqPad', size_divisor=32),
            dict(type='MatchInstances', skip_nomatch=True),
            dict(
                type='VideoCollect',
                keys=[
                    'img', 'gt_bboxes', 'gt_labels', 'gt_match_indices',
                    'gt_instance_ids'
                ]),
            dict(type='SeqDefaultFormatBundle', ref_prefix='ref')
        ],
        detection_file='/home/sim/data/penguins/annotations/train_det.pkl'),
    val=dict(
        type='MOTChallengeDataset',
        ann_file='data/MOT17/annotations/train_cocoformat.json',
        img_prefix='data/MOT17/train',
        ref_img_sampler=None,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadDetections'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1088, 1088),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='VideoCollect', keys=['img', 'public_bboxes'])
                ])
        ],
        detection_file='data/MOT17/annotations/train_detections.pkl'),
    test=dict(
        type='MOTChallengeDataset',
        ann_file='/home/sim/data/penguins/annotations/train_cocoformat.json',
        img_prefix='/home/sim/data/penguins/train',
        ref_img_sampler=None,
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadDetections'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1088, 1088),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='VideoCollect', keys=['img', 'public_bboxes'])
                ])
        ],
        detection_file='/home/sim/data/penguins/annotations/train_det.pkl'))
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
opencv_num_threads = 0
mp_start_method = 'fork'
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=100,
    warmup_ratio=0.01,
    step=[3])
total_epochs = 1
evaluation = dict(metric=['bbox', 'track'], interval=1)
search_metrics = ['MOTA', 'IDF1', 'FN', 'FP', 'IDs', 'MT', 'ML']
test_set = 'train'
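
For illustration, a minimal sketch of how such a train_det.pkl can be generated from the COCO-format annotations. The per-image list-of-class-arrays layout is an assumption on my side; it matches the detections dump further down in this thread:

# sketch: convert COCO-format ground truth into a public-detection .pkl
# assumption: detection_file holds one list entry per image (in image id
# order), each a list of per-class (N, 5) arrays [x1, y1, x2, y2, score]
import pickle

import numpy as np
from pycocotools.coco import COCO

coco = COCO('/home/sim/data/penguins/annotations/train_cocoformat.json')
detections = []
for img_id in sorted(coco.getImgIds()):
    anns = coco.loadAnns(coco.getAnnIds(imgIds=[img_id]))
    boxes = []
    for ann in anns:
        x, y, w, h = ann['bbox']                 # COCO boxes are [x, y, w, h]
        boxes.append([x, y, x + w, y + h, 1.0])  # to [x1, y1, x2, y2, score]
    arr = np.array(boxes, dtype=np.float32).reshape(-1, 5)
    detections.append([arr])                     # one array for the single class

with open('/home/sim/data/penguins/annotations/train_det.pkl', 'wb') as f:
    pickle.dump(detections, f)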
JingweiZhang12 commented 2 years ago

Are you sure the ground truth boxes loaded from the file are in the right format? You can check them in the prepare_data function of CocoVideoDataset.

NikoRohr commented 2 years ago

> Are you sure the ground truth boxes loaded from the file are in the right format? You can check them in the prepare_data function of CocoVideoDataset.

It seems that everything is correctly loaded. I added:

# appended inside CocoVideoDataset.prepare_data to dump each sample
with open('prepare_data_results.txt', 'a+') as f:
    print(results, file=f)

The resulting file is 47 MB, so I extracted two samples:

{'img_info': {'id': 1, 'video_id': 1, 'file_name': 'EQ0A4452/img1/000001.jpg', 'height': 1080, 'width': 1920, 'frame_id': 0, 'mot_frame_id': 1, 'filename': 'EQ0A4452/img1/000001.jpg'}, 'img_prefix': '/home/sim/data/penguins/train', 'seg_prefix': None, 'proposal_file': None, 'bbox_fields': [], 'mask_fields': [], 'seg_fields': [], 'is_video_data': True, 'detections': [array([[1.125e+03, 7.500e+02, 1.175e+03, 7.950e+02, 1.000e+00],
       [4.930e+02, 6.800e+02, 5.240e+02, 6.960e+02, 1.000e+00],
       [1.687e+03, 7.660e+02, 1.735e+03, 8.120e+02, 1.000e+00],
       [6.080e+02, 6.750e+02, 6.400e+02, 7.260e+02, 1.000e+00],
       [9.090e+02, 6.610e+02, 9.380e+02, 7.100e+02, 1.000e+00],
       [5.460e+02, 6.850e+02, 5.800e+02, 7.190e+02, 1.000e+00],
       [2.740e+02, 7.440e+02, 3.400e+02, 7.860e+02, 1.000e+00],
       [5.710e+02, 6.800e+02, 6.070e+02, 7.090e+02, 1.000e+00],
       [3.960e+02, 6.760e+02, 4.380e+02, 7.470e+02, 1.000e+00],
       [3.990e+02, 6.740e+02, 4.290e+02, 7.070e+02, 1.000e+00],
       [4.680e+02, 6.640e+02, 4.920e+02, 6.950e+02, 1.000e+00],
       [3.730e+02, 6.710e+02, 3.930e+02, 6.990e+02, 1.000e+00],
       [7.710e+02, 6.890e+02, 8.080e+02, 7.180e+02, 1.000e+00],
       [3.800e+02, 6.580e+02, 3.950e+02, 6.820e+02, 1.000e+00],
       [4.770e+02, 7.090e+02, 5.190e+02, 7.820e+02, 1.000e+00],
       [3.850e+02, 6.740e+02, 4.040e+02, 7.000e+02, 1.000e+00]])]}

and

{'img_info': {'id': 12995, 'video_id': 1, 'file_name': 'EQ0A4452/img1/012995.jpg', 'height': 1080, 'width': 1920, 'frame_id': 12994, 'mot_frame_id': 12995, 'filename': 'EQ0A4452/img1/012995.jpg'}, 'img_prefix': '/home/sim/data/penguins/train', 'seg_prefix': None, 'proposal_file': None, 'bbox_fields': [], 'mask_fields': [], 'seg_fields': [], 'is_video_data': True, 'detections': [array([[7.020e+02, 6.830e+02, 7.420e+02, 7.440e+02, 1.000e+00],
       [2.670e+02, 7.330e+02, 3.410e+02, 7.860e+02, 1.000e+00],
       [4.680e+02, 6.580e+02, 4.950e+02, 6.950e+02, 1.000e+00],
       [2.940e+02, 7.060e+02, 3.370e+02, 7.560e+02, 1.000e+00],
       [7.690e+02, 6.920e+02, 8.150e+02, 7.200e+02, 1.000e+00],
       [1.233e+03, 7.370e+02, 1.300e+03, 8.230e+02, 1.000e+00],
       [4.380e+02, 6.660e+02, 4.700e+02, 7.030e+02, 1.000e+00],
       [3.890e+02, 6.830e+02, 4.100e+02, 7.030e+02, 1.000e+00],
       [3.880e+02, 6.600e+02, 4.120e+02, 6.930e+02, 1.000e+00],
       [7.240e+02, 1.026e+03, 8.370e+02, 1.080e+03, 0.000e+00],
       [3.820e+02, 6.610e+02, 3.930e+02, 6.830e+02, 1.000e+00],
       [1.670e+03, 7.580e+02, 1.731e+03, 8.120e+02, 1.000e+00],
       [5.250e+02, 6.690e+02, 5.580e+02, 7.110e+02, 1.000e+00],
       [6.730e+02, 6.790e+02, 7.060e+02, 7.210e+02, 1.000e+00],
       [3.670e+02, 6.700e+02, 3.950e+02, 7.020e+02, 1.000e+00],
       [1.650e+02, 6.750e+02, 2.080e+02, 7.170e+02, 1.000e+00],
       [8.990e+02, 6.530e+02, 9.410e+02, 7.130e+02, 1.000e+00],
       [4.760e+02, 7.060e+02, 5.190e+02, 7.780e+02, 1.000e+00],
       [3.870e+02, 6.730e+02, 4.360e+02, 7.530e+02, 1.000e+00],
       [4.930e+02, 6.790e+02, 5.230e+02, 6.950e+02, 1.000e+00],
       [5.620e+02, 6.930e+02, 5.850e+02, 7.140e+02, 0.000e+00]])]}

Is it possible that I have to change test.pipeline.img_scale to my actual video size (1080, 1920)?

JingweiZhang12 commented 2 years ago

The img_scale has little influence on the performance. I guess it's the fault of CLASSES, which you forgot to change to the actual class you want to evaluate: https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/mot_challenge_dataset.py#L504
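
If MOTChallengeDataset accepts the classes argument inherited from mmdet's CocoDataset (an assumption; please verify against your mmtracking version), you can also override the class in the config instead of editing the source:

# sketch: set the evaluated class per dataset in the config
# (assumes MOTChallengeDataset forwards `classes` to the CocoDataset base)
data = dict(
    test=dict(
        type='MOTChallengeDataset',
        classes=('penguin', ),  # hypothetical class name for this dataset
        ann_file='/home/sim/data/penguins/annotations/train_cocoformat.json',
        img_prefix='/home/sim/data/penguins/train',
        detection_file='/home/sim/data/penguins/annotations/train_det.pkl'))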

NikoRohr commented 2 years ago

I changed it in mot_challenge_dataset.py but now I get the error:

Traceback (most recent call last):
  File "../mmtracking/tools/test.py", line 225, in <module>
    main()
  File "../mmtracking/tools/test.py", line 215, in main
    metric = dataset.evaluate(outputs, **eval_kwargs)
  File "/home/sim/mmtracking/mmtrack/datasets/mot_challenge_dataset.py", line 459, in evaluate
    dataset = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
  File "/home/sim/env_mmtracking/lib/python3.8/site-packages/trackeval/datasets/mot_challenge_2d_box.py", line 75, in __init__
    raise TrackEvalException('Attempted to evaluate an invalid class. Only pedestrian class is valid.')
trackeval.utils.TrackEvalException: Attempted to evaluate an invalid class. Only pedestrian class is valid.

How do I change the class to evaluate?

JingweiZhang12 commented 2 years ago

From the path in your traceback, /home/sim/env_mmtracking/lib/python3.8/site-packages/trackeval/datasets/mot_challenge_2d_box.py, it looks like you did not install mmtracking in editable mode. Did you reinstall mmtracking after your modification? I suggest installing mmtracking in editable mode if you want to modify the code.
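
For reference, an editable install looks roughly like this, run from the root of a cloned mmtracking checkout:

pip uninstall mmtrack  # remove the copy installed into site-packages
pip install -v -e .    # editable install; local edits take effect without reinstalling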

NikoRohr commented 2 years ago

> The img_scale has little influence on the performance. I guess it's the fault of CLASSES, which you forgot to change to the actual class you want to evaluate: https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/mot_challenge_dataset.py#L504

I don't think that is the problem. I changed my dataset to have just the class pedestrian, so I don't have to modify any code in mmtracking or trackeval. Still, the recall is the same as before. Note that it is not exactly zero: there are 399 true positives, and I really don't know why just 399. I also did a parameter search with:

model = dict(
    detector=dict(
        ...  # not important because public detections are used
        ),
    type='DeepSORT',
    motion=dict(type='KalmanFilter', center_only=False),
    tracker=dict(
        type='SortTracker',
        obj_score_thr=0.5,
        match_iou_thr=0.5,
        reid=None,
        num_tentatives=[1, 100],  # lists are the values tried in the search
        num_frames_retain=[10, 1000, 10000]))

Still the results are the same for each run.

I believe there must be a mistake in the SORT config, because in the resulting CSV file there are just 5057 non-empty entries for det_bboxes and 399 for track_bboxes.
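
A quick way to count those entries directly from the result file (a sketch; it assumes the result.pkl written by --out is a dict with 'det_bboxes' and 'track_bboxes', each a list over frames of per-class arrays):

# sketch: count non-empty per-class entries in result.pkl
import pickle

with open('exps/penguins/result.pkl', 'rb') as f:
    results = pickle.load(f)

for key in ('det_bboxes', 'track_bboxes'):
    per_frame = results[key]
    non_empty = sum(1 for frame in per_frame for arr in frame if len(arr) > 0)
    total = sum(len(arr) for frame in per_frame for arr in frame)
    print(f'{key}: {non_empty} non-empty entries, {total} boxes in total')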

NikoRohr commented 2 years ago

Hey, I am still working on this issue. I tried another SORT implementation with the exact same detection files and got reasonable results. So it has to be something with the config file or the implementation itself.

I would really appreciate it if someone could explain what causes this behavior.

NikoRohr commented 2 years ago

Are there any thresholds on the aspect ratio, width, or height of the bboxes that are considered? My bboxes are small, at most about 250 px wide and 130 px high, which is much smaller than in the MOT17 dataset. Maybe this can explain the behavior?
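
For context, this is how I look at the box sizes (a sketch reading the COCO-format annotation file from the config above):

# sketch: width/height statistics of the ground-truth boxes
import json

import numpy as np

with open('/home/sim/data/penguins/annotations/train_cocoformat.json') as f:
    anns = json.load(f)['annotations']

wh = np.array([ann['bbox'][2:4] for ann in anns])  # COCO bbox is [x, y, w, h]
print('width  min/median/max:', wh[:, 0].min(), np.median(wh[:, 0]), wh[:, 0].max())
print('height min/median/max:', wh[:, 1].min(), np.median(wh[:, 1]), wh[:, 1].max())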