GeekAlexis / FastMOT

High-performance multiple object tracking based on YOLO, Deep SORT, and KLT 🚀

Incorrect Bounding Boxes and change of vehicle ids when using custom weights #265

Closed SpartacusVipul closed 10 months ago

SpartacusVipul commented 10 months ago

I was able to build FastMOT and run it with the CrowdHuman weights. When I switched to my custom weights, which were trained with Darknet on YOLOv4, I ran into the issues below:

  1. The bounding boxes on the vehicles are bigger than expected
  2. In the freeway section, the same vehicle is being marked with different ids
  3. In the ramp section, the same vehicle has multiple bounding boxes

https://github.com/GeekAlexis/FastMOT/assets/14127151/2d61c858-f751-4104-937e-10de1ad3d58a

Please find below my mot_custom.json file, cfg file, and YOLO class:

{
"resize_to": [1280, 720],

"stream_cfg": {
    "resolution": [1920, 1080],
    "frame_rate": 30,
    "buffer_size": 10
},

"mot_cfg": {
    "detector_type": "YOLO",
    "detector_frame_skip": 10,
    "class_ids": [0, 1],

    "ssd_detector_cfg": {
        "model": "SSDInceptionV2",
        "tile_overlap": 0.25,
        "tiling_grid": [4, 2],
        "conf_thresh": 0.5,
        "merge_thresh": 0.6,
        "max_area": 120000
    },
    "yolo_detector_cfg": {
        "model": "YOLOv4Custom",
        "conf_thresh": 0.25,
        "nms_thresh": 0.5,
        "max_area": 800000,
        "min_aspect_ratio": 1.2
    },
    "public_detector_cfg": {
        "sequence_path": "MOT20/train/MOT20-01",
        "conf_thresh": 0.5,
        "max_area": 800000
    },

    "feature_extractor_cfgs": [
        {
            "model": "OSNet025",
            "batch_size": 16
        },
        {
            "model": "OSNet025",
            "batch_size": 16
        }
    ],

    "tracker_cfg": {
        "max_age": 6,
        "age_penalty": 2,
        "motion_weight": 0.2,
        "max_assoc_cost": 0.8,
        "max_reid_cost": 0.6,
        "iou_thresh": 0.4,
        "duplicate_thresh": 0.8,
        "occlusion_thresh": 0.7,
        "conf_thresh": 0.5,
        "confirm_hits": 1,
        "history_size": 50,

        "kalman_filter_cfg": {
            "std_factor_acc": 2.25,
            "std_offset_acc": 78.5,
            "std_factor_det": [0.08, 0.08],
            "std_factor_klt": [0.14, 0.14],
            "min_std_det": [4.0, 4.0],
            "min_std_klt": [5.0, 5.0],
            "init_pos_weight": 5,
            "init_vel_weight": 12,
            "vel_coupling": 0.6,
            "vel_half_life": 2
        },

        "flow_cfg": {
            "bg_feat_scale_factor": [0.1, 0.1],
            "opt_flow_scale_factor": [0.5, 0.5],
            "feat_density": 0.005,
            "feat_dist_factor": 0.06,
            "ransac_max_iter": 500,
            "ransac_conf": 0.99,
            "max_error": 100,
            "inlier_thresh": 4,
            "bg_feat_thresh": 10,
            "obj_feat_params": {
                "maxCorners": 1000,
                "qualityLevel": 0.06,
                "blockSize": 3
            },
            "opt_flow_params": {
                "winSize": [5, 5],
                "maxLevel": 5,
                "criteria": [3, 10, 0.03]
            }
        }
    },

    "visualizer_cfg": {
        "draw_detections": false,
        "draw_confidence": false,
        "draw_covariance": false,
        "draw_klt": false,
        "draw_obj_flow": false,
        "draw_bg_flow": false,
        "draw_trajectory": false

    }
}

}
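
Not part of my original config, but a quick structural sanity check I would run on it (an illustrative sketch only; the path 'cfg/mot_custom.json' is a placeholder, and the one-extractor-per-class expectation is my assumption about how multi-class ReID is configured):

import json
from pathlib import Path

# Hypothetical path; point this at wherever mot_custom.json actually lives.
cfg = json.loads(Path('cfg/mot_custom.json').read_text())

mot_cfg = cfg['mot_cfg']
class_ids = mot_cfg['class_ids']
fe_cfgs = mot_cfg['feature_extractor_cfgs']

# Assumption: one feature_extractor_cfgs entry is expected per tracked class.
assert len(fe_cfgs) == len(class_ids), (
    f'{len(class_ids)} class_ids but {len(fe_cfgs)} feature_extractor_cfgs')

# resize_to should not exceed the incoming stream resolution.
rw, rh = cfg['resize_to']
sw, sh = cfg['stream_cfg']['resolution']
assert rw <= sw and rh <= sh, 'resize_to exceeds stream resolution'
print('config looks structurally consistent')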

[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=64
subdivisions=64
width=640
height=640
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.00065
burn_in=2000
max_batches = 6000
policy=steps
steps=4800,5400
scales=.1,.1

cutmix=1
mosaic=1

# :104x104 54:52x52 85:26x26 104:13x13 for 416

[convolutional] batch_normalize=1 filters=32 size=3 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=64 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=32 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-7

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=128 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-10

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=256 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-28

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=512 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-28

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

# Downsample

[convolutional] batch_normalize=1 filters=1024 size=3 stride=2 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[route] layers = -2

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish

[shortcut] from=-3 activation=linear

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish

[route] layers = -1,-16

[convolutional] batch_normalize=1 filters=1024 size=1 stride=1 pad=1 activation=mish stopbackward=800

##########################

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

### SPP ###

[maxpool] stride=1 size=5

[route] layers=-2

[maxpool] stride=1 size=9

[route] layers=-4

[maxpool] stride=1 size=13

[route] layers=-1,-3,-5,-6

### End SPP ###

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[upsample] stride=2

[route] layers = 85

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[route] layers = -1, -3

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[upsample] stride=2

[route] layers = 54

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[route] layers = -1, -3

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky

[convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky

##########################

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky

[convolutional] size=1 stride=1 pad=1 filters=21 activation=linear

[yolo] mask = 0,1,2 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=2 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.2 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5

[route] layers = -4

[convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=256 activation=leaky

[route] layers = -1, -16

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky

[convolutional] size=1 stride=1 pad=1 filters=21 activation=linear

[yolo] mask = 3,4,5 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=2 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.1 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5

[route] layers = -4

[convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=512 activation=leaky

[route] layers = -1, -37

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky

[convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky

[convolutional] size=1 stride=1 pad=1 filters=21 activation=linear

[yolo] mask = 6,7,8 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=2 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 random=1 scale_x_y = 1.05 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5
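
One note I'd add for anyone adapting this cfg (not from my original post): the [convolutional] layer directly before each [yolo] layer must have filters = (classes + 5) * masks_per_head. With classes=2 and 3 anchor masks per head that gives 21, which matches the filters=21 used above. A minimal check of the arithmetic:

def yolo_head_filters(num_classes: int, masks_per_head: int = 3) -> int:
    # 5 = 4 box offsets (x, y, w, h) + 1 objectness score
    return (num_classes + 5) * masks_per_head

assert yolo_head_filters(2) == 21  # matches filters=21 before each [yolo] layer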

class YOLOv4Custom(YOLO):
    ENGINE_PATH = Path(__file__).parent / 'yolov4-1256-640_radish_5000.trt'
    MODEL_PATH = Path(__file__).parent / 'yolov4-1256-640_radish_5000.onnx'
    NUM_CLASSES = 2
    INPUT_SHAPE = (3, 640, 640)
    LAYER_FACTORS = [8, 16, 32]
    SCALES = [1.2, 1.1, 1.05]
    ANCHORS = [[12,16, 19,36, 40,28],
               [36,75, 76,55, 72,146],
               [142,110, 192,243, 459,401]]
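
Since oversized boxes often come from a mismatch between the cfg and the custom YOLO class (anchors, number of classes, input shape), a consistency check against the actual .cfg file can help. This is just an illustrative sketch, not FastMOT code; CFG_PATH and the CLASS_* constants are placeholders mirroring the values above:

import re
from pathlib import Path

# Hypothetical filename; substitute your own cfg and the class attributes you use.
CFG_PATH = Path('yolov4-custom.cfg')
CLASS_ANCHORS = [12, 16, 19, 36, 40, 28, 36, 75, 76, 55,
                 72, 146, 142, 110, 192, 243, 459, 401]
CLASS_NUM_CLASSES = 2
CLASS_INPUT_SHAPE = (3, 640, 640)  # (channels, height, width)

text = CFG_PATH.read_text()

# Pull width/height from [net] and anchors/classes from the [yolo] sections.
width = int(re.search(r'^width\s*=\s*(\d+)', text, re.M).group(1))
height = int(re.search(r'^height\s*=\s*(\d+)', text, re.M).group(1))
anchors = [int(x) for x in
           re.search(r'anchors\s*=\s*([\d,\s]+)', text).group(1).split(',')]
classes = {int(m) for m in re.findall(r'^classes\s*=\s*(\d+)', text, re.M)}

assert (height, width) == CLASS_INPUT_SHAPE[1:], 'INPUT_SHAPE does not match cfg width/height'
assert anchors == CLASS_ANCHORS, 'ANCHORS do not match cfg anchors'
assert classes == {CLASS_NUM_CLASSES}, 'NUM_CLASSES does not match cfg classes'
print('class attributes agree with the cfg')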

SpartacusVipul commented 10 months ago

If anyone else is having a similar issue, you need to change the parameters in mot.json; follow issues #91 #181 #76
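
For anyone landing here later, my own non-authoritative summary of what those issues discuss: detecting more frequently and letting tracks survive longer tends to reduce ID switches on fast-moving vehicles. Purely illustrative values, not a confirmed fix:

# Illustrative overrides only; the right values depend on frame rate,
# vehicle speed, and how far objects move between detector runs.
example_overrides = {
    'mot_cfg': {
        'detector_frame_skip': 3,   # run YOLO more often so KLT drift stays small
        'tracker_cfg': {
            'max_age': 15,          # keep lost tracks alive longer across misses
            'confirm_hits': 2,      # require consecutive hits before confirming a track
        },
    },
}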