zhaoweicai / cascade-rcnn

Caffe implementation of multiple popular object detection frameworks

don't remove redundant high IOU boxes in DecodeBBox operator #27

Open wenhe-jia opened 6 years ago

wenhe-jia commented 6 years ago

I tried to port Cascade R-CNN to Detectron (using COCO 2017 train+val as the training set). I did not remove redundant high-IoU boxes in the DecodeBBox operator, and the results were not as expected. Details are below:

[screenshot of results, 2018-06-22]

I wonder whether skipping this step hurts detection performance. Can you give me any advice? @zhaoweicai
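For reference, a minimal NumPy sketch of what "removing redundant high-IoU boxes" could look like, assuming a simple pairwise-IoU deduplication of the decoded proposals; the function names and the 0.95 threshold are hypothetical, not taken from the cascade-rcnn code:

import numpy as np

def box_iou(box, boxes):
    """IoU between one box (4,) and an array of boxes (N, 4), xyxy format."""
    ix1 = np.maximum(box[0], boxes[:, 0])
    iy1 = np.maximum(box[1], boxes[:, 1])
    ix2 = np.minimum(box[2], boxes[:, 2])
    iy2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def drop_redundant_boxes(boxes, iou_thresh=0.95):
    """Keep each decoded box only if it does not overlap an
    already-kept box with IoU above iou_thresh."""
    keep = []
    for i in range(len(boxes)):
        if not keep or box_iou(boxes[i], boxes[keep]).max() < iou_thresh:
            keep.append(i)
    return boxes[keep]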

GuoxingYan commented 6 years ago

Which version are you reproducing? The author uses quite a few small tricks.

wenhe-jia commented 6 years ago

@GuoxingYan I just added DecodeBBoxOp and DistributeFpnRpnProposalsOp to Detectron to train Mask R-CNN. The results shown above are not reliable, because many details were not implemented, such as cls_agnostic_bbox_reg and the lr_mult settings. I am training a new model now, and I have already screened out high-IoU bboxes in DecodeBBoxOp.
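(For readers unfamiliar with cls_agnostic_bbox_reg: it only changes the width of the bbox regression output, which matters here because the Cascade R-CNN stages use class-agnostic regressors. A minimal shape illustration, assuming COCO's 80 classes plus background:)

# Minimal illustration of what cls_agnostic_bbox_reg changes (shapes only).
# Assumption: COCO, 80 object classes + 1 background class.
num_classes = 81
cls_agnostic_bbox_reg = True  # one shared box per RoI instead of one per class
num_bbox_reg_classes = 2 if cls_agnostic_bbox_reg else num_classes
print('bbox_pred width:', num_bbox_reg_classes * 4)  # 8 instead of 324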

zhaoweicai commented 6 years ago

Removing the high-IoU boxes or not should not make a significant difference. We have a Detectron version of Cascade R-CNN; its results are very consistent, even on very strong baselines. The Detectron-Cascade-RCNN code will be released later.

wenhe-jia commented 6 years ago

Thank you very much! Looking forward to your release @zhaoweicai. I have added weighted losses and per-stage learning rates for the different RCNN stages, but compared to the open-source Detectron baseline I still get only a 1.8% improvement from the single stage-3 RCNN head on the detection task, not the expected 3% or more. I train on coco_2017_train with 4 GPUs and 2 images per GPU for 180000 iterations; the learning rate starts at 0.01 and is decayed to 0.001 at 120000 iterations and to 0.0001 at 160000 iterations (sketched below). My code follows.
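To make that schedule concrete, a tiny sketch of the step decay just described (my paraphrase of the description above; Detectron implements this as the steps_with_decay LR policy):

def lr_at_iter(it, base_lr=0.01, gamma=0.1, steps=(0, 120000, 160000)):
    """Step-decay schedule described above: 0.01 until iter 120k,
    0.001 until iter 160k, then 0.0001 until iter 180k."""
    num_decays = sum(1 for s in steps if it >= s) - 1
    return base_lr * gamma ** num_decays

print(lr_at_iter(0), lr_at_iter(130000), lr_at_iter(170000))
# -> 0.01, 0.001, 0.0001 (up to float rounding)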

weighted RCNN loss:

def add_cascade_fast_rcnn_losses(model, stage_num):
    """Add losses for RoI classification and bounding box regression
    for the given cascade stage (1, 2, or 3)."""
    suffix = {1: '_1st', 2: '_2nd', 3: '_3rd'}[stage_num]
    # Per-stage loss weight, applied to both the cls and bbox losses
    loss_weight = {
        1: cfg.CASCADERCNN.WEIGHT_LOSS_BBOX_STAGE1,  # 1.0
        2: cfg.CASCADERCNN.WEIGHT_LOSS_BBOX_STAGE2,  # 0.5
        3: cfg.CASCADERCNN.WEIGHT_LOSS_BBOX_STAGE3,  # 0.25
    }[stage_num]
    cls_prob, loss_cls = model.net.SoftmaxWithLoss(
        ['cls_score' + suffix, 'labels_int32' + suffix],
        ['cls_prob' + suffix, 'loss_cls' + suffix],
        scale=model.GetLossScale() * loss_weight
    )
    loss_bbox = model.net.SmoothL1Loss(
        [
            'bbox_pred' + suffix, 'bbox_targets' + suffix,
            'bbox_inside_weights' + suffix, 'bbox_outside_weights' + suffix
        ],
        'loss_bbox' + suffix,
        scale=model.GetLossScale() * loss_weight
    )
    loss_gradients = blob_utils.get_loss_gradients(model, [loss_cls, loss_bbox])
    model.Accuracy(['cls_prob' + suffix, 'labels_int32' + suffix],
                   'accuracy_cls' + suffix)
    model.AddLosses(['loss_cls' + suffix, 'loss_bbox' + suffix])
    model.AddMetrics('accuracy_cls' + suffix)

    return loss_gradients
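For context, a hypothetical call site for the function above (add_cascade_stage_head is an assumed placeholder for whatever builds each stage's head; it is not a Detectron function):

# Hypothetical wiring of the three cascade stages (sketch, not the
# poster's code).
loss_gradients = {}
for stage_num in (1, 2, 3):
    add_cascade_stage_head(model, stage_num)  # assumed helper, builds the heads
    # get_loss_gradients returns a dict of loss blob -> gradient, so the
    # per-stage dicts can simply be merged.
    loss_gradients.update(add_cascade_fast_rcnn_losses(model, stage_num))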

per-stage learning rates:

def add_single_gpu_param_update_ops(model, gpu_id):
    # Learning rate of 0 is a dummy value to be set properly at the
    # start of training
    lr = model.param_init_net.ConstantFill(
        [], 'lr', shape=[1], value=0.0
    )
    one = model.param_init_net.ConstantFill(
        [], 'one', shape=[1], value=1.0
    )
    wd = model.param_init_net.ConstantFill(
        [], 'wd', shape=[1], value=cfg.SOLVER.WEIGHT_DECAY
    )
    # weight decay of GroupNorm's parameters
    wd_gn = model.param_init_net.ConstantFill(
        [], 'wd_gn', shape=[1], value=cfg.SOLVER.WEIGHT_DECAY_GN
    )
    for param in model.TrainableParams(gpu_id=gpu_id):
        logger.debug('param ' + str(param) + ' will be updated')
        param_grad = model.param_to_grad[param]
        # Initialize momentum vector
        param_momentum = model.param_init_net.ConstantFill(
            [param], param + '_momentum', value=0.0
        )

        # Use a higher learning rate for the later Cascade RCNN stages by
        # scaling the gradient: 2x for the stage-2 head and 4x for the
        # stage-3 head (fc layers and cls/bbox outputs).
        if any(k in str(param) for k in
               ('fc1_2nd', 'fc2_2nd', 'cls_score_2nd', 'bbox_pred_2nd')):
            model.Scale(param_grad, param_grad, scale=2.0)
        elif any(k in str(param) for k in
                 ('fc1_3rd', 'fc2_3rd', 'cls_score_3rd', 'bbox_pred_3rd')):
            model.Scale(param_grad, param_grad, scale=4.0)

        if param in model.biases:
            # Special treatment for biases (mainly to match historical impl.
            # details):
            # (1) Do not apply weight decay
            # (2) Use a 2x higher learning rate
            model.Scale(param_grad, param_grad, scale=2.0)
        elif param in model.gn_params:
            # Special treatment for GroupNorm's parameters
            model.WeightedSum([param_grad, one, param, wd_gn], param_grad)
        elif cfg.SOLVER.WEIGHT_DECAY > 0:
            # Apply weight decay to non-bias weights
            model.WeightedSum([param_grad, one, param, wd], param_grad)
        # Update param_grad and param_momentum in place
        model.net.MomentumSGDUpdate(
            [param_grad, param_momentum, lr, param],
            [param_grad, param_momentum, param],
            momentum=cfg.SOLVER.MOMENTUM
        )
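One caveat about the gradient scaling above (my observation, not from the thread): param_grad is scaled before the weight-decay WeightedSum, so only the data gradient is multiplied while the decay term wd * param is added unscaled. That is close to, but not exactly, a true per-parameter lr_mult. A toy NumPy sketch of the difference (momentum omitted; all numbers illustrative):

import numpy as np

def step_scaled_grad(w, g, lr, wd, mult):
    """Scale the raw gradient first, then add weight decay (as above)."""
    g = mult * g
    g = g + wd * w
    return w - lr * g

def step_lr_mult(w, g, lr, wd, mult):
    """Fold weight decay in first, then apply a true per-parameter lr."""
    g = g + wd * w
    return w - (lr * mult) * g

w, g = np.array([1.0]), np.array([0.1])
print(step_scaled_grad(w, g, lr=0.01, wd=1e-4, mult=2.0))  # ~[0.997999]
print(step_lr_mult(w, g, lr=0.01, wd=1e-4, mult=2.0))      # ~[0.997998]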

My final results are shown in the table below.

| experiment | dataset | box_ap | box_ap50 | box_ap75 | box_ap_small | box_ap_medium | box_ap_large | mask_ap | mask_ap50 | mask_ap75 | mask_ap_small | mask_ap_medium | mask_ap_large |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mask-R50 | test-dev (val) | 38.0% (37.7%) | 59.7% | 41.3% | 21.2% | 40.2% | 48.1% | 33.9% | | | | | |
| cascade stage1 | test-dev | 36.8% | 58.1% | 40.0% | 20.3% | 39.0% | 47.2% | 33.5% | 54.9% | 35.4% | 14.3% | 35.2% | 48.2% |
| cascade stage2 | test-dev | 38.9% | 58.6% | 42.8% | 21.0% | 40.9% | 50.5% | 34.4% | 55.6% | 36.6% | 14.5% | 36.0% | 50.2% |
| cascade stage3 | test-dev | 38.9% | 57.4% | 43.1% | 20.8% | 40.8% | 51.0% | 34.3% | 54.7% | 36.7% | 14.4% | 35.8% | 50.0% |
| cascade stage 1~2 | test-dev | 38.9% | 59.0% | 42.7% | 21.3% | 41.0% | 50.5% | 34.4% | 55.8% | 36.5% | 14.6% | 36.0% | 50.3% |
| cascade stage 1~3 | test-dev (val) | 39.5% (39.14%) | 58.9% (58.36%) | 43.4% (42.85%) | 21.5% (21.41%) | 41.4% (41.52%) | 51.3% (53.03%) | 34.6% (34.37%) | 55.8% (55.22%) | 36.8% (36.57%) | 14.8% (15.17%) | 36.2% (36.5%) | 50.4% (52.09%) |

I haven't changed any part of Detectron before the CollectAndDistributeFpnRpnProposals op (the backbone and RPN). Can you give me some suggestions? What other implementation details should I pay attention to? Thank you again!

wenhe-jia commented 6 years ago

@zhaoweicai I noticed that the APs of my first stage are much lower than the baseline's, and that the AP at low IoU thresholds stays low through stage2 and stage3. What could be causing this? Can you help me out?