ucas-vg / TOV_mmdetection

Includes the mmdetection version of TinyBenchmark. Official link:
https://github.com/ucas-vg/TinyBenchmark
Apache License 2.0

Run foveabox #3

Closed. Hshuqin closed this issue 2 years ago.

Hshuqin commented 3 years ago

Hi, I am also using FoveaBox on this dataset, but FoveaBox does not support test-time augmentation. Instead, I used the function simple_test_bboxes (in dense_test_mixins.py), but the input types do not match, so I rewrote the function; however, when testing, it returns empty results. I would like to know what I have to do to get this algorithm running and producing results. I hope you can give me some advice, thanks!

I first simply ran get_bboxes on each element of the tuple in turn, but the test results are empty:

results_list = []
for i, feat in enumerate(feats):
    outs = self.forward(feat)
    results_list += self.get_bboxes(*outs, img_metas[i], rescale=rescale)
return results_list

Then I modified the code again, but it reports the following error: get_bboxes() got multiple values for argument 'rescale'

outs = []
for feat in feats:
    outs.append(self.forward(feat))
results_list = self.get_bboxes(*outs, img_metas, rescale=rescale)
return results_list

Finally I got past that error with the following code changes, but a new error appeared. I am not sure whether it makes sense to keep modifying this, and I don't know what is wrong with it:

File "/home/xxxx/TOV_mmdetection-main/mmdet/models/dense_heads/fovea_head.py", line 277, in
    featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
AttributeError: 'list' object has no attribute 'size'

cls_scores = []
bbox_preds = []

for feat in feats:
    cls_score, bbox_pred = self.forward(feat)
    cls_scores.append(cls_score)
    bbox_preds.append(bbox_pred)

results_list = self.get_bboxes(cls_scores, bbox_preds, img_metas, rescale=rescale)
return results_list
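
A likely reading of the two errors above: unpacking one forward output per augmentation fills all the positional slots of get_bboxes (hence the duplicate rescale), and self.forward(feat) already returns per-level lists of tensors, so appending them produces nested lists and featmap.size() fails. A minimal sketch that avoids both, assuming the usual mmdet 2.x signature get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=None), is to call get_bboxes once per augmented input; note that this alone does not explain the empty results:

results_list = []
# feats is assumed to hold one feature tuple per augmented image and
# img_metas the matching image metas for that augmentation
for feat, img_meta in zip(feats, img_metas):
    cls_scores, bbox_preds = self.forward(feat)  # flat per-FPN-level lists
    # keep the per-level lists flat so get_bboxes can call featmap.size()
    results_list += self.get_bboxes(
        cls_scores, bbox_preds, img_meta, rescale=rescale)
return results_list
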
Hshuqin commented 3 years ago

I found that under this test_pipeline it is not possible to run a plain single test, because 35 cropped images are tested together each time; so I made FoveaBox execute aug_test as well, with the following changes in the get_bboxes function:

in _get_bboxes_single function

# add before return
det_scores = torch.unsqueeze(det_scores, 0)
det_bboxes = torch.unsqueeze(det_bboxes, 0)
det_results = [
    tuple(mlvl_bs)
    for mlvl_bs in zip(det_bboxes, det_scores)
]
# delete function multiclass_nms
# change return
return det_results  

in get_bboxes function

# change return
return result_list[0]  # return result_list

That is, I modified the output of get_bboxes to fit the input of the aug_test function so that it can run (I referred to the corresponding inputs and outputs of the function in fcos_head). But the test result is empty, and I am not sure if I am missing something.
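
For context, the way aug_test_bboxes-style testing usually works in mmdetection is that NMS is deferred: each augmented view is decoded separately, the raw (det_bboxes, det_scores) pairs are collected, mapped back to the original image, and a single NMS is applied to the merged set, which is why the per-view multiclass_nms is removed above. A rough sketch of that loop under the modified get_bboxes return described here (not the actual dense_test_mixins.py code):

aug_bboxes, aug_scores = [], []
for feat, img_meta in zip(feats, img_metas):
    cls_scores, bbox_preds = self.forward(feat)
    # with the modified return, get_bboxes is assumed to yield one
    # (det_bboxes, det_scores) pair for this view, without NMS applied
    det_bboxes, det_scores = self.get_bboxes(
        cls_scores, bbox_preds, img_meta, rescale=False)[0]
    aug_bboxes.append(det_bboxes)
    aug_scores.append(det_scores)
# ...the flipped/rescaled boxes must then be mapped back to the original
# image, concatenated across views, and passed through one multiclass NMS
# with test_cfg to produce the final detections.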

yinglang commented 3 years ago

The run-time cropping code for evaluation (it is not supported by the original mmdetection) is difficult to read and modify. For now, I have only tested it for the evaluation of Faster R-CNN, RetinaNet, FCOS, and RepPoints. It would take a lot of time to debug it for other frameworks, so it will not be supported in the near term.

But we have updated the code to provide another way to evaluate other frameworks. You can modify the config as follows:

test_pipeline = [
    ...
    dict(
        type='MultiScaleFlipAug',
        scale_factor=[1.0],
        flip=False,
    ...
    )
]

data = dict(
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/corner/task/tiny_set_test_sw640_sh512_all.json',
        merge_after_infer_kwargs=dict(
            merge_gt_file=data_root + 'mini_annotations/tiny_set_test_all.json',
            merge_nms_th=0.5
        ),
        img_prefix=data_root + 'test/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/corner/task/tiny_set_test_sw640_sh512_all.json',
        merge_after_infer_kwargs=dict(
            merge_gt_file=data_root + 'mini_annotations/tiny_set_test_all.json',
            merge_nms_th=0.5
        ),
        img_prefix=data_root + 'test/',
        pipeline=test_pipeline)
)

model = dict(
    ...
    test_cfg=dict(
        nms_pre=1000,     # or try other setting
        max_per_img=500  # or set as -1 or other
    ...
)
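
With the config modified this way, the evaluation can be launched with mmdetection's standard test script; the config path and checkpoint name below are placeholders, not files shipped with this repository:

# substitute your actual config and trained checkpoint
python tools/test.py configs/<your_config>.py work_dirs/<your_exp>/latest.pth --eval bbox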

More details can be found under offline crop evaluation.

Hshuqin commented 3 years ago

Thank you for your reply. I just re-pulled TOV_mmdetection and ran FCOS with the relevant configuration modified as you said, but during evaluation the result never came out: no result was produced and no error was reported either.

Hshuqin commented 3 years ago

Just now, my FCOS run finally reported an error: ModuleNotFoundError: No module named 'mini_maskrcnn_benchmark'. Am I missing any files?

yinglang commented 3 years ago

Thanks very much for your feedback. Sorry that I missed the install instructions. To use offline crop evaluation, you need to install mini_maskrcnn_benchmark first, as follows:

cd huicv/deps/mini_maskrcnn_benchmark
python setup.py build develop

or

cd huicv
bash install.sh

yinglang commented 3 years ago

Thank you for your reply. I just re-pulled TOV_mmdetection and ran FCOS with the relevant configuration modified as you said, but during evaluation the result never came out: no result was produced and no error was reported either.

Offline crop evaluation needs to merge the results from each sub-image first. For now the merging operation can only run on the CPU, so it may be time-consuming; we may try to fix this later. There is another way to make it faster: use a smaller max_per_img, such as

model = dict(
    ...
    test_cfg=dict(
        nms_pre=1000,     # or try other setting
        max_per_img=200  # 500 # or set as -1 or other
    ...
)

But it may have a small influence on the evaluation results (a bit lower).
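
To illustrate why max_per_img drives the merge time, here is a simplified sketch of merging sub-image detections (this is not the repository's merging code, only an illustration of the idea behind merge_nms_th; merge_crop_detections and the crop_results layout are hypothetical): each crop's boxes are shifted by the crop's top-left corner into full-image coordinates, concatenated, and passed through a single NMS, so the cost grows with the number of boxes kept per crop.

import torch
from torchvision.ops import nms  # any NMS implementation would do

def merge_crop_detections(crop_results, merge_nms_th=0.5):
    # crop_results: one dict per sub-image, assumed to hold the crop's
    # top-left corner in the full image and its detections:
    #   {'corner': (x0, y0), 'bboxes': (N, 4) tensor, 'scores': (N,) tensor}
    all_bboxes, all_scores = [], []
    for res in crop_results:
        x0, y0 = res['corner']
        # shift boxes from crop coordinates into full-image coordinates
        all_bboxes.append(res['bboxes'] + res['bboxes'].new_tensor([x0, y0, x0, y0]))
        all_scores.append(res['scores'])
    bboxes, scores = torch.cat(all_bboxes), torch.cat(all_scores)
    # one NMS over all crops: fewer boxes per crop (a smaller max_per_img)
    # means fewer candidates here and a faster merge
    keep = nms(bboxes, scores, merge_nms_th)
    return bboxes[keep], scores[keep]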

Hshuqin commented 3 years ago

Hi, I have run FoveaBox and also modified the code to successfully apply TTA to it, and the performance is about the same as the related algorithms. Thanks for your answers and help during this time.
