Chasel-Tsui / mmdet-rfla

ECCV22: RFLA

About the results of faster-rcnn on the AI-TOD dataset #4

Closed: kwwcv closed this issue 2 years ago

kwwcv commented 2 years ago

I noticed that Faster R-CNN gets 11.1 AP in Table 5, but I wonder whether that is caused by ill-suited anchor sizes: mmdetection sets the default `scales` of the `anchor_generator` to 8, so the smallest generated anchor will be much larger than the small objects (e.g., $16\times16$). I therefore re-ran the Faster R-CNN experiments on AI-TOD with scales of 1, 2, 3, 4, and 8; the results are shown below.

[image: rfla_exps, Faster R-CNN results on AI-TOD under different anchor scales]

It can be seen that Faster R-CNN can actually reach 20.9 AP on AI-TOD, which is much higher than the 11.1 AP reported in the paper and comparable with RFLA (21.1 AP).
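For context, here is a minimal sketch (assuming mmdet 2.x; not part of the original thread) of how `AnchorGenerator` turns `scales` and `strides` into base anchor sizes. For a ratio of 1.0 the base side is simply `scale * stride`, so the default `scales=[8]` combined with the finest stride of 4 already gives a 32x32 smallest anchor:

```python
# Minimal sketch (mmdet 2.x assumed): inspect the smallest base anchor
# produced by AnchorGenerator for each candidate scale from this thread.
from mmdet.core.anchor import AnchorGenerator

for scale in [1, 2, 3, 4, 8]:
    gen = AnchorGenerator(
        strides=[4, 8, 16, 32, 64],
        ratios=[1.0],
        scales=[scale])
    # base_anchors[0] holds the (x1, y1, x2, y2) anchor at the finest
    # stride (4); for ratio 1.0 its side length is scale * stride.
    anchor = gen.base_anchors[0][0]
    side = float(anchor[2] - anchor[0])
    print(f'scale={scale}: smallest base anchor {side:.0f}x{side:.0f}')
```

With the default `scales=[8]` this prints a 32x32 anchor at the finest level, which illustrates the mismatch with AI-TOD's tiny objects described above.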

Chasel-Tsui commented 2 years ago

Thanks for your comment, and happy Mid-Autumn Festival. In the paper, the settings of the compared results in Tab. 5 follow the benchmark paper. We have also tuned the anchor scale, and the results are shown in Fig. 3, where an anchor scale of 4 yields the best performance, which is still slightly lower than Faster R-CNN with RFLA. Nevertheless, it is worth noting that RFLA is quite robust to the parameter setting k: it yields high accuracy over a broad range of values, as analyzed in the paper. Besides, it achieves better accuracy than anchor-free detectors that do not require anchor tuning (e.g., FCOS, AutoAssign).

kwwcv commented 2 years ago

I got it. Thanks for your kind reply, and happy Mid-Autumn Festival to you too :)

haotianll commented 2 years ago

@kwwcv Hi, what learning rate and warmup config did you use to get the results in Table 1 above?

kwwcv commented 2 years ago

> @kwwcv Hi, what learning rate and warmup config did you use to get the results in Table 1 above?

Hi @haotianliu001, I believe I changed two settings in `aitod_faster_r50_1x.py`:

1. The anchor generator:

```python
anchor_generator=dict(
    type='AnchorGenerator',
    scales=[3],
    ratios=[0.5, 1.0, 2.0],
    strides=[4, 8, 16, 32, 64]),
```

2. The optimizer and learning rate schedule:

```python
optimizer = dict(type='SGD', lr=0.02 / 4, momentum=0.9, weight_decay=0.0001)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=5000,
    warmup_ratio=0.001,
    step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
evaluation = dict(interval=12, metric='bbox')
```

Both are kept the same as the corresponding settings in `aitod_faster_r50_rfla_kld_1x.py`.
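As a quick sanity check, the edited config can be loaded and the overridden fields printed. A minimal sketch, assuming mmdet 2.x with mmcv; the config path is a hypothetical placeholder, adjust it to your local layout:

```python
from mmcv import Config

# Hypothetical path: point this at your edited copy of aitod_faster_r50_1x.py.
cfg = Config.fromfile('configs/aitod/aitod_faster_r50_1x.py')

print(cfg.model.rpn_head.anchor_generator)  # expect scales=[3]
print(cfg.optimizer.lr)                     # expect 0.005 (= 0.02 / 4)
```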

haotianll commented 2 years ago

I understand. Thank you so much!