Chasel-Tsui / mmdet-rfla

ECCV22: RFLA
MIT License

What is the difference between the pos samples obtained in the first stage and the second stage in Hierarchical Label Assignment? #18

Open · Icecream-blue-sky opened 1 year ago

Icecream-blue-sky commented 1 year ago

In Hierarchical Label Assignment, the only difference between the first stage and the second stage is the effective radius. Since both stages use the top-k RFD score (which is a kind of relative distance) to do label assignment, the assignment results of the two stages should be the same, shouldn't they? So why do we need the second stage? Besides, in Fig. 4, the average number of pos samples is larger than 3 only in the gt scale range [40, 48]. Why? (figure attached)
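
For context, from my reading of the paper (notation here is reconstructed rather than copied from it): RFLA models a feature point's effective receptive field as a 2-D Gaussian N(μ_p, Σ_p) and the gt box as a Gaussian N(μ_g, Σ_g), and turns the standard Gaussian KL divergence into a bounded score, roughly:

```latex
% Standard KLD between two 2-D Gaussians; the RFD mapping below is a
% reconstruction of the paper's score and may differ in detail.
D_{\mathrm{KL}}\!\left(\mathcal{N}_p \,\|\, \mathcal{N}_g\right)
  = \frac{1}{2}\Big[\operatorname{tr}\!\big(\Sigma_g^{-1}\Sigma_p\big)
    + (\mu_g-\mu_p)^{\top}\Sigma_g^{-1}(\mu_g-\mu_p)
    - 2 + \ln\frac{\det\Sigma_g}{\det\Sigma_p}\Big],
\qquad
\mathrm{RFD} = \frac{1}{1 + D_{\mathrm{KL}}}.
```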

Chasel-Tsui commented 1 year ago

The multi-step assignment is a heuristic that tested effective on the AI-TOD dataset. Although top-k can assign a uniform number of positives to each gt, some gts share the same positive samples with other gts and lose some of their positives during assignment. The multi-step operation compensates those gts with additional positive samples.
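
A minimal toy sketch of one possible reading of this explanation (hypothetical, not the repository's actual assigner): if each point can be positive for only one gt, two gts whose top-k sets overlap compete for the shared points, the losing gt ends up with fewer than k positives, and a second pass restricted to still-unassigned points can give some back.

```python
import numpy as np

def hla_two_stage(rfd, k=3):
    """Toy two-stage top-k assignment (a sketch, not the repo's code).

    rfd : (num_points, num_gts) array of scores, higher is better.
    Returns assigned_gt : (num_points,) gt index per point, -1 = negative.
    """
    num_points, num_gts = rfd.shape
    assigned_gt = np.full(num_points, -1)

    # Stage 1: each gt claims its top-k points; a point wanted by
    # several gts goes to the gt scoring highest on it, so the other
    # gt silently loses one of its k positives.
    for g in range(num_gts):
        for p in np.argsort(-rfd[:, g])[:k]:
            if assigned_gt[p] == -1 or rfd[p, g] > rfd[p, assigned_gt[p]]:
                assigned_gt[p] = g

    # Stage 2: rank only the still-unassigned points, so a gt that
    # lost shared positives in stage 1 can recover one here.
    for g in range(num_gts):
        free = np.where(assigned_gt == -1)[0]
        if free.size == 0:
            break
        best = free[np.argmax(rfd[free, g])]
        assigned_gt[best] = g

    return assigned_gt

# Two gts competing for the same high-score points: gt 1 ends stage 1
# with no positives at all and is compensated with one in stage 2.
rfd = np.array([[0.9, 0.8],
                [0.8, 0.7],
                [0.7, 0.6],
                [0.1, 0.1],
                [0.1, 0.4]])
print(hla_two_stage(rfd, k=3))  # -> [0 0 0 0 1]
```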

Icecream-blue-sky commented 1 year ago

> The multi-step assignment is a heuristic that tested effective on the AI-TOD dataset. Although top-k can assign a uniform number of positives to each gt, some gts share the same positive samples with other gts and lose some of their positives during assignment. The multi-step operation compensates those gts with additional positive samples.

I still can't understand. For a given gt, the feature points assigned by top-k RFD score in the first stage and in the second stage should be the same, since the ranking of feature points doesn't change even though the radius has decayed. So the multi-step operation can't compensate any positive samples for the gts you mentioned. I don't know where I am going wrong...
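
For what it's worth, this rank-invariance claim can be checked against the KLD algebra, assuming (again as a reconstruction) an isotropic receptive-field Gaussian Σ_p = (r/2)² I with one radius r per pyramid level, and a fixed gt Gaussian:

```latex
% For a fixed gt, only the Mahalanobis term depends on the point's
% location; the remaining terms depend only on the level's radius r:
D_{\mathrm{KL}}
  = \underbrace{\tfrac{1}{2}\operatorname{tr}\!\big(\Sigma_g^{-1}\Sigma_p\big)
      - 1 + \tfrac{1}{2}\ln\tfrac{\det\Sigma_g}{\det\Sigma_p}}_{\text{constant per level}}
  + \tfrac{1}{2}(\mu_g-\mu_p)^{\top}\Sigma_g^{-1}(\mu_g-\mu_p).
```

Under these assumptions, decaying r to βr only shifts the per-level constant, so the per-gt ranking within one level is indeed unchanged; the decayed stage can alter the top-k only by reordering points across levels, or by excluding points already assigned in stage 1 (as in the sketch above).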

Icecream-blue-sky commented 1 year ago

> The multi-step assignment is a heuristic that tested effective on the AI-TOD dataset. Although top-k can assign a uniform number of positives to each gt, some gts share the same positive samples with other gts and lose some of their positives during assignment. The multi-step operation compensates those gts with additional positive samples.

Could you give an example of the case you mentioned (gts that share positive samples with other gts, lose some of them during assignment, and are compensated by the multi-step operation)? I have tested the code, and I find that the assignment results with and without the second stage of HLA are the same for all training images. I think the reason is that slightly reducing the radius doesn't change the top-k feature points for any given gt. Alternatively, could you provide the code you used to generate Fig. 4? I'm not doubting your results; I think it's a good paper, but I'm really curious why the average number of pos samples is larger than 3 only in the gt scale range [40, 48]. Thanks!
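
Until the original script is available, here is a hypothetical sketch of how such a per-scale statistic could be computed; the bin edges and the scale definition (square root of box area, in pixels) are guesses from the figure, and `avg_pos_per_scale`, `gt_scales`, and `pos_counts` are made-up names:

```python
import numpy as np

def avg_pos_per_scale(gt_scales, pos_counts, edges=(0, 8, 16, 24, 32, 40, 48)):
    """Sketch of the Fig. 4 statistic: bucket gts by scale and average
    the number of positive samples each gt received per bucket."""
    gt_scales = np.asarray(gt_scales, dtype=float)
    pos_counts = np.asarray(pos_counts, dtype=float)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (gt_scales >= lo) & (gt_scales < hi)
        means.append(pos_counts[in_bin].mean() if in_bin.any() else float("nan"))
    return means
```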

Chasel-Tsui commented 1 year ago

Yes, it has been a long time since I last used the code for generating Fig. 4. The code is really a mess... I will organize it and share it with you soon.

Icecream-blue-sky commented 1 year ago

> Yes, it has been a long time since I last used the code for generating Fig. 4. The code is really a mess... I will organize it and share it with you soon.

Thanks!

Chasel-Tsui commented 1 year ago

Hi, I have sent the code to the e-mail address shown on your GitHub homepage. Please see the attached file~

Icecream-blue-sky commented 1 year ago

> Hi, I have sent the code to the e-mail address shown on your GitHub homepage. Please see the attached file~

Got it, thank you!