jgiroux8 / T_FFTRadNet

T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals

Recreating RADDet Results #3

Closed. IqbalBan closed this issue 1 month ago.

IqbalBan commented 1 month ago

Hi. I used to work with the RADDet (Zhang's) model but have transferred over to this repository. I got this model to work and have results similar to what you achieved in Table 5. However, I am confused about how you obtained the results in Table 3. What adjustments did you make to your T-FFTRadNet, and with what settings did you test Zhang's model? My results for various IoU thresholds are significantly higher than what Zhang originally reported, yet the value you report is not that much of an increase (55.7% vs 51.6%).

jgiroux8 commented 1 month ago

That is an interesting result. We quoted the value for the RadarResNet (max-pool) backbone with a threshold of 0.5. The settings I used should be in the associated config files, and I also link to the pre-trained models at the bottom of the README, which you can use for testing.
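For reference, a pre-trained checkpoint can be loaded for evaluation in PyTorch roughly as follows. This is a generic sketch, not the repository's exact loading code; the model class, file name, and the `model_state_dict` key below are placeholders, so check the config files and the README links for the real names.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the actual detection backbone defined in the repo.
class PlaceholderDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(1, 3, kernel_size=1)

    def forward(self, x):
        return self.head(x)

model = PlaceholderDetector()

# Load the downloaded weights; the path and the "model_state_dict" key are assumptions.
checkpoint = torch.load("pretrained_weights.pth", map_location="cpu")
state_dict = checkpoint.get("model_state_dict", checkpoint)
model.load_state_dict(state_dict)
model.eval()  # disable dropout / batch-norm updates before running the test script
```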

IqbalBan commented 1 month ago

Thank you for the response. I used the pre-trained model and reproduced the 55.7% value you reported. Is there a reason the script reports 55.7% mAP near the start of the evaluation (the Table 3 result), but when it then evaluates each IoU threshold separately, it produces 64.2% at 0.5 (the Table 5 result)?

jgiroux8 commented 1 month ago

Table 5 does not take the classes into account, only whether or not an object is present (a binary detection grid). For Table 3, you are looking at whether an object is identified and correctly associated with a class (integrated across classes to give mAP). Let me know if this helps.
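To make the distinction concrete, here is a minimal illustrative sketch, not the repository's evaluation code; the box format, labels, and the `is_true_positive` helper are hypothetical. A well-localized detection with the wrong class label counts as a true positive in a class-agnostic (Table-5-style) metric but as a false positive in the per-class (Table-3-style) mAP, which is why the two numbers can differ at the same IoU threshold.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def is_true_positive(pred, gt, iou_thresh=0.5, class_aware=True):
    """pred and gt are (box, label) pairs; toy one-to-one matching."""
    box_ok = iou(pred[0], gt[0]) >= iou_thresh
    if not class_aware:
        return box_ok                      # detection only: is an object there?
    return box_ok and pred[1] == gt[1]     # detection plus correct class label

gt = ((10, 10, 50, 50), "car")
pred = ((12, 11, 49, 52), "truck")         # well localized, but wrong class
print(is_true_positive(pred, gt, class_aware=False))  # True  -> counts for detection metric
print(is_true_positive(pred, gt, class_aware=True))   # False -> penalized in class-aware mAP
```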

IqbalBan commented 1 month ago

That helps a lot, thank you!