hexiao0275 / S2ADet

[TGRS 2023] The official repo for the paper "Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation".

The Issue of Evaluation Metrics in Research #9

Closed · Titos123 closed this 6 months ago

Titos123 commented 6 months ago

Hi,

I have a question about the mAP evaluation metric used in your paper.

First, I would like to clarify whether the mAP mentioned in the paper refers to mAP50 or mAP50-95.

Second, I ran the default code from the paper, and the results on both the HOD3K and HOD-1 datasets are lower than those reported in the paper. I would like to ask what might be the reason for this. (Attached screenshots: HOD3K_Result, HOD_1_Result)

hexiao0275 commented 6 months ago

Hi,

1. mAP refers to mAP50 on the HOD3K dataset. On the HOD-1 dataset, we follow the criteria of the original paper "Object Detection in Hyperspectral Image" (the Pascal VOC evaluation standard [14] with the IoU threshold set to 0.5).
2. You could try adjusting the learning rate and the initialization parameters. Quite a few people have reproduced similar results; small discrepancies can be caused by differences between machines.
3. You are welcome to compare the results obtained on your own machine against the methods you have improved; we don't mind (lol).
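
For context, here is a minimal sketch of per-class average precision at an IoU threshold of 0.5, the Pascal VOC criterion referenced above. The helper names are hypothetical and boxes are assumed to be [x1, y1, x2, y2] pixel coordinates; this illustrates the metric, it is not the repo's evaluation code.

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ap50(preds, gts, iou_thr=0.5):
    # preds: list of (score, box) for one image and class; gts: list of boxes.
    # Returns all-point-interpolated AP at the given IoU threshold.
    preds = sorted(preds, key=lambda p: -p[0])          # rank by confidence
    matched = [False] * len(gts)
    tp = np.zeros(len(preds)); fp = np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        ious = [iou(box, g) for g in gts]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= iou_thr and not matched[j]:
            tp[i], matched[j] = 1, True                 # first match counts
        else:
            fp[i] = 1                                   # duplicate or miss
    rec = np.cumsum(tp) / max(len(gts), 1)
    prec = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp) + 1e-9)
    mrec = np.concatenate(([0.0], rec, [1.0]))
    mpre = np.concatenate(([0.0], prec, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]      # precision envelope
    return float(np.sum((mrec[1:] - mrec[:-1]) * mpre[1:]))
```

mAP50 is then this AP averaged over all classes. On the machine-inconsistency point, fixing every random seed is a common first step toward reproducible runs; these are standard PyTorch calls, not something specific to this repo:

```python
import random
import numpy as np
import torch

def set_seed(seed=0):
    # Pin all sources of randomness so runs are comparable across attempts.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # deterministic conv kernels
    torch.backends.cudnn.benchmark = False      # disable autotuner
```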

Titos123 commented 6 months ago

Thank you very much for your answer. May I ask whether there is a label file for the original HOD3K data? I used the labels from the processed dataset you provided as labels for the original HOD3K data, but the mAP I obtained from training is almost 0.000. I wonder whether the labels for the 16-band data can match those of the processed data.

hexiao0275 commented 6 months ago

The results can be reproduced by following the process in the Readme.md. Each band of the hyperspectral data is highly spatially aligned, so the preprocessing has no effect on the label files; they are consistent between the raw 16-band data and the processed data.
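
When the labels are correct but mAP is near zero, the cause is usually a path or format mismatch rather than misaligned annotations. Below is a small sketch for sanity-checking that a label file lines up with a raw 16-band cube. It assumes (these are assumptions about the data layout, not something the repo confirms) that the cube is an H x W x 16 array loadable with np.load and that labels are YOLO-style text lines "class cx cy w h" normalized to [0, 1]:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def show_labels_on_band(cube_path, label_path, band=0):
    # Overlay the label boxes on one band of the hyperspectral cube.
    cube = np.load(cube_path)            # assumed shape (H, W, 16)
    h, w = cube.shape[:2]
    fig, ax = plt.subplots()
    ax.imshow(cube[:, :, band], cmap="gray")
    with open(label_path) as f:
        for line in f:
            _, cx, cy, bw, bh = map(float, line.split())
            x = (cx - bw / 2) * w        # denormalize to pixel coords
            y = (cy - bh / 2) * h
            ax.add_patch(patches.Rectangle((x, y), bw * w, bh * h,
                                           fill=False, edgecolor="r"))
    plt.show()
```

If the boxes land on the objects in every band, the labels are consistent with the raw data, and the near-zero mAP should be traced to the dataset configuration instead.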