dengweihuan / SSDGL

SSDGL: A Spectral-Spatial-Dependent Global Learning Framework for Insufficient and Imbalanced Hyperspectral Image Classification (TCYB2021) https://ieeexplore.ieee.org/document/9440852
GNU General Public License v3.0

About the accuracy of the contrast method. #1

Open Hewq77 opened 3 years ago

Hewq77 commented 3 years ago

Thank you very much for the code you shared. I have a question about the accuracy of the comparison method in your paper. In Table VI of the paper, an OA of 89.89% is obtained with 10 samples per class (150 training samples in total) on the University of Houston dataset, while FPGA uses more training samples (2,832 in total) but achieves a lower OA (86.61%). Why is this?

dengweihuan commented 3 years ago

This is a good question. In the original paper, FPGA uses regions of interest (ROIs) to train the model, i.e., training with 2,832 samples. Each ROI contains a large number of pixels, but the spectral-spatial information within an ROI is redundant: a few pixels can actually represent the information of the entire ROI. Therefore, our method randomly selects individual pixels rather than ROIs for model training. In this paper, the original training ROIs and test ROIs are merged into a single dataset (the data can be found in the README), and our training samples are selected from this new dataset. Although we use fewer training pixels, these pixels may contain more spectral-spatial information than the original training ROIs, so the classification accuracy with 10 samples per class can be higher than that of training with ROIs (2,832 samples in total). The key to the problem is the information redundancy of ROIs.
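The sampling strategy described above can be sketched roughly as follows. This is not code from the repository, just a minimal NumPy illustration, assuming a 2-D ground-truth label map where 0 marks unlabeled background pixels and the function name is hypothetical:

```python
import numpy as np

def sample_pixels_per_class(gt, n_per_class, seed=0):
    """Randomly pick up to n_per_class labeled pixels per class from a
    ground-truth map (0 = unlabeled) and return a boolean training mask."""
    rng = np.random.default_rng(seed)
    train_mask = np.zeros_like(gt, dtype=bool)
    for cls in np.unique(gt):
        if cls == 0:  # skip the unlabeled background
            continue
        rows, cols = np.nonzero(gt == cls)
        # sample without replacement; cap at the class population
        idx = rng.choice(len(rows), size=min(n_per_class, len(rows)),
                         replace=False)
        train_mask[rows[idx], cols[idx]] = True
    return train_mask

# Toy ground truth: 3 classes (plus background 0) on a 20x20 map
gt = np.random.default_rng(1).integers(0, 4, size=(20, 20))
mask = sample_pixels_per_class(gt, n_per_class=10)
```

With 15 classes and `n_per_class=10`, this yields the 150 scattered training pixels used in the paper's setting, in contrast to the contiguous ROI pixels.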


Hewq77 commented 2 years ago

OK, thanks.