danfenghong / IEEE_TPAMI_SpectralGPT

Hong, D., Zhang, B., Li, X., Li, Y., Li, C., Yao, J., Yokoya, N., Li, H., Ghamisi, P., Jia, X., Plaza, A., Gamba, P., Benediktsson, J., and Chanussot, J. (2024). SpectralGPT: Spectral remote sensing foundation model. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/TPAMI.2024.3362475.

SegMunich dataset #2

Closed · andremakar closed this issue 4 months ago

andremakar commented 5 months ago

Hello, which mean and std did you use in your experiments with the SegMunich data? This dictionary is initialized on line 14 of TUM_128.py:

mean_std_dict = { 'SegMunich': ['SegMunich', ...

moonboy12138 commented 5 months ago

> Hello, which mean and std did you use in your experiments with the SegMunich data? This dictionary is initialized on line 14 of TUM_128.py:
>
> mean_std_dict = { 'SegMunich': ['SegMunich', ...

The mean and std values in the mean_std_dict dictionary are replicated from the SatMAE pretraining normalization. However, we do not use this normalization for the SegMunich dataset. Instead, we compute the mean and standard deviation of each image individually; see lines 96-99 of TUM_128.py for details.
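For reference, per-image normalization looks roughly like the sketch below. This is a minimal illustration, not the repository's exact code: `per_image_normalize` and `eps` are illustrative names, and the exact reduction axes are those used in lines 96-99 of TUM_128.py.

```python
import torch

def per_image_normalize(img: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # img: (C, H, W) spectral cube. Statistics come from this image
    # alone rather than from the fixed mean_std_dict entries.
    mean = img.mean(dim=(1, 2), keepdim=True)  # per-band mean over H, W
    std = img.std(dim=(1, 2), keepdim=True)    # per-band std over H, W
    return (img - mean) / (std + eps)          # eps guards constant bands
```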

andremakar commented 5 months ago

> Hello, which mean and std did you use in your experiments with the SegMunich data? This dictionary is initialized on line 14 of TUM_128.py: mean_std_dict = { 'SegMunich': ['SegMunich', ...

> The mean and std values in the mean_std_dict dictionary are replicated from the SatMAE pretraining normalization. However, we do not use this normalization for the SegMunich dataset. Instead, we compute the mean and standard deviation of each image individually; see lines 96-99 of TUM_128.py for details.

Thanks for the previous reply!

Could you please add a more detailed description of how to run your networks on the SegMunich dataset for semantic segmentation of HSI? If possible, could you also explain how to run the code on other custom semantic segmentation datasets? Ours consists of cubes with 200 to 250 channels, and the images exceed 128 pixels in height and width.

moonboy12138 commented 4 months ago

Apologies for the delayed response. For Hyperspectral Image (HSI) classification, please tailor the function to the number of spectral channels you need, as demonstrated in lines 355-369 of models_vit_tensor.py. The process for loading our model is detailed in lines 281-303 of main_finetune.py.
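For reference, the usual MAE-style loading pattern looks roughly like the sketch below: rebuild the model with `in_chans` set to your band count, then load the pretrained weights while skipping tensors whose shapes no longer match (typically the patch-embedding projection and the classification head). This is a sketch, not the repository's exact code; `load_pretrained` is an illustrative name, and the authoritative logic is in lines 281-303 of main_finetune.py.

```python
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str):
    """Load pretrained weights, skipping tensors whose shapes changed
    (e.g., the patch-embedding projection after altering in_chans)."""
    checkpoint = torch.load(ckpt_path, map_location='cpu')
    state_dict = checkpoint.get('model', checkpoint)  # MAE-style ckpts nest under 'model'

    model_dict = model.state_dict()
    filtered = {k: v for k, v in state_dict.items()
                if k in model_dict and v.shape == model_dict[k].shape}
    msg = model.load_state_dict(filtered, strict=False)

    # Skipped and missing keys stay randomly initialized and are
    # trained from scratch during fine-tuning.
    print('skipped:', sorted(set(state_dict) - set(filtered)))
    print('missing:', msg.missing_keys)
```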

We conducted our pretraining and downstream tasks on Sentinel-2 data, but the model should be compatible with data cubes of 200 to 250 channels once the input layer is adapted. A critical consideration is having sufficient GPU memory for fine-tuning, which is a significant issue we are actively investigating.
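Until then, a generic workaround is to shrink the memory footprint on your side. The sketch below uses mixed precision and gradient accumulation, which are standard PyTorch techniques rather than anything from this repository; `ACCUM_STEPS` and the function name are illustrative.

```python
import torch

# Generic memory savers (not SpectralGPT-specific): mixed precision
# reduces activation memory, and gradient accumulation trades batch
# size per step for a larger effective batch.
scaler = torch.cuda.amp.GradScaler()
ACCUM_STEPS = 4  # effective batch = loader batch size * ACCUM_STEPS

def train_one_epoch(model, loader, optimizer, criterion, device='cuda'):
    model.train()
    optimizer.zero_grad()
    for i, (x, y) in enumerate(loader):
        x, y = x.to(device), y.to(device)
        with torch.cuda.amp.autocast():
            loss = criterion(model(x), y) / ACCUM_STEPS
        scaler.scale(loss).backward()
        if (i + 1) % ACCUM_STEPS == 0:
            scaler.step(optimizer)  # unscales grads, then optimizer.step()
            scaler.update()
            optimizer.zero_grad()
```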