danfenghong / IEEE_TPAMI_SpectralGPT

Hong, D., Zhang, B., Li, X., Li, Y., Li, C., Yao, J., Yokoya, N., Li, H., Ghamisi, P., Jia, X., Plaza, A., Gamba, P., Benediktsson, J. and Chanussot, J. (2024). SpectralGPT: Spectral remote sensing foundation model. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/TPAMI.2024.3362475.

For --model mae_vit_base_patch8_128, KeyError #7


robmarkcole commented 7 months ago

Running the EuroSAT finetuning (eurosat_finetune), I get the following error:

    model = models_vit_tensor.__dict__[args.model](drop_path_rate=args.drop_path,
KeyError: 'mae_vit_base_patch8_128'

Adding print(list(models_vit_tensor.__dict__.keys())) I see:

    ['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__file__', '__cached__', '__builtins__', 'partial', 'torch', 'nn',
    'Attention', 'Block', 'PatchEmbed', 'Linear_Block', 'Linear_Attention', 'VisionTransformer', 'vit_huge_patch14', 'vit_base_patch16',
    'vit_base_patch8', 'vit_base_patch8_128', 'vit_base_patch8_channel10', 'vit_base_patch16_128', 'vit_large_patch16',
    'vit_large_patch8_128', 'vit_huge_patch8_128', 'vit_base_patch8_120']

Possibly missing from the script: import models_mae_spectral, and at line 273: model = models_mae_spectral.__dict__[args.model](). However, I then get AttributeError: 'MaskedAutoencoderViT' object has no attribute 'head'.
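
A quick sketch of why that workaround breaks (assuming the MAE constructor takes no required arguments, which matches the call above): the pretraining model built by models_mae_spectral is a MaskedAutoencoderViT with no classification head, so the finetuning script fails as soon as it touches model.head.

    import models_mae_spectral

    # The pretraining model is a MaskedAutoencoderViT (encoder + decoder);
    # it has no classification head, hence the AttributeError on model.head.
    mae_model = models_mae_spectral.__dict__['mae_vit_base_patch8_128']()
    print(type(mae_model).__name__)     # MaskedAutoencoderViT
    print(hasattr(mae_model, 'head'))   # False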

moonboy12138 commented 7 months ago

Thank you for your kind reminder. Please use the correct argument --model vit_base_patch8_128 during finetuning. We will correct this as soon as possible.
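
With the corrected name the lookup resolves; roughly what main_finetune.py does (sketch only: the real call passes further arguments that are omitted here, and 0.1 merely stands in for args.drop_path):

    import models_vit_tensor

    # Same lookup as in main_finetune.py, with the finetuning model name;
    # additional constructor arguments from the script are omitted.
    model = models_vit_tensor.__dict__['vit_base_patch8_128'](drop_path_rate=0.1)
    print(hasattr(model, 'head'))  # expected True: the finetuning ViT exposes a classification head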

robmarkcole commented 7 months ago

I then get:

    [11:43:49.174991] Load pre-trained checkpoint from: /teamspace/studios/this_studio/ieee_tpami_spectralgpt/weights/SpectralGPT+.pth
    Traceback (most recent call last):
      File "/teamspace/studios/this_studio/ieee_tpami_spectralgpt/main_finetune.py", line 455, in <module>
        main(args)
      File "/teamspace/studios/this_studio/ieee_tpami_spectralgpt/main_finetune.py", line 293, in main
        if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:
    KeyError: 'pos_embed_spatial'

moonboy12138 commented 7 months ago

You can simply modify lines 291-292 of main_finetune.py by deleting 'pos_embed_spatial' there; the finetuning model's state_dict has no such key, which is what triggers the KeyError. If the same error then appears for another key, apply the same fix again.
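
The underlying mismatch can be confirmed directly. A minimal sketch, assuming the checkpoint stores its weights under a 'model' key (as in the MAE recipe) and with constructor arguments omitted:

    import torch
    import models_vit_tensor

    checkpoint = torch.load('weights/SpectralGPT+.pth', map_location='cpu')
    checkpoint_model = checkpoint['model']
    state_dict = models_vit_tensor.__dict__['vit_base_patch8_128']().state_dict()

    # The pretrained checkpoint carries the MAE positional embeddings,
    # but the finetuning ViT does not, so state_dict[k] raises the KeyError above.
    print('pos_embed_spatial' in checkpoint_model)   # True
    print('pos_embed_spatial' in state_dict)         # False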

robmarkcole commented 7 months ago

OK, after removing 'pos_embed_spatial' and 'pos_embed_temporal' I can proceed.

Ahuiforever commented 4 months ago

> OK, after removing 'pos_embed_spatial' and 'pos_embed_temporal' I can proceed.

I suggest modifying the source code in the following way, to guard against other potential problems:

from

    if k in checkpoint_model and checkpoint_model[k].shape != state_dict[k].shape:

to

    if k in checkpoint_model and k in state_dict and checkpoint_model[k].shape != state_dict[k].shape:
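
In context, the patched block around lines 291-293 would then look roughly like this. The surrounding key list is an assumption based on the standard MAE finetuning recipe, and model / checkpoint_model come from earlier in main(); only the added k in state_dict guard is the actual suggestion:

    # Sketch of the guarded shape check in main_finetune.py (around lines 291-293).
    # The extra `k in state_dict` test skips checkpoint keys that the finetuning
    # model simply does not have, instead of raising a KeyError.
    state_dict = model.state_dict()
    for k in ['head.weight', 'head.bias', 'pos_embed_spatial', 'pos_embed_temporal']:
        if k in checkpoint_model and k in state_dict and checkpoint_model[k].shape != state_dict[k].shape:
            print(f"Removing key {k} from pretrained checkpoint")
            del checkpoint_model[k]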