Dmmm1997 / FSRA

「TCSVT」A Transformer-Based Feature Segmentation and Region Alignment Method For UAV-View Geo-Localization

Logit scale #8

Open Sautenich opened 8 months ago

Sautenich commented 8 months ago

Thanks for your amazing work! Could you please suggest where the error is? Should I install a different version of torch?

~/fsra$ bash train_test_local.sh
This is not an error. If you want to use low precision, i.e., fp16, please install the apex with cuda support (https://github.com/NVIDIA/apex) and update pytorch to 1.0
[Resize(size=(256, 256), interpolation=bicubic, max_size=None, antialias=True), Pad(padding=0, fill=0, padding_mode=edge), RandomCrop(size=(256, 256), padding=None), RandomHorizontalFlip(p=0.5), ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]
===========building transformer===========
using Transformer_type: vit_small_patch16_224_FSRA as a backbone
===========================ERROR=========================
Traceback (most recent call last):
  File "/home/orangepi/Desktop/fsra/models/FSRA/backbones/vit_pytorch.py", line 400, in load_param
    self.state_dict()[k].copy_(v)
KeyError: 'logit_scale'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/orangepi/Desktop/fsra/train.py", line 271, in <module>
    model = make_model(opt)
  File "/home/orangepi/Desktop/fsra/models/model.py", line 62, in make_model
    model = two_view_net(opt, opt.nclasses, block=opt.block, return_f=opt.triplet_loss)
  File "/home/orangepi/Desktop/fsra/models/model.py", line 7, in __init__
    self.model_1 = make_transformer_model(opt, num_class=class_num, block=block, return_f=return_f)
  File "/home/orangepi/Desktop/fsra/models/FSRA/make_model.py", line 203, in make_transformer_model
    model = build_transformer(opt, num_class, block=block, return_f=return_f)
  File "/home/orangepi/Desktop/fsra/models/FSRA/make_model.py", line 102, in __init__
    self.transformer.load_param(model_path)
  File "/home/orangepi/Desktop/fsra/models/FSRA/backbones/vit_pytorch.py", line 403, in load_param
    print('shape do not match in k :{}: param_dict{} vs self.state_dict(){}'.format(k, v.shape, self.state_dict()[k].shape))
KeyError: 'logit_scale'
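Judging from the traceback, the pretrained checkpoint being loaded contains a key (logit_scale, typical of CLIP-style weights) that the FSRA ViT backbone does not have in its own state_dict, so both the copy and the error-printing line in load_param raise KeyError. Below is a minimal sketch of a more defensive loader that skips such keys. It is a hypothetical helper (load_param_safe), not the repository's exact load_param, and it assumes the checkpoint is a flat dict of tensors, possibly wrapped under 'model' or 'state_dict'; the checkpoint filename in the usage line is only illustrative.

```python
import torch


def load_param_safe(model, model_path):
    """Copy pretrained weights into `model`, skipping keys the model lacks
    (e.g. 'logit_scale') and keys whose shapes do not match."""
    param_dict = torch.load(model_path, map_location='cpu')
    # Some checkpoints nest the actual weights under 'model' or 'state_dict'.
    for wrapper_key in ('model', 'state_dict'):
        if isinstance(param_dict, dict) and wrapper_key in param_dict:
            param_dict = param_dict[wrapper_key]
            break
    own_state = model.state_dict()
    for k, v in param_dict.items():
        if k not in own_state:
            print('skip {}: not present in the model'.format(k))
            continue
        if own_state[k].shape != v.shape:
            print('skip {}: shape mismatch {} vs {}'.format(k, v.shape, own_state[k].shape))
            continue
        own_state[k].copy_(v)


# Example usage (checkpoint path is illustrative):
# load_param_safe(model, 'vit_small_p16_224_pretrained.pth')
```

That said, the cleaner fix is usually to point --pretrain_path (or the model_path used by build_transformer) at the ImageNet-pretrained vit_small_patch16_224 weights the README expects rather than a CLIP-style checkpoint; the torch version itself is not what triggers this KeyError.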