Event-AHU / OpenPAR

[OpenPAR] An open-source framework for Pedestrian Attribute Recognition, based on PyTorch

issue on promptPAR training #11

Closed. jerryum closed this issue 6 months ago.

jerryum commented 6 months ago

When I tried to train the PromptPAR model, I got an error saying there is no pretrained model at the given path. See the code below.

base_block.py can't load the pretrained model. How can I bypass this? (See the sketch after the traceback.)

class TransformerClassifier(nn.Module):
    def __init__(self, clip_model, attr_num, attributes, dim=768, pretrain_path='/data/jinjiandong/jx_vit_base_p16_224-80ecf9dd.pth'):
        super().__init__()
        self.attr_num = attr_num
        self.word_embed = nn.Linear(clip_model.visual.output_dim, dim)
        vit = vit_base()
        vit.load_param(pretrain_path)

FileNotFoundError                         Traceback (most recent call last)
File ~/works/OpenPAR/PromptPAR/train.py:190
    188 parser = argument_parser()
    189 args = parser.parse_args()
--> 190 main(args)

File ~/works/OpenPAR/PromptPAR/train.py:78, in main(args)
     75 labels = train_set.label
     76 sample_weight = labels.mean(0)
---> 78 model = TransformerClassifier(clip_model, train_set.attr_num, train_set.attributes)
     79 if torch.cuda.is_available():
     80     model = model.cuda()

File ~/works/OpenPAR/PromptPAR/models/base_block.py:17, in TransformerClassifier.__init__(self, clip_model, attr_num, attributes, dim, pretrain_path)
     15 self.norm = nn.LayerNorm(dim)
     16 vit = vit_base()
---> 17 vit.load_param(pretrain_path)
     18 self.weightlayer = nn.ModuleList([nn.Linear(dim, 1) for _ in range(self.attr_num)])
     19 self.dim = dim

File ~/works/OpenPAR/PromptPAR/models/vit.py:296, in ViT.load_param(self, model_path)
    294 def load_param(self, model_path):
    295     print ('model path', model_path)
--> 296     param_dict = torch.load(model_path, map_location='cpu')
...
File ~/works/OpenPAR/venv/lib/python3.10/site-packages/torch/serialization.py:426, in _open_file.__init__(self, name, mode)
    425 def __init__(self, name, mode):
--> 426     super().__init__(open(name, mode))
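The traceback shows that train.py builds the model without passing pretrain_path, so the hard-coded default from base_block.py is used and torch.load fails because that file does not exist on this machine. A minimal workaround sketch, assuming the constructor signature quoted above: pass an explicit pretrain_path pointing at a checkpoint that actually exists locally (the path below is hypothetical).

    # Sketch only: override the hard-coded default when constructing the model in train.py.
    local_ckpt = '/home/me/checkpoints/jx_vit_base_p16_224-80ecf9dd.pth'  # hypothetical local path

    model = TransformerClassifier(
        clip_model,
        train_set.attr_num,
        train_set.attributes,
        pretrain_path=local_ckpt,  # instead of the default '/data/jinjiandong/...' path
    )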

1125178969 commented 6 months ago

You should download the ViT-B model pre-trained on ImageNet from https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth and then modify pretrain_path.
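For reference, a minimal sketch of that fix using only the Python standard library; the destination directory is an assumption, and pretrain_path (the default in models/base_block.py, or the keyword argument) should then point at the downloaded file.

    import os
    import urllib.request

    url = ('https://github.com/rwightman/pytorch-image-models/releases/download/'
           'v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth')
    dest = os.path.expanduser('~/checkpoints/jx_vit_base_p16_224-80ecf9dd.pth')  # assumed location

    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)  # download once, then reuse

    # Then point the model at the downloaded weights, e.g.:
    # model = TransformerClassifier(clip_model, attr_num, attributes, pretrain_path=dest)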

jerryum commented 6 months ago

Thank you!