tsb0601 / MMVP


Does the downloaded model need to be placed in a folder with a special name (for evaluation)? #2

Closed Pro-flynn closed 5 months ago

Pro-flynn commented 6 months ago

Does the MoF_Models checkpoint downloaded from Hugging Face (https://huggingface.co/MMVP/MoF_Models/tree/main) need to be placed in a folder whose name contains "llava"? In other words, can the model-path passed to evaluate_mllm.py be an arbitrary directory name? This seems to depend on how the model is loaded: https://github.com/tsb0601/MMVP/blob/main/LLaVA/llava/model/builder.py#L26
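For context, the linked line is the kind of name check LLaVA-style loaders perform: the multimodal model class is only used when the model name contains "llava". The sketch below is a simplified paraphrase under that assumption, not the exact MMVP code; see the linked builder.py for the real logic.

```python
# Simplified paraphrase of the model-name branch in a LLaVA-style builder.py
# (illustrative only; consult the linked builder.py for the actual code).
from llava.model import LlavaLlamaForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_pretrained_model(model_path, model_base, model_name):
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
    if "llava" in model_name.lower():
        # Multimodal branch: only taken when the folder/model name contains "llava".
        model = LlavaLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
    else:
        # Otherwise the checkpoint is treated as a plain language model,
        # so the vision-related weights would not be wired up.
        model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
    return tokenizer, model
```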

Pro-flynn commented 6 months ago

@tsb0601

tsb0601 commented 6 months ago

Hi,

Yes, you can download the model and load it from any path, as long as the path name contains "llava".

Sincerely

Peter
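Concretely, loading from such a directory might look like the snippet below. This assumes the standard LLaVA helpers vendored in this repo (llava.model.builder.load_pretrained_model and llava.mm_utils.get_model_name_from_path); the local path name is hypothetical.

```python
# Illustrative usage, assuming the vendored LLaVA loader helpers.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Any directory works as long as its name contains "llava",
# e.g. the folder the MoF_Models checkpoint was downloaded into.
model_path = "./llava_mof_models"  # hypothetical local path

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```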

Pro-flynn commented 6 months ago

I downloaded the pretrained weights from Hugging Face (https://huggingface.co/MMVP/MoF_Models) and put them in a directory named "llava_pretrain_model". When I run the evaluation code (https://github.com/tsb0601/MMVP/blob/main/scripts/evaluate_mllm.py), I get warnings while loading the pretrained model saying that the model and the pretrained weights do not match. Is this normal? The warning is as follows:

Some weights of the model checkpoint at llava_pretrain_model were not used when initializing LlavaLlamaForCausalLM: ['model.dino_tower.clip_vision_tower.vision_model.encoder.layers.6.mlp.fc1.bias', 'model.dino_tower.vision_tower.blocks.5.ls2.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.17.mlp.fc2.weight', 'model.dino_tower.vision_tower.blocks.0.attn.proj.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.11.layer_norm2.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.11.layer_norm1.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.11.mlp.fc2.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.13.layer_norm2.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.9.mlp.fc1.weight', 'model.vision_tower.vision_tower.vision_model.encoder.layers.0.mlp.fc2.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.17.layer_norm1.bias', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.7.mlp.fc1.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.8.mlp.fc2.bias', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.2.mlp.fc1.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.4.layer_norm2.bias', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.19.mlp.fc2.bias', 'model.dino_tower.vision_tower.blocks.15.attn.proj.weight', 'model.dino_tower.vision_tower.blocks.0.mlp.fc2.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.0.mlp.fc2.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.1.mlp.fc1.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.13.layer_norm2.weight', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.bias', 'model.dino_tower.vision_tower.mask_token', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.bias', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.4.layer_norm1.bias', 'model.vision_tower.vision_tower.vision_model.encoder.layers.3.mlp.fc2.weight', 'model.dino_tower.vision_tower.blocks.8.mlp.fc2.bias', 'model.dino_tower.clip_vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.weight',....
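If it helps to see exactly which checkpoint keys were skipped (and which model weights were left uninitialized), Transformers can report that directly via output_loading_info. The snippet below is a generic diagnostic sketch; the LlavaLlamaForCausalLM import path assumes the LLaVA package vendored in this repo.

```python
# Diagnostic sketch: report which checkpoint keys were not consumed
# ("unexpected") and which model weights had no checkpoint entry ("missing").
# output_loading_info=True is a standard Transformers from_pretrained option.
from llava.model import LlavaLlamaForCausalLM

model, loading_info = LlavaLlamaForCausalLM.from_pretrained(
    "llava_pretrain_model",
    output_loading_info=True,
)
print("unused checkpoint keys:", len(loading_info["unexpected_keys"]))
print("uninitialized model keys:", len(loading_info["missing_keys"]))
```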

Pro-flynn commented 6 months ago

@tsb0601

tsb0601 commented 6 months ago

Hi, yes, you can ignore the warnings here. We'll also fix these warning messages in the next version.
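Until then, one way to silence the message (if it clutters the evaluation logs) is to lower the Transformers logging verbosity before the model is loaded. This is a standard Transformers setting, not something specific to this repo.

```python
# Optional: suppress the "Some weights ... were not used" message by
# lowering the Transformers log level before calling from_pretrained.
from transformers import logging as hf_logging

hf_logging.set_verbosity_error()
```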