Open yumianhuli1 opened 10 months ago
Did you happen to fix this? I am actually having the same issue.
no
I think if you put the word "llava" in your model checkpoint name it should work. I tried debugging it, and that was the only solution that worked for me.
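For context, here is a minimal sketch of why the rename helps, assuming the loader gates processor construction on a case-insensitive "llava" substring check of the model name, as LLaVA-style `load_pretrained_model` implementations do (the exact branch in this repo may differ):

```python
# Hypothetical sketch of the name check a LLaVA-style loader applies.
# If "llava" is not in model_name, the multimodal branch is skipped,
# the image/video processor is never built, and None is returned for it.
def builds_processor(model_name: str) -> bool:
    # The loader branches on this substring test (case-insensitive).
    return "llava" in model_name.lower()

print(builds_processor("Video-LLaVA-7B"))  # True  -> processor is built
print(builds_processor("model"))           # False -> processor stays None
```

So a checkpoint folder whose name contains "llava" passes the check, while a generically named folder silently falls into the text-only path.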
Hello! @LinB203 I used the 'Inference for video' code from the README, but got:
```
Loading checkpoint shards: 100%|████████████████████████████████████████| 2/2 [00:06<00:00, 3.17s/it]
Some weights of the model checkpoint at model were not used when initializing LlavaLlamaForCausalLM:
['model.image_tower.image_tower.encoder.layers.15.self_attn.k_proj.weight',
 'model.video_tower.video_tower.encoder.layers.15.mlp.fc1.weight',
 'model.image_tower.image_tower.encoder.layers.17.layer_norm1.bias',
 'model.image_tower.image_tower.encoder.layers.7.layer_norm2.weight',
 'model.video_tower.video_tower.encoder.layers.22.self_attn.q_proj.weight',
 'model.video_tower.video_tower.encoder.layers.10.self_attn.k_proj.weight',
 'model.image_tower.image_tower.encoder.layers.11.self_attn.k_proj.weight',
 'model.video_tower.video_tower.encoder.layers.9.self_attn.q_proj.weight',
 'model.video_tower.video_tower.encoder.layers.11.temporal_attn.v_proj.bias',
 ...
 'model.video_tower.video_tower.encoder.layers.23.mlp.fc1.bias',
 'model.image_tower.image_tower.encoder.layers.13.layer_norm1.weight',
 'model.video_tower.video_tower.encoder.layers.11.layer_norm1.bias']
```
It seems to have read the model correctly, but when executing `tokenizer, model, processor, _ = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device, cache_dir=cache_dir)`, `processor` comes back as `None`.
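Note that the log above says "checkpoint at model", i.e. the checkpoint folder is literally named `model`. LLaVA-style repos derive `model_name` from the last component of the checkpoint path (via a helper like `get_model_name_from_path` in `mm_utils`), so a generic folder name never contains "llava". A hedged reimplementation of that derivation, under the assumption it matches this repo's helper:

```python
# Hedged sketch of how model_name is derived from a checkpoint path in
# LLaVA-style code (the real helper is get_model_name_from_path; its
# exact behavior may differ in this repo).
def model_name_from_path(model_path: str) -> str:
    model_path = model_path.rstrip("/")
    parts = model_path.split("/")
    # For intermediate training checkpoints, keep the parent folder name too.
    if parts[-1].startswith("checkpoint-"):
        return parts[-2] + "_" + parts[-1]
    return parts[-1]

# A folder named "model" yields model_name == "model", which fails the
# "llava" substring check, so the processor is returned as None.
print(model_name_from_path("/path/to/model"))           # model
print(model_name_from_path("/path/to/Video-LLaVA-7B"))  # Video-LLaVA-7B
```

This is consistent with the workaround above: renaming the checkpoint directory so it contains "llava" makes the derived `model_name` pass the check and the processor gets built.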