Closed: gaowei724 closed this issue 5 months ago
Hi, if you haven't used DeepSpeed for training, then the model saved at pretrained_epoch100 should only contain the LoRA weights and projector weights; check whether that's the case. In that situation, the demo would load from MODELS/pllava-7b (which wasn't compatible with the original non-LoRA PllavaModel's from_pretrained method), so the language model would be newly initialized. It would then load the weights in pretrained_epoch100, which also doesn't contain the language model's weights.
In this case, I think you should set pretrained_model_name_or_path to llava-hf/llava-v1.6-vicuna-7b-hf if you are only doing LoRA training and projector training.
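(A quick way to confirm what was actually saved is to list the keys of the state dict in pretrained_epoch100; a minimal sketch, assuming the run produced a single pytorch_model.bin, so adjust the file name if your checkpoint is saved as safetensors or split into shards:)

import torch

# Minimal sketch (file name is an assumption; adapt to your checkpoint layout).
state_dict = torch.load("pretrained_epoch100/pytorch_model.bin", map_location="cpu")

lora_keys      = [k for k in state_dict if "lora_" in k]
projector_keys = [k for k in state_dict if "projector" in k]
other_keys     = [k for k in state_dict if "lora_" not in k and "projector" not in k]

print(f"lora tensors:      {len(lora_keys)}")
print(f"projector tensors: {len(projector_keys)}")
print(f"other tensors:     {len(other_keys)}")  # expect roughly 0 for LoRA + projector-only saves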
Thank you. Regarding your first question, I indeed did not use DeepSpeed; therefore, pretrained_epoch100 only saved the lora_language and projector weights. That is because my model training settings were as follows (and without DeepSpeed):
"model": {
"repo_id": "xxx/gaowei/Code/PLLaVA/MODELS/llava-v1.6-vicuna-7b-hf",
"pretrained_path": "xxx/Code/PLLaVA/MODELS/pllava-7b",
"load_from_origin": false,
"origin_vision": "",
"origin_llm": "",
"vision_encoder": {
"name": "vit_l14"
},
"torch_dtype": "bfloat16",
"freeze_projector": false,
"freeze_lm": false,
"freeze_vision_tower": true,
"lora_target_modules": [
"q_proj",
"v_proj"
],
"use_lora": true,
"lora_r": 128,
"lora_alpha": 32,
"lora_dropout": 0.05,
"num_frames": 16,
"pooling_method": "avg",
"use_pooling": true,
"frame_shape": [
24,
24
],
"pooling_shape": [
16,
12,
12
]
},
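(For reference, the lora_* fields above correspond roughly to a standard peft LoraConfig; this is only an illustrative sketch, not the exact code path PLLaVA uses internally:)

from peft import LoraConfig

# Illustrative only: rough peft equivalent of the lora_* fields in the config
# above; PLLaVA's own builder may set extra arguments (task_type, bias, ...).
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

Note that peft scales the adapters by lora_alpha / r (here 32 / 128 = 0.25), and the same lora_alpha has to be passed again at evaluation time, which is why the eval commands below include --lora_alpha 32.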
When training the model, I actually first loaded MODELS/llava-v1.6-vicuna-7b-hf and then loaded MODELS/pllava-7b, so my first attempt at an inference command was (trying to load the LM and vision model parameters from pllava-7b):
python tasks/eval/mvbench/pllava_eval_mvbench.py --pretrained_model_name_or_path MODELS/pllava-7b \
--save_path test_results/test_pllava_7b/mvbench --use_lora --lora_alpha 32 --num_frames 16 \
--weight_dir pretrained_epoch100 --conv_mode eval_mvbench
That is, I first loaded MODELS/pllava-7b and then loaded my trained pretrained_epoch100. Thanks to your hint, I re-examined the output log and found a message in the log stating:
Some weights of PllavaForConditionalGeneration were not initialized from the model checkpoint at MODELS/pllava-7b and are newly initialized: ['language_model.lm_head.weight' .....
This should be as you mentioned, "pllava-7b wasn't compatible with the original non-lora PllavaModel's from_pretrained method", so some of the lm parameters were re-initialized.
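(In case it is useful to confirm this programmatically, from_pretrained can also return the list of missing keys directly via the standard transformers argument output_loading_info=True; a rough sketch, where the import path is only an assumption about the repo layout:)

# Sketch only: the exact import path for PllavaForConditionalGeneration depends
# on the repo layout (assumed here to be models.pllava).
from models.pllava import PllavaForConditionalGeneration

model, loading_info = PllavaForConditionalGeneration.from_pretrained(
    "MODELS/pllava-7b",
    output_loading_info=True,
)
print(loading_info["missing_keys"][:20])     # weights that had to be newly initialized
print(loading_info["unexpected_keys"][:20])  # checkpoint weights the class did not expect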
Following your guidance, I changed the evaluation command to:
python tasks/eval/mvbench/pllava_eval_mvbench.py --pretrained_model_name_or_path MODELS/llava-v1.6-vicuna-7b-hf \
--save_path test_results/test_pllava_7b/mvbench --use_lora --lora_alpha 32 --num_frames 16 \
--weight_dir pretrained_epoch100 --conv_mode eval_mvbench
This time, the output text was no longer empty, so I think your guess was correct.
However, this command loads the entire base model exclusively from MODELS/llava-v1.6-vicuna-7b-hf, while the model I want to test was fine-tuned from pllava-7b, so this is not consistent with my training process: no pllava-7b parameters are loaded at all. Could you tell me whether MODELS/llava-v1.6-vicuna-7b-hf and pllava-7b have exactly the same LM and vision-model weights, aside from the LoRA and projector parts? If so, skipping the loading of pllava-7b is reasonable.
Yep, the base weights in llava-1.6 are the same as in pllava. We did not train the base parts of the language model and vision model.
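(If you want to double-check this on disk, one option is to diff the shared tensors of the two checkpoints; a rough sketch that assumes each directory holds a single safetensors file, whereas real HF checkpoints are usually sharded and would need the shards iterated:)

import torch
from safetensors.torch import load_file

# Rough sketch: compare the base tensors the two checkpoints have in common,
# skipping projector / LoRA additions. The file names are assumptions.
sd_llava  = load_file("MODELS/llava-v1.6-vicuna-7b-hf/model.safetensors")
sd_pllava = load_file("MODELS/pllava-7b/model.safetensors")

shared = [k for k in sd_llava
          if k in sd_pllava and "projector" not in k and "lora_" not in k]
mismatched = [k for k in shared if not torch.equal(sd_llava[k], sd_pllava[k])]
print(f"shared base tensors: {len(shared)}, mismatched: {len(mismatched)}")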
I do full fine-tuning on all layers of the network, including the vision encoder, projection layer, and language model. I don't use DeepSpeed. In the checkpoint_dir, I get .pt files as follows: How can I use my after-training weights?
Thx.
Hello, I have encountered a problem similar to issue 43. I used the author-provided pllava-7b as the pre-trained model and continued fine-tuning on my own prepared video Q&A dataset (during training, I activated the projector and LM but froze the vision model, using the default LoRA configuration). Throughout the training process, the video loss consistently decreased. After training with the original code, I found that I get three kinds of folders, such as ckpt_epoch100, pretrained_epoch100, and the folder pretrained_step100. Reading the code, I suspect that ./pretrained_epoch100 contains the saved projector and the language model's LoRA parameters. I executed the following command for evaluation on MVBench:
However, I found that the accuracy is always 0, as the output text is consistently empty. After some debugging and checking, I verified my LoRA configuration (the same as during training, lora_alpha=32), and I suspected that my LoRA training had collapsed, so I tried setting lora_alpha to 0, but that was of no use. I'm confused and don't know whether my training has crashed or whether I've forgotten some critical hyperparameter or misunderstood the evaluation process. Can you give me some clues?
My training configuration is as follows:
My inference evaluation command is as follows:
The results of the evaluation are as follows:
The training tensorboard log: