clownrat6 opened this issue 7 months ago
I tried to finetune llava-v1.6-mistral-7b with mistral_instruct
template, but the output was not in the expected format. Have you figured out what template llava-v1.6-mistral-7b uses?
Did you solve it? What version did you use for pretraining and finetuning, by the way?
The codebase does not support LLaVA 1.6 training, and I haven't solved it yet, but I'm going to work on this in the coming days. I used the latest code to finetune llava-v1.6-mistral-7b.
I think the llava-v1.6-mistral-7b model uses the llava_llama_2 conversation template. You can try it out!
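If that template is the right one, it would be selected at training time through the training script's `--version` flag. A sketch of such a launch command, assuming LLaVA's standard finetune setup; the model path is the public checkpoint name, while the data path, output directory, and omitted flags are placeholders:

```shell
# Sketch: picking the conversation template when finetuning.
# --version selects the conversation template (llava_llama_2, as suggested above).
# Data path and output dir are placeholders; other required flags are omitted.
deepspeed llava/train/train_mem.py \
    --model_name_or_path liuhaotian/llava-v1.6-mistral-7b \
    --version llava_llama_2 \
    --data_path ./path/to/finetune_data.json \
    --output_dir ./checkpoints/llava-v1.6-mistral-7b-finetune
```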
Description
I wrote an inference script like this:
If I run the command `python inference.py mistral_instruct`, this code generates empty output. If I run the command `python inference.py llava_v1`, this code generates normal output:
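The two templates produce very different prompt strings, which is likely why the wrong one yields empty or malformed output. A minimal self-contained sketch of the two single-turn formats; the strings are simplified approximations of what LLaVA's `conversation.py` builds, not the library's actual code:

```python
# Sketch: approximate single-turn prompt formats for two LLaVA
# conversation templates. Simplified approximations, not exact output.

def build_prompt(template: str, user_msg: str) -> str:
    """Build a single-turn prompt in the style of the given template."""
    if template == "mistral_instruct":
        # Mistral-style instruction wrapping: [INST] ... [/INST]
        return f"[INST] {user_msg} [/INST]"
    elif template == "llava_v1":
        # Vicuna-style USER/ASSISTANT turns with a system preamble
        system = ("A chat between a curious user and an artificial "
                  "intelligence assistant.")
        return f"{system} USER: {user_msg} ASSISTANT:"
    raise ValueError(f"unknown template: {template}")

print(build_prompt("mistral_instruct", "Describe the image."))
print(build_prompt("llava_v1", "Describe the image."))
```

A model finetuned on one of these formats will often emit nothing (or garbage) when prompted in the other, since the stop tokens and role markers no longer line up.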