Open Nastu-Ho opened 2 months ago
No description provided.
Maybe you can contribute this part. All you need to do is add a llava_qwen2.py, the corresponding conv_mode, and a preprocess_qwen2 function in train.py to handle the corresponding loss mask.
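For reference, here is a minimal sketch of the masking logic such a preprocess_qwen2 could implement. The function name and the IGNORE_INDEX = -100 convention follow LLaVA's train.py; the whitespace "tokenizer" is a stand-in for the real Qwen2 tokenizer, and the ChatML `<|im_start|>`/`<|im_end|>` layout follows Qwen2's chat format. This is an assumption-laden sketch, not the project's actual code:

```python
IGNORE_INDEX = -100  # label value skipped by cross-entropy, as in LLaVA's train.py


def toy_tokenize(text):
    """Stand-in for the real Qwen2 tokenizer: one token per whitespace piece."""
    return text.split()


def preprocess_qwen2(conversation):
    """Build ChatML-style input ids and labels for one conversation.

    `conversation` is a list of {"role": ..., "content": ...} dicts.
    Only assistant turns keep real labels; system/user turns are masked
    with IGNORE_INDEX so the model is not trained to echo the prompt.
    """
    input_ids, labels = [], []
    for turn in conversation:
        # Qwen2 chat format: <|im_start|>role\ncontent<|im_end|>
        segment = f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>"
        tokens = toy_tokenize(segment)
        input_ids.extend(tokens)
        if turn["role"] == "assistant":
            labels.extend(tokens)  # supervise the model's reply
        else:
            labels.extend([IGNORE_INDEX] * len(tokens))  # mask the prompt
    return input_ids, labels


conv = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]
ids, labels = preprocess_qwen2(conv)
```

A mismatch here (e.g. masking one token too few, so the `<|im_start|>assistant` header leaks into the loss) is exactly the kind of subtle preprocessing bug that degrades fine-tuned results without any visible training error.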
I have tried using Qwen2-7B-Instruct as the LLM, but found that the fine-tuned results were not as expected. The preprocessing may not have been done correctly. I am now trying to find a public solution.
I have conducted experiments on Qwen2-0.5B and everything is fine.