Open gn64 opened 3 weeks ago
It's definitely possible to support them.
We will include them as soon as possible. But again, be aware that for the bigger models (34b, 72b) we might not be able to actually run/test them ourselves.
Hi, just a continuation of the question above: is it possible to finetune llava-next-qwen-32b? If so, when can I expect it to be supported in this repo? Or, if you could point me in the direction of what changes need to be made, I can do it myself. Thanks!
@shamanthak-hegde Do you mean llava-next-video-qwen-34b? If so, I imagine it's almost the same as the 7b: you can simply add the model identifier in supported_models.py and that should be it. Everything else should already be implemented together with the 7b model.
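For anyone trying this, the change would presumably look something like the sketch below. Note that the registry name `SUPPORTED_MODELS` and the exact model identifiers are assumptions for illustration; check the actual contents of supported_models.py in this repo before editing.

```python
# supported_models.py (hypothetical sketch -- the real file in this repo
# may use a different structure or different identifier strings).

# Registry of Hugging Face model identifiers the training code accepts.
SUPPORTED_MODELS = [
    "llava-hf/LLaVA-NeXT-Video-7B-hf",  # already supported
]

# Per the reply above, supporting the 34b variant should be a one-line
# addition, since it shares its architecture with the 7b model.
SUPPORTED_MODELS.append("llava-hf/LLaVA-NeXT-Video-34B-hf")
```

The finetuning code would then recognize the larger model without any other changes, assuming the architecture really does match the 7b variant.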
Got it, that works. Thank you, I appreciate the quick reply!
Thank you for your excellent work. I believe llava-1.6 currently supports the 7b/13b models; do you have any plans to expand this to larger models (such as llava-hf/llava-v1.6-34b-hf, llava-hf/llama3-llava-next-8b-hf, or llava-hf/llava-next-72b-hf)?