Closed — LoFiApostasy closed this 7 months ago
I started using this one and really like it: https://github.com/InternLM/xtuner/blob/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/README.md If you decide to add it, I really like their code: it auto-detects hardware and splits inference between GPUs automatically. Note that you'll need to pass the switches in a specific order (odd, but whatever):
xtuner chat /mnt/d/models/xtuner-llava-llama-3-8b-v1_1 --visual-encoder openai/clip-vit-large-patch14-336 --llava /mnt/d/models/xtuner-llava-llama-3-8b-v1_1 --prompt-template llama3_chat --image /mnt/d/Images/1.jpg
This has been added in v1.23.0.
I had to use the version that is compatible with Transformers, so it does not support those switches you mentioned.
You are awesome.