See #605
Let's close this in favor of #605.
python generate.py \
--model llava-hf/llava-v1.6-mistral-7b-hf \
--image "/Users/omkarpatil/Downloads/maxresdefault.jpg" \
--prompt "USER: <image>\nWhat is this image about?\nASSISTANT:" \
--max-tokens 128 \
--temp 0
ValueError: Model type mistral not supported. Currently only 'llama' is supported
I don't think the issue is resolved. I am on the May 8 commit (fad959837232b0b4deeaa29b147dc69cbc9c5d19).
I don't think we ever added it. But you should check out MLX VLM by @Blaizzy: https://github.com/Blaizzy/mlx-vlm
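For anyone landing here, mlx-vlm is published on PyPI, so getting set up should just be:

pip install mlx-vlm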
Thanks for the shoutout @awni!
@katopz we are extending support to all VLMs.
Currently we support LLaVA, Idefics 2, and nanoLLaVA, with many more coming, such as Qwen-VL, Bunny, and InternVL, including LLaVA-NeXT, which that model is based on. A rough usage sketch follows below.
If you are interested, you can send us a PR; I will be there to help if needed :)
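As a rough sketch, running one of the currently supported models (nanoLLaVA here; the qnguyen3/nanoLLaVA repo id and the mlx_vlm.generate flags mirror the command above and the mlx-vlm README at the time, and may differ in newer versions) might look like:

python -m mlx_vlm.generate \
--model qnguyen3/nanoLLaVA \
--image "/Users/omkarpatil/Downloads/maxresdefault.jpg" \
--prompt "What is this image about?" \
--max-tokens 128 \
--temp 0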
Currently we do not support
llava-hf/llava-v1.6-mistral-7b-hf
It would be nice if we supported this one.