Closed: Subarasheese closed this issue 1 year ago
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
I have the same issue. Does anyone know whether there is a fix for this?
Describe the bug
Hello,
I downloaded this model: https://huggingface.co/TheBloke/llava-v1.5-13B-AWQ
I am trying to use the 'multimodal' extension and load the model with the AutoAWQ loader, using this command:
python3 server.py --model TheBloke_llava-v1.5-13B-AWQ --multimodal-pipeline llava-llama-2-13b --loader AutoAWQ
Chat works fine; however, when I upload an image, it fails. See the logs below.
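As a point of comparison, the AWQ checkpoint itself can be loaded directly with the standalone autoawq package, outside the web UI. The snippet below is only a minimal sketch (the local path and prompt are illustrative), and it exercises just the text path, which is the part that already works here; the image failure happens in the multimodal pipeline layered on top of it:

```python
# Minimal sanity check of the AWQ checkpoint outside the web UI, assuming the
# standalone autoawq package is installed and the model has been downloaded
# locally (the path below is illustrative). This only covers plain text
# generation, not the multimodal/image path.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "models/TheBloke_llava-v1.5-13B-AWQ"  # adjust to your local path

# Load the AWQ-quantized weights
model = AutoAWQForCausalLM.from_quantized(
    model_path,
    fuse_layers=True,
    safetensors=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

prompt = "USER: Hello, who are you?\nASSISTANT:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```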
Is there an existing issue for this?
Reproduction
python3 server.py --model TheBloke_llava-v1.5-13B-AWQ --multimodal-pipeline llava-llama-2-13b --loader AutoAWQ
Screenshot
No response
Logs
System Info