PKU-YuanGroup / MoE-LLaVA

Mixture-of-Experts for Large Vision-Language Models
https://arxiv.org/abs/2401.15947
Apache License 2.0

Inference without Deepspeed #40

Open aaronnat23 opened 8 months ago

aaronnat23 commented 8 months ago

Describe the issue

I am having issues running inference with the model after installing DeepSpeed on Windows; my configuration is listed below. Is there any way to perform inference with just a Python script? I tried to call python moellava/serve/cli.py directly, but the code still appears to require DeepSpeed. I have no such issues when running inference with the base LLaVA model.

To Reproduce

deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "other/test2.jpg"

When I run the above command, I get this error:

'deepspeed' is not recognized as an internal or external command,

Even after adding the DeepSpeed library path from the conda environment to PATH, I still get this error.
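One thing worth checking (a sketch, not an official fix): on Windows the `deepspeed` console script is often not generated or not on PATH even when the package itself imports fine, which produces exactly this "'deepspeed' is not recognized" error. DeepSpeed's CLI entry point is the `deepspeed.launcher.runner` module, so a fallback is to invoke that module with the current interpreter. The helper below builds the launch command either way; the fallback assumes DeepSpeed is importable from the active conda environment.

```python
import shutil
import sys

def deepspeed_cmd(script_args):
    """Build a DeepSpeed launch command.

    Falls back to `python -m deepspeed.launcher.runner` when the
    'deepspeed' console script is not on PATH (common on Windows).
    """
    exe = shutil.which("deepspeed")
    if exe is not None:
        return [exe] + script_args
    # Fallback: run the launcher module with the current interpreter.
    # (Assumes deepspeed is importable from the active environment.)
    return [sys.executable, "-m", "deepspeed.launcher.runner"] + script_args

cmd = deepspeed_cmd([
    "--include", "localhost:0",
    "moellava/serve/cli.py",
    "--model-path", "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e",
    "--image-file", "other/test2.jpg",
])
print(" ".join(cmd))
```

Equivalently, from a shell you could try `python -m deepspeed.launcher.runner --include localhost:0 moellava/serve/cli.py ...` directly; if that import fails too, the problem is the DeepSpeed install itself rather than PATH.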

System info

aaronnat23 commented 8 months ago

When I run run_llava.py, I get this error:

python -m moellava.eval.run_llava --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "other/test2.jpg" --query "whats in this image"

(error screenshot attached)