Describe the bug
I am having issues running inference with the model after installing DeepSpeed on Windows. My configuration is listed below. Is there any way to perform inference with just a plain Python script? I tried calling python moellava/serve/cli.py directly, but the code still appears to require DeepSpeed. I have no such issues when running inference with the base LLaVA model.
To Reproduce
deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "other/test2.jpg"
When I run the above, I get this error:
'deepspeed' is not recognized as an internal or external command,
Even after adding the DeepSpeed scripts directory from my conda environment to PATH, I still get this error.
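One workaround I have been considering (assuming the deepspeed console script is just an entry point for the deepspeed.launcher.runner module, which may differ in a given install) is to invoke the launcher module through Python directly, since Windows installs may not create a deepspeed executable:

```shell
# Hypothetical workaround: run the DeepSpeed launcher as a module instead of
# relying on a `deepspeed` command being on PATH (often missing on Windows).
python -m deepspeed.launcher.runner --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "other/test2.jpg"
```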
System info