findalexli opened this issue 1 year ago
Hi, the DeepSpeed inference script was committed accidentally while I was debugging it. Please use model_vqa for now.
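For reference, a typical invocation of the model_vqa script looks like the sketch below. All paths and the checkpoint name are placeholders (not from this thread), and the exact flags may differ between releases, so check the script's argument parser in your checkout:

```shell
# Hedged sketch: run the standard model_vqa evaluation script instead of the
# accidentally committed DeepSpeed variant. Every path below is a placeholder.
python -m llava.eval.model_vqa \
    --model-path ./checkpoints/llava-llama-2-13b-chat \
    --question-file ./playground/data/questions.jsonl \
    --image-folder ./playground/data/images \
    --answers-file ./answers.jsonl
```

The script writes one JSON line per question to the answers file, which downstream evaluation scripts then consume.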
Unfortunately, I have not found that DeepSpeed inference helps with speed, so I am still investigating. If you have any insights on this, please share. Thanks!
When did you clone our code?
I cloned the code base after 5/1/23
Describe the issue
Issue:
I am trying to run inference with the provided LLaMA-2 13B chat fine-tuned model, which I downloaded from Hugging Face and placed in a checkpoints folder. I ran into a separate issue with the older model_vqa script (the image processor is None), so I switched to this script.
Command:
Log: