Open signine opened 1 month ago
Sorry, we merged a PR yesterday and it was problematic. We just rolled back. Could you pull and try again?
I'm also using torch 2.0.1+cu118 and flash attention 2.4.2, and got this error:
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
Traceback (most recent call last):
File "/MarineAI/Nvidia-VILA/VILA/llava/eval/run_vila.py", line 154, in
Are you using llama3? If so, you need to pass `--conv-mode=llama_3`.
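For reference, a full invocation might look like the sketch below. Only the script path `llava/eval/run_vila.py` and the `--conv-mode llama_3` flag are confirmed by this thread; the model path, query, and image file are placeholders you would replace with your own.

```shell
# Hedged sketch: pass --conv-mode llama_3 when running a Llama 3 based VILA
# checkpoint. Model path, query, and image file below are illustrative only.
python -W ignore llava/eval/run_vila.py \
    --model-path Efficient-Large-Model/Llama-3-VILA1.5-8b \
    --conv-mode llama_3 \
    --query "<image>\n Describe this image." \
    --image-file "demo.png"
```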
Sorry, I did not pay attention to this parameter... it works now. Thanks a lot!
@Efficient-Large-Language-Model Pulling the latest code worked for me. Thank you!
I get the following error while running `llava/eval/run_vila.py` on an H100 GPU. Torch version is `2.0.1+cu118` and flash attention is `2.4.2`.
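Since the thread pins a specific combination (torch 2.0.1+cu118 with flash-attn 2.4.2), a quick way to confirm what your environment actually has installed is a sketch like this; it only prints versions and makes no assumptions about the VILA codebase.

```shell
# Print the installed torch / flash-attn versions so they can be compared
# against the combination reported in this thread.
python - <<'EOF'
import importlib.metadata

for pkg in ("torch", "flash-attn"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        # Package missing entirely; report it rather than crash.
        print(pkg, "not installed")
EOF
```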