Closed gordonhu608 closed 2 months ago
It seems the paper reported scores using LLaMA-2, whereas the released training code guides us to use Vicuna-1.5, the same model as LLaVA. Can we assume that Vicuna-1.5 training will work smoothly with the current code?