Closed ChuanhongLi closed 9 months ago
Hello, thank you for expressing interest in our work! Since Llama-2-7b-hf has not been instruction-tuned, it isn't ideal for chatbot applications; we recommend instruction-tuned models such as Vicuna or Llama-2-7b-chat-hf for that purpose.
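One practical difference is that the chat variants were trained on a specific prompt template, while the base model was not, which is part of why the base model behaves poorly in chatbot-style use. Below is a minimal sketch of the published Llama-2 chat format; the helper function name is our own, not part of any library.

```python
# Sketch of the Llama-2 chat prompt template (system + single user turn).
# The template string follows Meta's published chat format; the helper
# name `build_llama2_chat_prompt` is hypothetical, for illustration only.

def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system and user message in the Llama-2 chat template."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_chat_prompt(
    "You are a helpful assistant.",
    "What is speculative decoding?",
)
print(prompt)
```

Chat checkpoints such as Llama-2-7b-chat-hf expect input in this shape, whereas the base Llama-2-7b-hf was only trained on plain continuation, so feeding it chat-style prompts tends to produce the off-topic output described below.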
Thank you for your reply. One more question: does Figure 10 in your paper also use instruction-tuned Llama-2-7b (Llama-2-13b)?
Figure 10 reports efficiency results; instruction-tuned models (*-chat) and base models give identical results there.
First of all, thanks for releasing the excellent work! I have some questions running the example you provided. I use the command:
And I get the following results:
It seems that it does not work well. Is anything wrong with my test? Should I change something to get the right results?
And when using lmsys/vicuna-13b-v1.3 as the model, the results seem fine.
Thanks!