Open yviansu opened 4 days ago
Error detail
When running chat_sample.exe using Llama-2-7b-chat-hf, it seems that it always has a built-in question:

![image](https://github.com/openvinotoolkit/openvino.genai/assets/103162767/93e8d38f-e209-46bc-b4f9-08b143780ba6)

Steps to reproduce
Export the Llama-2-7b-chat-hf model with this command:

optimum-cli export openvino --model "meta-llama/Llama-2-7b-chat-hf" --trust-remote-code "meta-llama/Llama-2-7b-chat-hf"

Then run the sample:

chat_sample.exe C:\models\Llama-2-7b-chat-hf

By the way, I also tested chat_sample.exe with TinyLlama-1.1B-Chat-v1.0, and it works well:

![image](https://github.com/openvinotoolkit/openvino.genai/assets/103162767/74cb3e71-7bf4-43bd-9154-1881ade13af2)
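For context, Llama-2-chat models expect the `[INST]`-style prompt format, optionally with a `<<SYS>>` system block, while TinyLlama-1.1B-Chat uses a different chat template. If the sample (or the tokenizer's chat template) injects a default system prompt or an extra turn, that could surface as an apparent "built-in question". Below is a minimal sketch of the Llama-2 single-turn prompt format; the helper name is hypothetical and not taken from the openvino.genai source:

```python
def build_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a single user turn in Llama-2's [INST] chat format.

    Illustrative only: the real formatting is normally applied by the
    tokenizer's chat template, not hand-built like this.
    """
    if system_prompt:
        sys_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    else:
        sys_block = ""
    return f"<s>[INST] {sys_block}{user_message} [/INST]"


print(build_llama2_prompt("Hello"))
# -> <s>[INST] Hello [/INST]
```

Comparing the prompt the sample actually sends against this expected shape may help confirm whether an unwanted system prompt or turn is being prepended for Llama-2.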