guinmoon / LLMFarm

Run llama and other large language models offline on iOS and macOS using the GGML library.
https://llmfarm.site
MIT License
1.05k stars · 62 forks

Unable to support the Qwen model well #63

Open sonica1987 opened 1 month ago

sonica1987 commented 1 month ago

[Screenshots: IMG_5772, IMG_5771, IMG_5770]

I tried Qwen 0.5B, 1.8B, 4B, and 7B, and only 0.5B worked properly. Even then, the results were not as good as the official demo: the output included the prompt's `<|im_start|>` marker, which I believe affects the final inference result. I hope LLMFarm can be improved to better support the Qwen series of models. Thank you.

sonica1987 commented 1 month ago

This may be because the model I downloaded from HF was converted with an old version of llama.cpp; the problem seems to have been fixed in the newer version:

https://github.com/ggerganov/llama.cpp/issues/4331
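
For anyone hitting the same thing, reconverting with a current llama.cpp checkout looks roughly like the sketch below. This is a hedged example: the local paths and the intermediate f16 file name are placeholders, not the exact commands from this thread.

```sh
# Sketch: re-export a Qwen chat model to GGUF with a recent llama.cpp.
# /path/to/Qwen-0.5B-Chat and the f16 file name are illustrative placeholders.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make

# Convert the Hugging Face checkpoint to an f16 GGUF file.
python3 convert-hf-to-gguf.py /path/to/Qwen-0.5B-Chat \
  --outtype f16 --outfile qwen_0.5b_chat-f16.gguf

# Quantize to Q4_K_M, matching the file used later in this thread.
./quantize qwen_0.5b_chat-f16.gguf qwen_0.5b_chat-Q4_K_M.gguf Q4_K_M
```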

guinmoon commented 1 month ago

Can you tell me if reconverting solved your problem?

sonica1987 commented 1 month ago

> Can you tell me if reconverting solved your problem?

Hello, I have re-exported the GGUF model using llama.cpp, but the issue has not been resolved. Here is the prompt template I tried:

<|im_start|>system
{{You are a helpful assistant.}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant

The following is llama.cpp in interactive mode, using: `./main -m ./models/gguf/qwen_0.5b_chat-Q4_K_M.gguf -n 128 -i --chatml`

== Running in interactive mode. ==

<|endoftext|><|im_start|>system
<|im_end|>
<|im_start|>user

Hello Hello! How can I assist you today?<|im_end|>

你好 你好！有什么我可以帮助您的吗？<|im_end|> [user: "Hello"; reply: "Hello! Is there anything I can help you with?"]

提取 您好！我需要帮助提取什么信息？<|im_end|> [user: "Extract"; reply: "Hello! What information do you need help extracting?"]

晚安 晚安，愿您有一个美好的夜晚！<|im_end|> [user: "Good night"; reply: "Good night, may you have a wonderful evening!"]

返回 好的，我将返回您的信息。<|im_end|> [user: "Return"; reply: "OK, I will return your information."]

In llama.cpp, the converted model works very well.

guinmoon commented 1 month ago

Try this template:

[system](<|im_start|>system
You are a helpful assistant.<|im_end|>)
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
sonica1987 commented 1 month ago

> Try this template:
>
> [system](<|im_start|>system
> You are a helpful assistant.<|im_end|>)
> <|im_start|>user
> {{prompt}}<|im_end|>
> <|im_start|>assistant

Same problem, not working. Even when `<|im_start|>` is not used in the template, the inference result still contains `<|im_start|>`.

The problem is easy to reproduce when asking questions with irregular, short prompts.

guinmoon commented 1 month ago

I think it's all down to the --chatml flag. When --chatml is specified, llama.cpp's ./main performs a special check for tokens such as <|im_start|>. Apparently, the template alone does not cure it. I will try to add a similar option in the new version of LLMFarm.
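
For readers following along, here is a hedged approximation of what --chatml appears to set up, expressed with ./main's generic flags. The exact prefix/suffix values are my assumption about the ChatML wrapping, not taken from llama.cpp's source:

```sh
# Assumed approximation of --chatml (not verified against main.cpp):
# interactive mode, ChatML in-prefix/in-suffix around each user turn,
# and a reverse prompt so generation stops when <|im_start|> appears.
./main -m ./models/gguf/qwen_0.5b_chat-Q4_K_M.gguf -n 128 -i \
  --in-prefix $'<|im_start|>user\n' \
  --in-suffix $'<|im_end|>\n<|im_start|>assistant\n' \
  -r "<|im_start|>"
```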

guinmoon commented 1 month ago

It looks like the <|im_start|> token does not appear with this prompt format. Try this one; note that the first and last lines are empty.


<|im_start|>user
{{prompt}}<|im_end|>

<|im_start|>assistant
sonica1987 commented 1 month ago

> It looks like the <|im_start|> token does not appear with this prompt format. Try this one; note that the first and last lines are empty.
>
> <|im_start|>user
> {{prompt}}<|im_end|>
>
> <|im_start|>assistant

I am testing with the TestFlight build of the app, and the issue is still there. I am currently unable to compile this project myself, so I apologize for not being able to assist further.

guinmoon commented 1 month ago

Also try adding <|im_start|> to the reverse prompt.
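
(In llama.cpp's ./main the analogous knob is the -r / --reverse-prompt flag; in LLMFarm it is the reverse-prompt field in the chat settings. A minimal illustration with the same model file as above:)

```sh
# Treat <|im_start|> as a stop sequence so it never leaks into the visible reply.
./main -m ./models/gguf/qwen_0.5b_chat-Q4_K_M.gguf -n 128 -i -r "<|im_start|>"
```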

sonica1987 commented 1 month ago

> <|im_start|>

[Screenshots: IMG_5779, IMG_5778] ☹️

yangtuo250 commented 2 weeks ago

[Attachment: Qwen1.5.json]

[Screenshots: IMG_79C40CFA90F6-1, IMG_70BE81B01BCC-1]