modelscope / agentscope

Start building LLM-empowered multi-agent applications in an easier way.
https://doc.agentscope.io/
Apache License 2.0

[HOTFIX] Fix the error in format function by adding system message #472

Closed. DavdGao closed this PR 1 month ago.

DavdGao commented 1 month ago

Description

I found that the current format strategy leads to a misunderstanding.

For example, given the following formatted message, the API provider automatically inserts its own default system prompt, and the LLM then refuses to act as the requested role, e.g. "Friday":

prompt = [
    {"role": "user", "content": "You're a helpful assistant named Friday\n#Conversation History\nuser:Hello!"}
]

With the above prompt, Qwen-max responds with "I'm Qwen, not Friday."

Solution

We check whether there is a system prompt and, if so, place it at the beginning of the formatted prompt as a separate system message.

prompt = [
    {"role": "system", "content": "You're a helpful assistant named Friday"},
    {"role": "user", "content": "#Conversation History\nuser:Hello!"}
]
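The fix described above can be sketched roughly as follows. This is an illustrative helper, not AgentScope's actual implementation; the function name `format_prompt` and the message structure are assumptions for the example.

```python
# Hypothetical sketch of the fix; `format_prompt` and the message
# structure are illustrative, not AgentScope's actual API.
def format_prompt(messages):
    """Hoist the system message (if any) to the front as its own entry,
    and merge the remaining messages into a single user message."""
    system_msgs = [m for m in messages if m["role"] == "system"]
    others = [m for m in messages if m["role"] != "system"]

    # Flatten the non-system messages into a conversation-history string.
    history = "\n".join(f'{m["role"]}: {m["content"]}' for m in others)

    prompt = []
    if system_msgs:
        # Keep the system prompt as a dedicated message so the API
        # provider does not prepend its own default system prompt.
        prompt.append({"role": "system", "content": system_msgs[0]["content"]})
    prompt.append({"role": "user", "content": "#Conversation History\n" + history})
    return prompt


messages = [
    {"role": "system", "content": "You're a helpful assistant named Friday"},
    {"role": "user", "content": "Hello!"},
]
print(format_prompt(messages))
```

With this shape, the role instruction travels in the `system` slot rather than being buried inside the user turn, which is what avoids the provider-injected default system prompt described above.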

Checklist

Please check the following items before the code is ready to be reviewed.

DavdGao commented 1 month ago

LGTM, but I am not sure whether all LLM services support setting system prompts (i.e., providing {"role": "system", "content": "xxx"}).

The model APIs involved in this PR are Ollama, DashScope, LiteLLM, Yi and Zhipu. We have confirmed that Ollama, DashScope, Yi and Zhipu support system prompts, while LiteLLM handles the system prompt within its own library (if the underlying model does not support it, LiteLLM converts the system message into a user message).
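For backends that reject {"role": "system", ...}, the fallback mentioned for LiteLLM can be sketched as below. This is a simplified illustration of the idea (downgrading the system message to a user message), not LiteLLM's actual code; the name `downgrade_system_message` is hypothetical.

```python
# Illustrative fallback for backends without system-role support:
# convert a leading system message into a plain user message,
# similar in spirit to what LiteLLM reportedly does internally.
# `downgrade_system_message` is a hypothetical name for this sketch.
def downgrade_system_message(prompt):
    if not prompt or prompt[0]["role"] != "system":
        # Nothing to do: no leading system message.
        return prompt
    system, rest = prompt[0], prompt[1:]
    # Re-emit the system content as the first user turn.
    return [{"role": "user", "content": system["content"]}] + rest
```

Applying such a fallback only when the provider lacks system-role support keeps the formatted prompt valid everywhere while preserving the role instruction for providers that do support it.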