According to the official Ollama documentation, there is little difference between the templates of llama3 and llama3.1 when there is only a single message whose role is set to "system".
We successfully tested Llama2, Llama3, Qwen:0.5, and Phi with the role changed from "system" to "user" in #443.
Regarding the suggestion to split the prompt into a system message and a user message, we need to conduct more testing before making any modifications.
AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.
Describe the bug
The docstring of OllamaChatWrapper.format states that the role in the formatted messages list should be "user". However, the actual implementation sets the role to "system", so the LLM receives only a system message and no user message, and ollama_chat with llama3.1 appears not to respond to a standalone system message.
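For illustration, the difference looks roughly like this (the message content is made up; only the role matters):

```python
# What OllamaChatWrapper.format currently returns (role hard-coded to "system"):
[{"role": "system", "content": "You are a helpful assistant.\n## Dialogue History\nuser: Hi!"}]

# What the docstring describes (role "user"):
[{"role": "user", "content": "You are a helpful assistant.\n## Dialogue History\nuser: Hi!"}]
```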
To Reproduce
Steps to reproduce the behavior:
/examples/conversation_basic/conversation.py
add a model config in agentscope.init:
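For example (a hypothetical config; the exact values are illustrative, and the config_name is what the agents reference):

```python
import agentscope

agentscope.init(
    model_configs=[
        {
            "config_name": "ollama_chat_llama3.1",
            "model_type": "ollama_chat",
            "model_name": "llama3.1",
        },
    ],
)
```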
Start a conversation:
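The example script then runs the usual dialog loop, roughly:

```python
from agentscope.agents import DialogAgent, UserAgent

dialog_agent = DialogAgent(
    name="Assistant",
    sys_prompt="You're a helpful assistant.",
    model_config_name="ollama_chat_llama3.1",
)
user_agent = UserAgent()

# Alternate between the two agents until the user types "exit".
x = None
while x is None or x.content != "exit":
    x = dialog_agent(x)
    x = user_agent(x)
```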
Debugging showed that the message returned by the LLM was an empty string.
If OllamaChatWrapper.format is modified to match its docstring, the content is output normally.
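A minimal sketch of that change (simplified names, not the exact code in the repo):

```python
from typing import Dict, List

def format_as_user(system_prompt: str, dialogue: List[str]) -> List[Dict]:
    """Merge the system prompt and dialogue history into one message,
    returned under the "user" role as the docstring describes."""
    prompt = "\n".join([system_prompt, "## Dialogue History", *dialogue])
    return [{"role": "user", "content": prompt}]
```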
OllamaChatWrapper.format docstring:
https://github.com/modelscope/agentscope/blob/6c823a955c7be783a2a34c64d13b74aee8c552bf/src/agentscope/models/ollama_model.py#L288-L302
The actual implementation:
https://github.com/modelscope/agentscope/blob/6c823a955c7be783a2a34c64d13b74aee8c552bf/src/agentscope/models/ollama_model.py#L361-L369
When I use ollama_chat_llama3, this problem does not occur. Perhaps a more general solution can be adopted, such as:

1. Fix the role in the implementation to be "user", as described in the docstring.
2. (Optional) Update the logic to follow the more common pattern in which the system prompt uses the "system" role and the conversation history is included under the "user" role (see the sketch below). This might improve clarity and consistency.
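A sketch of the second option (a hypothetical helper, not AgentScope's actual code), keeping the system prompt under the "system" role and the merged history under the "user" role:

```python
from typing import Dict, List

def format_split(system_prompt: str, dialogue: List[str]) -> List[Dict]:
    """Put the system prompt in a "system" message and the merged
    dialogue history in a separate "user" message."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    if dialogue:
        history = "\n".join(["## Dialogue History", *dialogue])
        messages.append({"role": "user", "content": history})
    return messages
```

With this split, a model like llama3.1 that does not reply to a lone system message still receives a user turn.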