david101-hunter opened this issue 4 days ago
@qbc2016 please take a look at this issue.
Hello, your model configuration settings for Ollama are correct; the following is all you need:

```json
{
    "config_name": "my_ollama_chat_config",
    "model_type": "ollama_chat",
    "model_name": "llama3.2:latest",
    "options": {
        "temperature": 0.5,
        "seed": 123
    },
    "keep_alive": "5m"
}
```
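Since the startup log shows two configs being loaded (`my_ollama_chat_config, my_post_api`), the surrounding `model_configs.json` is presumably a JSON list that holds this entry alongside the others. A minimal sketch of such a file, containing only the Ollama entry (the `my_post_api` entry would be a second element of the list):

```json
[
    {
        "config_name": "my_ollama_chat_config",
        "model_type": "ollama_chat",
        "model_name": "llama3.2:latest",
        "options": {
            "temperature": 0.5,
            "seed": 123
        },
        "keep_alive": "5m"
    }
]
```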
The error occurs because the model does not follow the prompt correctly. During the voting phase, each werewolf's vote for a player is appended after the `"vote"` field. However, since both werewolves did not vote for a player, the code `votes = [extract_name_and_id(wolf(hint).content)[0] for wolf in wolves]` results in `votes = ['Abstain', 'Abstain']`. Then, in the `majority_vote` function, `votes_valid = [item for item in votes if item != "Abstain"]` leaves `votes_valid` as an empty list (`[]`). According to the game rules, this situation should not occur.
It is suggested to modify the prompt to ensure the model casts a vote at this stage, or to try a larger model.
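For illustration, here is a minimal sketch of a defensive `majority_vote` that does not crash on an all-abstain round. This is not the example's actual implementation: the signature and the fallback value (`"Abstain"`, meaning nobody is eliminated) are assumptions.

```python
from collections import Counter

def majority_vote(votes):
    """Return the name with the most votes, ignoring abstentions.

    Sketch only: returning "Abstain" for an all-abstain round is an
    assumed fallback, not the example's actual behaviour.
    """
    # Drop abstentions before counting, mirroring the filter quoted above.
    votes_valid = [item for item in votes if item != "Abstain"]
    if not votes_valid:
        # Every werewolf abstained (the model ignored the voting prompt);
        # treat this as "nobody is eliminated" instead of indexing into
        # an empty list.
        return "Abstain"
    # most_common(1) returns [(winner, count)]; take the winner.
    winner, _count = Counter(votes_valid).most_common(1)[0]
    return winner
```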
I see that you've modified the `majority_vote` function, which should prevent the error from occurring now. But you said it is "still not working"; could you please provide more details?
This is the full log:

````text
2024-11-19 15:31:44 | INFO | agentscope.manager._model:load_model_configs:115 - Load configs for model wrapper: my_ollama_chat_config, my_post_api
2024-11-19 15:31:44 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [my_ollama_chat_config]
2024-11-19 15:31:45 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [my_ollama_chat_config]
2024-11-19 15:31:45 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [my_ollama_chat_config]
2024-11-19 15:31:45 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [my_ollama_chat_config]
2024-11-19 15:31:45 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [my_ollama_chat_config]
2024-11-19 15:31:45 | INFO | agentscope.models.model:__init__:203 - Initialize model by configuration [my_ollama_chat_config]
Moderator: Player1 and Player2, if you are the only werewolf, eliminate a player. Otherwise, discuss with your teammates and reach an agreement.
Player1: ```json
{"thought": "I think I should eliminate Player4, they seem suspicious and haven't contributed much to the conversation yet.", "speak": "I suggest we vote for Player4 to be eliminated at night.", "finish_discussion": false}
Player2: ```json
{"thought": "I think we should consider Player4's suspicious behavior, but I'm also worried about Player1's potential involvement as a werewolf. We can't make any assumptions yet.", "speak": "Let's discuss and see if the others agree to vote for Player4 at night.", "finish_discussion": "false"}
Moderator: Which player do you vote to kill?
Player1: ```json
{"thought": "I think we should consider the possibility that Player4 might be a werewolf, but I also don't want to accuse them without solid evidence. We need more information from the Seer.", "vote": "wait and see"}
2024-11-19 15:32:58 | WARNING | werewolf_utils:extract_name_and_id:58 - vote: invalid name wait and see, set to Abstain
Player2: ```json
{"thought": "I think we should wait and see what the other players suggest before making a decision.", "vote": "wait"}
(I'm choosing to not vote at this point, hoping to gather more information from my fellow villagers.)
Moderator: The player with the most votes is Abstain.
Moderator: Player6, you're the witch. Tonight Abstain is eliminated. Would you like to resurrect Abstain?
2024-11-19 15:33:19 | WARNING | werewolf_utils:extract_name_and_id:58 - vote: invalid name wait, set to Abstain
Player6: ```json
{"thought": "I didn't know Abstain well, but I don't want to take any chances with a random player being eliminated. Resurrecting him might give us more information about who else is a werewolf.", "speak": "I'll use my potion of death on Player2 instead", "resurrect": "false"}
Moderator: The witch has chosen to resurrect the player.
Moderator: Player5, you're the seer. Which player in Player1, Player2, Player3, Player4, Player5 and Player6 would you like to check tonight?
Player5: ```
{"thought": "I'd like to know the role of Player3 tonight.", "speak": "Player3"}
Traceback (most recent call last):
  File "/media/manhdt4/sda1/agentscope/examples/game_werewolf/werewolf.py", line 149, in
json and MUST be a JSON object. When parsing "```json
````
This happens because the `MarkdownJsonDictParser` prompt instructs the LLM to generate responses in JSON format as follows: "Respond a JSON dictionary in a markdown's fenced code block as follows: \n```json\n{content_hint}\n```". In the case above, the LLM generated the content as "\n```{content_hint}\n```" (i.e., the fence without the `json` tag), which caused an error during parsing. You may refer to https://github.com/modelscope/agentscope/blob/main/docs/sphinx_doc/en/source/tutorial/203-parser.md to modify the code of `MarkdownJsonDictParser`, or consider using a different parser, such as `MultiTaggedContentParser`.
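This is not AgentScope's actual `MarkdownJsonDictParser`, but a rough sketch of the kind of tolerant extraction the linked tutorial lets you plug in: it accepts the fenced block whether or not the `json` language tag is present. The function name and regex are my own, and the non-greedy pattern only handles flat (un-nested) JSON objects like the ones in this game.

```python
import json
import re

# "`{3}" matches three backticks; written as a quantifier so this snippet
# does not itself contain a literal triple-backtick fence.
_FENCED_JSON = re.compile(r"`{3}(?:json)?\s*(\{.*?\})\s*`{3}",
                          re.DOTALL | re.IGNORECASE)

def extract_json_block(text):
    """Pull a flat JSON object out of a fenced code block, tolerating a
    missing (or differently-cased) "json" tag after the opening backticks."""
    match = _FENCED_JSON.search(text)
    if match is None:
        raise ValueError("no fenced JSON object found in response")
    return json.loads(match.group(1))
```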
AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.
**Describe the bug**
I want to run the `game_werewolf` example using local Ollama.

**To Reproduce**
Steps to reproduce the behavior:

**Expected behavior**
I am trying to find documentation about customizing this Ollama config in `model_configs.json`:

original code

But I can't find anything that resolves it. Please help me edit this config for Ollama so I can run this example.

**Environment (please complete the following information):**
I have tried changing `majority_vote.py` as below, but it is still not working.
There are too many things that I don't know; please help me run the example so I can understand them. Thanks!
http://localhost:5000 is the URL of the AgentScope UI.