Closed ragesh2000 closed 8 months ago
@ragesh2000 what version of AutoGen are you using?
pyautogen==0.2.1 is the version I am using @afourney
+1 here. 0.2.2 pyautogen
@ragesh2000 @mongolu Any code that I could replicate the error?
In file autogen/oai/client.py
, I've replaced line 365 with this:
"completion_tokens": usage_summary.get(response.model, {}).get("completion_tokens", 0) if usage_summary.get(response.model, {}).get("completion_tokens", 0) is not None else 0
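That one-liner is equivalent to the small helper below (the helper name is illustrative, not part of AutoGen's actual API). The point of the extra `is not None` check is that some local backends return an explicit `None` for the token fields, which `.get(key, 0)` does not convert to 0:

```python
def safe_token_count(usage_summary, model, key):
    """Return a token count for `model`, treating a missing model entry
    or an explicit None value as 0 (some local LLM backends report None)."""
    value = usage_summary.get(model, {}).get(key, 0)
    return value if value is not None else 0

# The explicit None case that plain .get(key, 0) would miss:
safe_token_count({"llama2": {"completion_tokens": None}}, "llama2", "completion_tokens")  # -> 0
```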
I'm not confident I can help with replicating the error.
I can confirm, though, that I have bumped into this error multiple times with multiple local LLMs.
I am running autogen in an ollama docker container, with litellm.
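For context, the serving side of that setup looks roughly like this (the model name is an assumption; any ollama-pulled model works the same way):

```shell
# Sketch of the local serving setup described above:
ollama pull llama2                 # fetch a model into the ollama container
litellm --model ollama/llama2 --port 8000
# litellm's OpenAI-compatible proxy now listens on port 8000,
# matching base_url "http://0.0.0.0:8000" in the config below
```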
If I can help more, please let me know what info you require.
My code is below:
import autogen
config_list_llama = [
{
'base_url': "http://0.0.0.0:8000",
'api_key': "NULL"
}
]
llm_config_llama = {
"config_list": config_list_llama,
}
user_proxy = autogen.UserProxyAgent(
name="User_proxy",
system_message="A human admin.",
human_input_mode="NEVER",
)
analyser = autogen.AssistantAgent(
name="Data analyser",
llm_config=llm_config_llama,
)
critic = autogen.AssistantAgent(
name="Critic",
llm_config=llm_config_llama,
)
groupchat = autogen.GroupChat(agents=[user_proxy, analyser, critic], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config_llama)
msg = """Download data from /data/inference_generation_sheet.csv and give me some valuable inferences.
"""
res = user_proxy.initiate_chat(manager, message=msg)
I am also using autogen with ollama models and litellm @kevin666aa
Thanks! @ragesh2000 @mongolu Can you checkout #1008 and maybe copy paste the code to see if it works? I cannot replicate these errors quickly since I am not using local models.
Yes that solves my issue @kevin666aa
Also, I have noticed that if the message passed to user_proxy.initiate_chat is the same, the agent ignores the message and just returns the previous response. I think some kind of cache is getting involved in the response generation. How can I solve this issue?
Delete .cache dir
Yes that solves my issue @kevin666aa
I also confirm that the problem is solved with this.
Thanks! Will get it merged!
Thanks @kevin666aa
Describe the bug
Got an error in autogen groupchat when I set human_input_mode="NEVER"
Steps to reproduce
No response
Expected Behavior
No response
Screenshots and logs
No response
Additional Information
No response