microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Bug]: TypeError: unsupported operand type(s) for +: 'int' and 'NoneType' #984

Closed: ragesh2000 closed this issue 8 months ago

ragesh2000 commented 9 months ago

Describe the bug

I get an error from an AutoGen group chat when I set human_input_mode="NEVER":

 File "/home/gpu/ai/llm/autogen/autogen_inference.py", line 47, in <module>
    res = user_proxy.initiate_chat(manager, message=msg)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 550, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 348, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 481, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 940, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 291, in run_chat
    speaker = groupchat.select_speaker(speaker, self)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 168, in select_speaker
    final, name = selector.generate_oai_reply(
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 625, in generate_oai_reply
    response = client.create(
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/oai/client.py", line 262, in create
    self._update_usage_summary(response, use_cache=False)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/oai/client.py", line 362, in _update_usage_summary
    self.total_usage_summary = update_usage(self.total_usage_summary)
  File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/autogen/oai/client.py", line 355, in update_usage
    "completion_tokens": usage_summary.get(response.model, {}).get("completion_tokens", 0)
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
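
For context on the failure mode: dict.get() only falls back to its default when a key is missing, so if the endpoint reports a token count of None, the stored None survives the lookup despite the 0 default, and the subsequent addition becomes int + None. A minimal sketch (the model name and payload are hypothetical):

usage_summary = {"local-llama": {"completion_tokens": None}}  # hypothetical usage payload

# .get() returns the stored None because the key exists; the default 0 only
# applies when the key is absent.
tokens = usage_summary.get("local-llama", {}).get("completion_tokens", 0)
print(tokens)  # None

total = 0 + tokens  # TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'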

Steps to reproduce

No response

Expected Behavior

No response

Screenshots and logs

No response

Additional Information

No response

afourney commented 9 months ago

@ragesh2000 what version of AutoGen are you using?

ragesh2000 commented 9 months ago

pyautogen==0.2.1 is the version I am using, @afourney.

mongolu commented 9 months ago

+1 here, on pyautogen 0.2.2.

yiranwu0 commented 9 months ago

@ragesh2000 @mongolu Is there any code with which I could replicate the error?

mongolu commented 9 months ago

In autogen/oai/client.py, I've replaced line 365 with:

"completion_tokens": usage_summary.get(response.model, {}).get("completion_tokens", 0) if usage_summary.get(response.model, {}).get("completion_tokens", 0) is not None else 0
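
A shorter form with the same effect (a stored None counts as 0, a stored 0 stays 0) would be:

"completion_tokens": usage_summary.get(response.model, {}).get("completion_tokens", 0) or 0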

I'm not sure I can help with replicating the error, but I can confirm that I ran into it multiple times with multiple local LLMs. I am running AutoGen against Ollama in a Docker container, via LiteLLM.

If I can help more, please tell me what info you need.

ragesh2000 commented 9 months ago

I can also provide my code:

import autogen

# Local model served at a LiteLLM endpoint; no real API key is needed.
config_list_llama = [
    {
        'base_url': "http://0.0.0.0:8000",
        'api_key': "NULL"
    }
]

llm_config_llama = {
    "config_list": config_list_llama,
}

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    human_input_mode="NEVER",
)

analyser = autogen.AssistantAgent(
    name="Data analyser",
    llm_config=llm_config_llama,
)
critic = autogen.AssistantAgent(
    name="Critic",
    llm_config=llm_config_llama,
)

# Three-agent group chat; the manager selects the next speaker via the LLM.
groupchat = autogen.GroupChat(agents=[user_proxy, analyser, critic], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config_llama)

msg = "Download data from /data/inference_generation_sheet.csv and give me some valuable inferences."
res = user_proxy.initiate_chat(manager, message=msg)

I am also using AutoGen with Ollama models and LiteLLM, @kevin666aa.

yiranwu0 commented 8 months ago

Thanks! @ragesh2000 @mongolu Can you check out #1008 and maybe copy-paste its code to see if it works? I cannot replicate these errors quickly since I am not using local models.

ragesh2000 commented 8 months ago

Yes, that solves my issue, @kevin666aa.

ragesh2000 commented 8 months ago

Also, I have noticed that if the message passed to user_proxy.initiate_chat is the same as before, the agent ignores the new message and just returns the previous response. I think some kind of cache is involved in the response generation. How can I solve this?

mongolu commented 8 months ago

Delete the .cache dir.
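
If you'd rather handle it in code: pyautogen 0.2.x caches completions on disk (that .cache dir) keyed by a seed, so changing or disabling that seed forces fresh responses. A sketch of the config, assuming the 0.2.2+ parameter name (older 0.2.x releases call it "seed" rather than "cache_seed"):

llm_config_llama = {
    "config_list": config_list_llama,
    "cache_seed": None,  # None disables caching; any new int starts a fresh cache
}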

mongolu commented 8 months ago

> Yes, that solves my issue, @kevin666aa.

I also confirm that the problem is solved with this.

yiranwu0 commented 8 months ago

Thanks! Will get it merged!

ragesh2000 commented 8 months ago

Thanks @kevin666aa