Closed: luca-git closed this issue 7 months ago
This doesn't seem due to a change in AutoGen. Does 0.2.3 still work?
Yes. I have two conda environments, one with AutoGen 0.2.3 and one with 0.2.17; the first works and the second doesn't. In both I serve via litellm 1.30.6 (1.30.7 is the newest and has issues of its own). AutoGen Studio has the same problem. It's worth noting that I have similar but not identical issues when serving Mistral Medium: it works fine on 0.2.3 but not on 0.2.17.
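For reference, in both environments the config just points AutoGen at litellm's OpenAI-compatible proxy endpoint. A minimal sketch of that setup (the port, key, and model alias below are placeholders, not my exact values):
# Placeholder config: AutoGen talks to the litellm proxy as if it were an OpenAI endpoint.
config_list = [
    {
        "model": "claude-3-opus-20240229",    # model name registered with the litellm proxy
        "base_url": "http://localhost:4000",  # litellm proxy address (placeholder port)
        "api_key": "not-needed",              # placeholder; the proxy may not require a real key
    }
]
llm_config = {"config_list": config_list, "timeout": 120}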
It's most likely me not being able to properly update the teachable agents in chainlit.
What happens if you don't add teachability to the agent? Does a failure still occur?
I just ran a more neutral test, since the issue happens in AutoGen Studio as well: using the stock "general agent workflow" and the "Plot a chart of NVDA and TESLA.." example, I only changed the API to use litellm and Claude 3 Opus. The issue is definitely there, but I can't say whether it's on the litellm side or the AutoGen side.
The error is different though:
File "C:\Users\finbi\anaconda3\envs\autogen_studio\Lib\site-packages\openai\_base_client.py", line 980, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'AnthropicException - AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages: all messages must have non-empty content except for the optional final assistant message"}}', 'type': None, 'param': None, 'code': 400}}
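For context, that Anthropic error means the request contained a message with empty content, which the Anthropic API rejects except for an optional final assistant message. A rough illustration of the constraint, with a naive client-side filter, assuming the messages are plain role/content dicts with string content (this is a sketch of the failure mode, not a proper fix on the AutoGen or litellm side):
def drop_empty_messages(messages):
    # Keep only messages with non-empty content, except an optional final assistant message.
    cleaned = []
    for i, msg in enumerate(messages):
        content = (msg.get("content") or "").strip()
        is_final_assistant = i == len(messages) - 1 and msg.get("role") == "assistant"
        if content or is_final_assistant:
            cleaned.append(msg)
    return cleaned

# Anthropic rejects the first list (empty assistant turn in the middle); the filtered list would be accepted.
messages = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": ""},  # empty content: this is what triggers the 400 above
    {"role": "user", "content": "Plot a chart of NVDA and TESLA"},
]
print(drop_empty_messages(messages))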
Another example. I do believe teachability plays a role in the issue, and multi-agent chat might as well. EDIT: removing teachability won't solve the issue, so my assumption was wrong. Nonetheless, I just used the teachability example and still have issues with Claude on litellm:
from autogen import UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.capabilities.teachability import Teachability
from autogen import ConversableAgent # As an example
# Load LLM inference endpoints from an env variable or a file
# See https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
#
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST_ant")  # , filter_dict=filter_dict)
llm_config = {"config_list": config_list, "timeout": 120}
# Start by instantiating any agent that inherits from ConversableAgent, which we use directly here for simplicity.
teachable_agent = ConversableAgent(
    name="teachable_agent",  # The name can be anything.
    llm_config=llm_config,
)
# Instantiate a Teachability object. Its parameters are all optional.
teachability = Teachability(
    reset_db=False,  # Use True to force-reset the memo DB, and False to use an existing DB.
    path_to_db_dir="./tmp/interactive/teachability_db",  # Can be any path, but teachable agents in a group chat require unique paths.
)
# Now add teachability to the agent.
teachability.add_to_agent(teachable_agent)
# For this test, create a user proxy agent as usual.
user = UserProxyAgent("user", human_input_mode="ALWAYS")
# This function will return once the user types 'exit'.
teachable_agent.initiate_chat(user, message="Hi, I'm a teachable user assistant! What's on your mind?")
error:
user (to teachable_agent):
hi
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Traceback (most recent call last):
File "c:\Users\finbi\pymaindir\my_tools\gr_teach_autogn\base_teachbl_ag.py", line 34, in <module>
teachable_agent.initiate_chat(user, message="Hi, I'm a teachable user assistant! What's on your mind?")
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 928, in initiate_chat
self.send(self.generate_init_message(**context), recipient, silent=silent)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 784, in receive
self.send(reply, sender, silent=silent)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 782, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 1784, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 1181, in generate_oai_reply
extracted_response = self._generate_oai_reply_from_client(
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\agentchat\conversable_agent.py", line 1200, in _generate_oai_reply_from_client
response = llm_client.create(
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\oai\client.py", line 624, in create
response = client.create(params)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\autogen\oai\client.py", line 278, in create
response = completions.create(**params)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\openai\_utils\_utils.py", line 303, in wrapper
return func(*args, **kwargs)
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\openai\resources\chat\completions.py", line 645, in create
return self._post(
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\openai\_base_client.py", line 1088, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\openai\_base_client.py", line 853, in request
return self._request(
File "C:\Users\finbi\anaconda3\envs\gr_teach_autogn\lib\site-packages\openai\_base_client.py", line 930, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'AnthropicException - AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages: all messages must have non-empty content except for the optional final assistant message"}}', 'type': None, 'param': None, 'code': 400}}
What happens in this example if you comment out the call to add_to_agent?
I ran the code commenting out add_to_agent and the Teachability setup: same issue. Now that they fixed litellm (a few versions newer than yesterday's), with the same exact config Mistral Medium works (they had an issue with it), OpenAI obviously works, but Claude 3 doesn't.
Something even weirder: in AutoGen Studio, Claude 3 works with the default generic team and the Nvidia example. Great. If I use my own three-agent team plus chat manager on the same Nvidia example, it throws the error I opened this issue with. My team works with OpenAI; with Mistral it's the same problem, just a different error. So it's something subtle.
If the issue persists after commenting out add_to_agent, then the issue is unrelated to autogen's Teachability class.
@olgavrou @BeibinLi @ekzhu @marklysze FYI, we're having a discussion on Discord about alternative models right now.
With the latest litellm update the issue is not there anymore.
Describe the bug
When using the litellm server and AutoGen version 0.2.17 (this did not happen in 0.2.3) to serve claude-3-opus-20240229, I get:
Full error:
Steps to reproduce
Run AutoGen via Anaconda, serving via litellm.
Model Used
claude-3-opus-20240229
Expected Behavior
Normal chat as in 0.2.3
Screenshots and logs
No response
Additional Information
No response