Closed: Josephrp closed this issue 7 months ago.
This would be awesome! Please, AutoGen team, take a look at this!
Maybe by using the Ollama integration?
Here is a configuration that is working for me:
config_list = [ { "base_url": "https://api.mistral.ai/v1/", "api_key": "YOUR-MISTRAL-KEY-HERE",
"model":"mistral-medium"
}
]
llm_config = { "config_list": config_list, "seed": 42, "temperature": 0.7, "max_tokens": 8192, }
I'm getting this error with that config.
I tried a different format for the message and I get this error too.
Can you show us how you send messages to the agents?
Here's the whole code block that runs, and a quick workflow:
1) cd Documents/AutoGen
2) conda activate pyautogen
3) python main.py
Here's the main.py script:
import random

import autogen
from autogen import OpenAIWrapper, AssistantAgent, UserProxyAgent, config_list_from_json, GroupChat, GroupChatManager

config_list = [
    {
        "base_url": "https://api.mistral.ai/v1/",
        "api_key": "YOUR KEY HERE",
        "model": "mistral-medium",
    }
]

llm_config = {
    "config_list": config_list,
    "seed": random.randint(1, 1000),
    "temperature": 0.7,
    "max_tokens": 8192,
}

user_proxy = autogen.UserProxyAgent(
    name="Overseer",
    system_message="Overseer at ACME. Speak as your role only, keep your answers short. Guide and oversee the team's discussions and strategies.",
    code_execution_config=False,
)

partner = autogen.AssistantAgent(
    name="Darlene",
    llm_config=llm_config,
    system_message="ACME Partner, Darlene. Speak as your role only, keep your answers short. Use your strategic acumen and leadership to shape the direction of ACME's projects.",
)

senior_consultant = autogen.AssistantAgent(
    name="Jessica",
    llm_config=llm_config,
    system_message="ACME Senior Consultant, Jessica. Speak as your role only, keep your answers short. Provide detailed, practical solutions, focusing on analytics and innovation.",
)

consultant = autogen.AssistantAgent(
    name="Roger",
    llm_config=llm_config,
    system_message="ACME Consultant, Roger. Speak as your role only, keep your answers short. Turn plans into executable code efficiently and pragmatically.",
)

coder = autogen.AssistantAgent(
    name="Maxwell",
    llm_config=llm_config,
    system_message="ACME Coder, Maxwell. Speak as your role only, keep your answers short. Use your programming skills to create innovative and effective coding solutions.",
)

executor = autogen.UserProxyAgent(
    name="Executor",
    system_message="Executor at ACME. Speak as your role only, keep your answers short. Execute code and report results, bridging ideas and practical implementation.",
    human_input_mode="NEVER",
    code_execution_config={"last_n_messages": 1, "work_dir": "paper", "use_docker": False},
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, partner, senior_consultant, consultant, coder, executor],
    messages=[],
    max_round=50,
    speaker_selection_method="round_robin",
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="""Develop a creative way to calculate pi.""",
)
@victordibia
I'm still struggling to get the 'speaker_selection_method="auto"' to work with Mistral. It seems like Mistral models are not generating parsable outputs when they are prompted to select the next speaker.
I've tried Mistral Small, Mistral Medium, and an EXL2 quant of Mixtral-8x7B.
I've also tried changing the 'system_message' to 'description' per the blog post for v0.2.2. I'm still getting errors.
@spincrisis I tried your code and I get the following error. Have you seen a similar error? Thank you!
File "/opt/homebrew/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1193, in generate_reply final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"]) File "/opt/homebrew/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 374, in run_chat reply = speaker.generate_reply(sender=self) File "/opt/homebrew/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1193, in generate_reply final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"]) File "/opt/homebrew/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 708, in generate_oai_reply response = client.create( File "/opt/homebrew/lib/python3.10/site-packages/autogen/oai/client.py", line 278, in create response = this._completions_create(client, params) File "/opt/homebrew/lib/python3.10/site-packages/autogen/oai/client.py", line 543, in _completions_create response = completions.create(**params) File "/opt/homebrew/lib/python3.10/site-packages/openai/_utils/_utils.py", line 272, in wrapper return func(*args, **kwargs) File "/opt/homebrew/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 645, in create return this._post( File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 1088, in post return cast(ResponseT, this.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 853, in request return this._request( File "/opt/homebrew/lib/python3.10/site-packages/openai/_base_client.py", line 930, in _request raise this._make_status_error_from_response(err.response) from None openai.AuthenticationError: Error code: 401 - {'message': 'Unauthorized', 'request_id': '62d14801625ac53f5e34bb352346c56d'}"
@cozypet I just upgraded to AutoGen 0.2.6, and now my Mistral configuration fails with the following error.
config_list = [
    {
        'model': 'mistral-medium',
        'base_url': 'https://api.mistral.ai/v1/',
        'api_key': 'YOUR-API-HERE',
    }
]
The relevant part of the error I'm getting is:
openai.AuthenticationError: Error code: 401 - {'message': 'Unauthorized', 'request_id': 'c0ec4c5077b4c93f4d8af075b3cfc265'}
If I make any additional progress I'll update here, but for the time being I'm not going to keep pursuing this error.
@spincrisis if you use openai.OpenAI to create the client with the "base_url" and "api_key", and make a request with client.chat.completions.create(), is there an error?
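A minimal sketch of that check (the prompt text and placeholder key are illustrative), which isolates whether the 401 comes from the key/endpoint pair rather than from AutoGen:

from openai import OpenAI

# Point the OpenAI client directly at the Mistral endpoint, bypassing AutoGen.
client = OpenAI(base_url="https://api.mistral.ai/v1/", api_key="YOUR-MISTRAL-KEY-HERE")

# One chat completion; a 401 here means the key itself is the problem.
response = client.chat.completions.create(
    model="mistral-medium",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)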
The following code works for me (main branch):
from autogen import UserProxyAgent, AssistantAgent, config_list_from_json, Cache
# Load the configuration
config_list = config_list_from_json("OAI_CONFIG_LIST", filter_dict={"model": "mistral-medium"})
# Create a user agent
user = UserProxyAgent("user proxy")
# Create an assistant agent
assistant = AssistantAgent(
    "assistant",
    system_message="You are a friendly AI assistant.",
    llm_config={"config_list": config_list},
)

with Cache.disk() as cache:
    # Start the conversation
    user.initiate_chat(assistant, cache=cache)
My config list is:
...
{
    "model": "mistral-medium",
    "api_key": "YOUR_API_KEY",
    "base_url": "https://api.mistral.ai/v1/"
},
...
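For anyone unfamiliar with the format: OAI_CONFIG_LIST is a JSON file (or environment variable) holding an array of such entries, and the filter_dict argument of config_list_from_json selects entries by field. A minimal file holding only the Mistral entry might look like this (placeholder key):

[
    {
        "model": "mistral-medium",
        "api_key": "YOUR_API_KEY",
        "base_url": "https://api.mistral.ai/v1/"
    }
]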
@spincrisis can you check your API keys?
@ekzhu I've updated my OAI_CONFIG_LIST JSON per your instructions, but I was unable to run your code as written: I get an error saying "Cache is not part of the autogen library". I've updated to autogen 0.2.7 to be sure.
I was able to get the following code to work as long as the speaker selection method is set to "round_robin", but with speaker_selection_method="auto" I get an OpenAI API error.
Here's the code I'm running that works:
import autogen
from autogen import UserProxyAgent, AssistantAgent, config_list_from_json
# Load the configuration
config_list = config_list_from_json("OAI_CONFIG_LIST", filter_dict={"model": "mistral-medium"})
llm_config = {
    "config_list": config_list,
}
# Create a user agent
user = UserProxyAgent("user proxy")
# Create an assistant agent
assistant = AssistantAgent(
    "assistant",
    system_message="You are a friendly AI assistant.",
    llm_config={"config_list": config_list},
)

groupchat = autogen.GroupChat(agents=[user, assistant], messages=[], max_round=50, speaker_selection_method="round_robin")
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user.initiate_chat(
    manager,
    message="""Develop a creative way to calculate pi that isn't a Monte Carlo simulation.""",
)
Here's the error I get when I set the speaker_selection_method="auto":
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Expected last role to be user but got system', 'type': 'internal_error_proxy', 'param': None, 'code': '1000'}
@spincrisis I'm also facing the same issue (see https://github.com/BerriAI/litellm/issues/1662). Have you solved it?
@spincrisis Cache is now available in 0.2.8.
Here's the error I get when I set the speaker_selection_method="auto":
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Expected last role to be user but got system', 'type': 'internal_error_proxy', 'param': None, 'code': '1000'}
This seems to be an issue with using Mistral as the group chat manager. I will try to dig into this in the coming days.
Same issue when using the group chat manager with third-party APIs like Mistral and Fireworks; only Ollama works for me.
Interesting, I also had this issue some time ago running a local LLM with a group chat manager, served through an OpenAI endpoint hosted with LM Studio.
I also hit intermittent messaging errors that caused the agent conversations to cease. I suspect those conversations were breaking due to the limitations of smaller, less capable LLMs struggling to follow the messaging rules. Perhaps some exceptions need to be handled in the AutoGen library; maybe that's been addressed by now (I haven't tested in a few months).
Appreciate your looking into it @ekzhu 👍
I will be looking into it this coming week and should have an explanation for this.
@ekzhu Here's the error I get when I set the speaker_selection_method="auto":
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Expected last role to be user but got system', 'type': 'internal_error_proxy', 'param': None, 'code': '1000'}
It is curious, but I solved this issue by moving the user to the last position of the agents list:
groupchat = autogen.GroupChat(agents=[assistant, user], ...)
Honestly, I am facing some issues working with the Mistral API. I have the feeling that they are avoiding AI agent applications.
I tried moving the user to the last position, but I still get the same error.
OpenAI allows the system message to be the last message for inference; GroupChat leverages that in line 371 of groupchat.py.
Possible solution: allow GroupChat.speaker_selection_method to be a callable and invoke it in _prepare_and_select_agents. A sketch of what that might look like follows below.
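To make the proposal concrete, here is a hypothetical sketch of such a callable (the signature and the callable support itself are assumptions; GroupChat did not accept a callable here at the time of writing):

def select_next_speaker(last_speaker, groupchat):
    # Hypothetical custom selector: plain rotation, so no LLM call
    # (and no trailing system message) is needed to pick the speaker.
    agents = groupchat.agents
    next_index = (agents.index(last_speaker) + 1) % len(agents)
    return agents[next_index]

groupchat = autogen.GroupChat(
    agents=[user, assistant],
    messages=[],
    max_round=50,
    speaker_selection_method=select_next_speaker,  # assumed callable support
)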
I tried moving the user to the last position, but I still get the same error.
@YangQiuEric can you confirm this happens when you test it against the Mistral endpoint directly?
@Liques @sonichi perhaps the easiest way is to give an option to use the "user" role instead of the "system" role? Would that work? If we provide too much flexibility right away, we might not get to know how users use our API, as it will be a lot of custom code and they won't come to ask us questions :D
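As an illustration of that "user" role option (a hypothetical helper, not an AutoGen API), the fix could be a rewrite of the trailing message before the speaker-selection request:

def downgrade_trailing_system_message(messages):
    # Hypothetical workaround: Mistral's endpoint rejects a request whose
    # last message has role "system", so resend it with role "user" instead.
    if messages and messages[-1]["role"] == "system":
        messages = messages[:-1] + [{**messages[-1], "role": "user"}]
    return messages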
A function-calling-capable Mistral model has now been released; a good time to revisit this item.
@Josephrp Which one?
Mistral Large, I believe, was released on the 27th of February: https://chat.mistral.ai/chat. Hope this helps!
Ah, I thought you were referring to an open-source model.
I found a solution: this service adapts every LLM to the OpenAI interface format, so you can connect to MistralAI through it: https://openrouter.ai/
I agree that this "connector" solution can easily work, but I would suggest building it specifically for the purposes of AutoGen to reduce dependencies, something like a wrapper around the Mistral API request function. There's a lot of work happening in general, also in the recent AutoGen documentation, which is nice :-)
MistralAI API should be supported now. https://microsoft.github.io/autogen/docs/topics/non-openai-models/cloud-mistralai
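Per the linked docs, the native client is selected with an api_type field rather than a base_url override; a minimal sketch (model name and environment variable follow the docs' examples, adjust as needed):

import os

from autogen import AssistantAgent

# Native Mistral client config per the linked docs; requires the mistralai
# extra (e.g. pip install "pyautogen[mistral]").
config_list = [
    {
        "api_type": "mistral",
        "model": "mistral-large-latest",
        "api_key": os.environ.get("MISTRAL_API_KEY"),
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})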
I tip my hat to you and the team, with all my thanks 👏🏻
Is your feature request related to a problem? Please describe.
The Mistral AI API has just been released in public beta, and it will surely be a favorite among Mistral supporters like us! It's a great way to access Mixtral without the headache of finding 94GB of RAM across multiple computing nodes.
Check out the documentation here : https://docs.mistral.ai/
Describe the solution you'd like
The ideal solution would work seamlessly, with no breaking changes to any part of the code, modelled along the lines of @BeibinLi's Gemini implementation. Hopefully I'll get around to it this Christmas :-)
Additional context
No response