zhu327 / gemini-openai-proxy

A proxy for converting the OpenAI API protocol to the Google Gemini Pro protocol.

autogen getting 400 from proxy #10

Closed inestyne closed 9 months ago

inestyne commented 9 months ago

Chat completions work fine from Postman, so the proxy itself is working correctly.

Autogen is getting a 400 error.
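For reference, the kind of direct request that works from Postman, rewritten as a minimal Python sketch against the proxy (assuming it's listening on localhost:8080 and the api_key is the Gemini key the proxy forwards):

from openai import OpenAI

# Talk to the proxy directly, bypassing autogen, to confirm it answers.
client = OpenAI(
    api_key="***",  # Gemini API key, passed through by the proxy
    base_url="http://localhost:8080/v1",
)

response = client.chat.completions.create(
    model="gpt-4",  # an OpenAI-style model name; the proxy maps it to a Gemini model
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)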

from autogen import AssistantAgent, UserProxyAgent

llm_config_gemini = {
    "config_list": [
        {
            "api_key": "***"
            "base_url": "http://localhost:8080/v1",
        }
    ]
}

assistant = AssistantAgent("assistant", llm_config_gemini)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="TERMINATE", code_execution_config={"work_dir": "coding", "use_docker": False})

user_proxy.initiate_chat(assistant, message="Plot a chart of top performing blue chip stock price change YTD use dark mode")

Error:

Traceback (most recent call last):
  File "c:\Dev\ai\autogen\gemini\main..py", line 21, in <module>
    user_proxy.initiate_chat(assistant, message="Plot a chart of top performingi  blue chip stock price change YTD use dark mode")
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 550, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 348, in send
    recipient.receive(message, self, request_reply, silent)
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 481, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 906, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 625, in generate_oai_reply
    response = client.create(
               ^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\oai\client.py", line 247, in create
    response = self._completions_create(client, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\autogen\oai\client.py", line 327, in _completions_create
    response = completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\openai\_utils\_utils.py", line 272, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\openai\resources\chat\completions.py", line 645, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\openai\_base_client.py", line 1088, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\openai\_base_client.py", line 853, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\david\miniconda3\envs\pytorch\Lib\site-packages\openai\_base_client.py", line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[0].content' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
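The 400 points at messages[0].content, which in autogen is the assistant's system message. AssistantAgent's second positional parameter is system_message, not llm_config (an assumption from autogen's ConversableAgent API), so the snippet above most likely sent the whole config dict out as the system message content. A sketch of the difference:

# Second positional arg is system_message, so the config dict becomes
# messages[0].content and the proxy rejects it with a 400:
assistant = AssistantAgent("assistant", llm_config_gemini)

# Passing it by keyword keeps the system message a plain string:
assistant = AssistantAgent("assistant", llm_config=llm_config_gemini)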
inestyne commented 9 months ago

Sorry, that was bad code on my part; the proxy works just fine. Here's the corrected autogen test code:

from autogen import AssistantAgent, UserProxyAgent

llm_config_gemini = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": "***"
            "base_url": "http://localhost:8080/v1",
        }
    ]
}

assistant = AssistantAgent("assistant", llm_config=llm_config_gemini)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="TERMINATE", code_execution_config={"work_dir": "coding", "use_docker": False})

user_proxy.initiate_chat(assistant, message="Plot a chart of top performing blue chip stock price change YTD use dark mode")
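Note the two changes from the first attempt: the config entry now includes a "model" name, and llm_config is passed by keyword rather than positionally. The model value is the OpenAI-style name the proxy receives; since gemini-openai-proxy translates the OpenAI protocol to Gemini, a name like "gpt-4" here appears to end up served by a Gemini model rather than OpenAI.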