microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

NameError: name 'cerebras_AuthenticationError' is not defined when using OpenAI as LLM provider #3975

Closed · dspencej closed this 2 weeks ago

dspencej commented 2 weeks ago

What happened?

I'm encountering an issue with the autogen library (version 0.3.1) when using OpenAI as the LLM provider (openai package version 1.52.2). The error occurs while generating responses with ConversableAgent. The library ends up referencing a Cerebras-specific exception even though I'm not using Cerebras.

Undefined Exception Handling:

When handling the TypeError, the autogen library references cerebras_AuthenticationError, which is not defined or imported anywhere in the environment, leading to a NameError.

ERROR:llm.llm_agent:[UNEXPECTED EXCEPTION] Error during agent response from DMAgent: name 'cerebras_AuthenticationError' is not defined
ERROR:llm.llm_agent:Traceback (most recent call last):
  File ".../autogen/oai/client.py", line 831, in create
    response = client.create(params)
  File ".../autogen/oai/client.py", line 400, in create
    response = completions.create(**params)
  File ".../openai/_utils/_utils.py", line 274, in wrapper
    return func(*args, **kwargs)
TypeError: Completions.create() got an unexpected keyword argument 'params'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../llm/llm_agent.py", line 182, in get_agent_response
    response = agent.generate_reply(messages=prompt)
  File ".../autogen/agentchat/conversable_agent.py", line 2056, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File ".../autogen/agentchat/conversable_agent.py", line 1424, in generate_oai_reply
    extracted_response = self._generate_oai_reply_from_client(
  File ".../autogen/agentchat/conversable_agent.py", line 1443, in _generate_oai_reply_from_client
    response = llm_client.create(
  File ".../autogen/oai/client.py", line 877, in create
    cerebras_AuthenticationError,
NameError: name 'cerebras_AuthenticationError' is not defined
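For illustration, here is a minimal standalone snippet showing the kind of pattern that can produce this NameError. This is an assumption about the mechanism, not autogen's actual source, and the Cerebras import path is assumed: an exception class from an optional dependency is referenced in an except clause, but the name is only bound when the optional import succeeds.

# Hypothetical sketch, not autogen's code: the Cerebras exception name is
# only bound if the optional SDK import succeeds.
try:
    from cerebras.cloud.sdk import AuthenticationError as cerebras_AuthenticationError  # assumed import path
except ImportError:
    pass  # SDK not installed, so cerebras_AuthenticationError is never defined

try:
    raise TypeError("Completions.create() got an unexpected keyword argument 'params'")
except cerebras_AuthenticationError:
    # Evaluating the except clause raises NameError while the TypeError is
    # still being handled, matching the traceback above.
    pass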

What did you expect to happen?

Expected Behavior:

The ConversableAgent should generate a response using the OpenAI LLM without any errors.

How can we reproduce it (as minimally and precisely as possible)?

The error seems to originate from autogen/oai/client.py at line 877, where cerebras_AuthenticationError is referenced but not defined. This occurs despite setting LLM_PROVIDER to "openai" and configuring the agent to use OpenAI's API. The initial TypeError suggests that Completions.create() is being called with an unexpected keyword argument 'params', indicating a possible incompatibility between the versions of the autogen and openai libraries.

pip install autogen==0.3.1 openai==1.52.2

llm/llm_config.py

import os

LLM_PROVIDER = "openai"

OPENAI_MODEL = "gpt-4"  # or 'gpt-3.5-turbo'
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

CONFIG_LIST = [
    {
        "model": OPENAI_MODEL,
        "api_key": OPENAI_API_KEY,
        "n": 1,
        "max_tokens": 2048,
        "params": {
            "temperature": 1.0,
            "top_p": 1.0,
        },
    }
]

llm_config = {
    "config_list": CONFIG_LIST,
    "timeout": 1000,
}
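The TypeError above suggests the nested "params" key is forwarded verbatim to Completions.create(), which does not accept a params keyword. As a possible workaround (an untested assumption on my part), a flattened config entry with temperature and top_p promoted to the top level would avoid that:

# Flattened config entry: no nested "params" mapping, so nothing named
# "params" gets forwarded to the OpenAI client as a keyword argument.
CONFIG_LIST = [
    {
        "model": OPENAI_MODEL,
        "api_key": OPENAI_API_KEY,
        "n": 1,
        "max_tokens": 2048,
        "temperature": 1.0,
        "top_p": 1.0,
    }
]

llm_config = {
    "config_list": CONFIG_LIST,
    "timeout": 1000,
}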

llm/agents.py

from autogen import ConversableAgent
from llm.llm_config import llm_config

dm_agent = ConversableAgent(
    name="DMAgent",
    system_message="You are the Dungeon Master...",
    llm_config=llm_config,
    human_input_mode="NEVER",
    code_execution_config=False,
)

llm/llm_agent.py

dm_response = dm_agent.generate_reply(messages=prompt)

AutoGen version

0.3.1

Which package was this bug in

Core

Model used

gpt-4

Python version

3.11.8

Operating system

Windows 10

Any additional info you think would be helpful for fixing this bug

No response

abhigyan-b commented 1 week ago

@dspencej How did you resolve this? I am facing the same issue.

RubensZimbres commented 1 week ago

Same here, with Ollama gemma:2b running on GPU; in my case it's a timeout issue.

dspencej commented 6 days ago

Hey, sorry I can't remember exactly what was causing this issue. Here are the configurations that I am using for Ollama and OpenAI. I have not had any issues with these.

import os

# The two configs below are wrapped in helper functions here for clarity.

def get_ollama_config():
    # Local Ollama server exposed via its OpenAI-compatible endpoint
    config = {
        "config_list": [
            {
                "model": "llama3:latest",  # Default model for Ollama
                "base_url": "http://localhost:11434/v1",
                "api_key": "ollama",
                "price": [0, 0],  # no API cost for the local model
            }
        ],
        "timeout": 1000,
    }
    return config


def get_openai_config():
    # Hosted OpenAI API
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise EnvironmentError("OPENAI_API_KEY not set in environment variables.")

    config = {
        "config_list": [
            {
                "model": "gpt-4",  # You can set this to a default model
                "api_key": api_key,
                "api_type": "openai",
                "base_url": "https://api.openai.com/v1",
                "n": 1,
                "max_tokens": 2048,
                "temperature": 0.7,
                "top_p": 0.9,
            }
        ],
        "timeout": 1000,
    }
    return config

Here is how I am defining my agent:

    llm_config = get_openai_config()  # or get_ollama_config(), from the helpers above

    dm_agent = ConversableAgent(
        name="DMAgent",
        system_message="You are a helpful assistant.",
        llm_config=llm_config,
        human_input_mode="NEVER",
        code_execution_config=False,
    )

I believe it was related to how I was constructing the message before passing it to generate_reply:

dm_prompt_content = "String prompt to agent."
msg = [{"content": dm_prompt_content, "role": "user"}]
response = dm_agent.generate_reply(messages=msg)

Sorry I couldn't remember more specifically. Good luck!