microsoft / autogen

A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
https://microsoft.github.io/autogen/

[Bug]: Local Models try to connect to OpenAI #1266

Open CHesketh76 opened 5 months ago

CHesketh76 commented 5 months ago

Describe the bug

After setting up agents and a workflow with local endpoints, I get this error when running in the Playground: `openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable`. (screenshot: AutoGen_BUG_01)

Steps to reproduce

  1. Install and run Ollama with the Mistral model.
  2. Install and run LiteLLM, and upgrade gunicorn.
  3. The LiteLLM endpoint will be http://0.0.0.0:8000/.
  4. Create a new agent named mistral, set its base URL to http://0.0.0.0:8000/, give it any system prompt, then save.
  5. Create a new workflow and change the model's base URL to http://0.0.0.0:8000/ for both the Sender and the Receiver, then save.
  6. Go to the Playground and create a new chat with the newly created workflow. Say hi; the error message appears in the app and the console displays the full error log.

Expected Behavior

When I say "hello there" I expect a response, not an error message from the application. (screenshot: AutoGen_BUG_03)

Screenshots and logs

    Traceback (most recent call last):
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/web/app.py", line 68, in add_message
        response_message: Message = chatmanager.chat(
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/chatmanager.py", line 22, in chat
        flow = AutoGenWorkFlowManager(config=flow_config, history=history, work_dir=scratch_dir)
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/workflowmanager.py", line 35, in __init__
        self.sender = self.load(config.sender)
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/workflowmanager.py", line 156, in load
        agent = self.load_agent_config(agent_spec.config, agent_spec.type)
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/workflowmanager.py", line 175, in load_agent_config
        agent = autogen.UserProxyAgent(**agent_config.dict())
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/agentchat/user_proxy_agent.py", line 83, in __init__
        super().__init__(
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 126, in __init__
        self.client = OpenAIWrapper(**self.llm_config)
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/oai/client.py", line 105, in __init__
        self._clients = [self._client(extra_kwargs, openai_config)]
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/oai/client.py", line 162, in _client
        client = OpenAI(**openai_config)
      File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/openai/_client.py", line 97, in __init__
        raise OpenAIError(
    openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

(screenshot: AutoGen_BUG_02)

Additional Information

AutoGen version: autogenstudio-0.0.25a0
Operating system: Windows and Linux
Python version: 3.11

ekzhu commented 5 months ago

@victordibia

victordibia commented 5 months ago

Hi, thanks for noting this.

Can you confirm two things:

Also ensure that all agents (sender and receiver) are configured with your local model

CHesketh76 commented 5 months ago

I have used the Ollama + LiteLLM API, the LM Studio API, and the Text Generation WebUI API. All of these are OpenAI-compliant. I have deleted all the default models and workflows; the only model that is up is the local one I am hosting.

VikingRaider88 commented 5 months ago

Having exactly the same problem

CHesketh76 commented 5 months ago

I also tried this on Windows, Windows + WSL, and a fresh install of Ubuntu, and all have the same problem.

theaxxxin commented 5 months ago

It finally worked. I deleted all the agents, skills and workflows first. After that, I directly made a new workflow and then added the details for the user_proxy agent and the assistant with my local LLM's IP address.

victordibia commented 5 months ago

Thanks for the update @theaxxxin. It is possible that on some machines, the full IP address (instead of http://localhost) is needed.

CHesketh76 commented 5 months ago

I do not get this issue when I run AutoGen in plain Python; the issue is only present in AutoGen Studio.
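
For reference, this is roughly the plain-Python setup that works for me (the model name, port, and endpoint path are placeholders for whatever your local server exposes). It works because the dummy api_key is passed explicitly, so the OpenAI client never falls back to the OPENAI_API_KEY environment variable:

    import autogen

    # Assumed local OpenAI-compatible endpoint (e.g. LiteLLM in front of Ollama).
    llm_config = {
        "config_list": [
            {
                "model": "mistral",                      # placeholder local model name
                "base_url": "http://localhost:8000/v1",  # placeholder local endpoint
                "api_key": "not-needed",                 # any non-empty value satisfies the OpenAI client
            }
        ]
    }

    assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = autogen.UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",
        code_execution_config=False,
    )
    user_proxy.initiate_chat(assistant, message="hi")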

maxim-saplin commented 5 months ago

Try setting the API key to any value; e.g., this OAI_CONFIG_LIST works fine (while I run a local inference server at localhost):

[
    {
        "model": "phi2",
        "api_key": "not-needed",
        "base_url": "http://localhost:1234/v1"
    }
]


As soon as you remove the "api_key" you'll get errors.
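
If it helps, here is a minimal sketch of how such a config file is typically consumed in plain AutoGen (the filename and the "phi2" model tag are assumed to match the snippet above):

    import autogen

    # Load the OAI_CONFIG_LIST file shown above from the working directory.
    config_list = autogen.config_list_from_json(
        "OAI_CONFIG_LIST",
        filter_dict={"model": ["phi2"]},  # keep only the local model entry
    )

    assistant = autogen.AssistantAgent(
        "assistant",
        llm_config={"config_list": config_list},
    )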

P.S.: as far as I remember, Mistral/Mixtral models can have issues with system messages and fail when they see one; most agents rely on those messages.

ankutty commented 5 months ago

Did anyone solve this issue? I am getting the same issue on Mac, using the same toolset as above. Tried it with http://0.0.0.0:8000 and http://127.0.0.1:8000.

CHesketh76 commented 5 months ago

@ankutty Sort of: delete all the default workflows and models that you get after first installing. Then manually fill in the model name, and add "not-needed" for the API key along with the URL. Do this for EVERY place a model is asked for, including the userproxy and the primary assistant.

Do not believe the message at the bottom when adjusting userproxy and primary_assistant; it will not use the model you set up for the agent.

EDIT: make sure you read the API documentation. When you give AutoGen your URL, e.g. http://127.0.0.1:8000/, it will append /chat/completions to the end of that URL. I am using Text Generation WebUI's API, which is `http://127.0.0.1:8000/v1` and not just `http://127.0.0.1:8000/`.
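
In other words, the OpenAI client joins the endpoint path onto whatever base URL you give it, so the base URL has to include the /v1 prefix for OpenAI-compatible servers. A rough illustration of the URL arithmetic (not AutoGen code, just the idea):

    # Illustration only: the request path is appended to base_url.
    base_url = "http://127.0.0.1:8000/v1"
    request_url = base_url.rstrip("/") + "/chat/completions"
    print(request_url)  # http://127.0.0.1:8000/v1/chat/completions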

ankutty commented 5 months ago

@CHesketh76 Thank you for the prompt reply. I am using a local Mistral model and went through the same steps as you described, but it is not working with only the Mistral model. It needs the OPENAI_API_KEY environment variable exported before you start AutoGen Studio. What I fail to understand is: if I am not using any OpenAI model, just the Mistral model, why do we need it?

CHesketh76 commented 5 months ago

@ankutty They must have it hard-coded into AutoGen Studio for some reason. Try running AutoGen in a Jupyter notebook without the UI and see if it works there.

EDIT: that could help diagnose whether your LLM API is compatible.

EDIT2: running it in a notebook will also help you know if the code executor works. This is another issue I ran into with the Studio.
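
Something like this is enough for a quick compatibility check in a notebook, bypassing AutoGen entirely (the endpoint URL and model name are just examples; adjust them for your server):

    from openai import OpenAI

    # Raw OpenAI-client request against the local server.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    response = client.chat.completions.create(
        model="mistral",  # whatever name your local server exposes
        messages=[{"role": "user", "content": "Say hi"}],
    )
    print(response.choices[0].message.content)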

ankutty commented 5 months ago

You were right, @CHesketh76. I just passed a random value for the environment variable with `export OPENAI_API_KEY=` and it worked.

@victordibia Should that be considered a bug, in that passing OPENAI_API_KEY shouldn't be mandatory?

CHesketh76 commented 5 months ago

AutoGen Studio 2 makes this issue worse. Local model profiles and setup are overwritten in favor of the OpenAI API.

junkei-okinawa commented 5 months ago

@victordibia We have created a PR with a bug fix for this issue. Please check it. https://github.com/microsoft/autogen/pull/1428

rob10e commented 5 months ago

> @ankutty Sort of: delete all the default workflows and models that you get after first installing. Then manually fill in the model name, and add "not-needed" for the API key along with the URL. Do this for EVERY place a model is asked for, including the userproxy and the primary assistant.
>
> Do not believe the message at the bottom when adjusting userproxy and primary_assistant; it will not use the model you set up for the agent.
>
> EDIT: make sure you read the API documentation. When you give AutoGen your URL, e.g. http://127.0.0.1:8000/, it will append /chat/completions to the end of that URL. I am using Text Generation WebUI's API, which is `http://127.0.0.1:8000/v1` and not just `http://127.0.0.1:8000/`.

This is what I needed for LM Studio: http://localhost:1234/v1

federicodoing commented 2 months ago

> Thanks for the update @theaxxxin. It is possible that on some machines, the full IP address (instead of http://localhost) is needed.

On my machine it was the opposite: http://0.0.0.0:4000 was not working while http://localhost:4000 did work. Always with "not-needed" as the API key. Testing with Llama 3 through Ollama.

mdkberry commented 2 months ago

Still happening with an Ollama LLM. I have stripped everything back and the model tests fine in AutoGen Studio, but it runs with the error:

    ValueError: api_key is not present in llm_config or OPENAI_API_KEY env variable for agent adcopy_critic_agent. Update your workflow to provide an apikey to use the LLM.

I stripped everything away and rebuilt; same problem.

But all the agents, workflows, and models are using the Ollama LLM. It tests fine with these model settings:

model name: mistral
API key: NA
base URL: http://localhost:11434/v1

But the error remains the moment I use the Playground. Me sad.

menguzat commented 1 month ago

I'm having the exact same error and went through the exact same process.

federicodoing commented 1 month ago

> API KEY: NA

Have you tried `not-needed` here instead?

mdkberry commented 1 month ago

> API KEY: NA
>
> Have you tried `not-needed` here instead?

I tried every configuration of everything; nothing worked. I gave up and went back to CrewAI.