Closed CHesketh76 closed 1 month ago
@victordibia
Hi, thanks for noting this.
Can you confirm two things:
Also ensure that all agents (sender and receiver) are configured with your local model
I have used Ollama + LiteLLM API, LM Studio API, and Text Generation WebUI API. All of these are OpenAI-compliant. I have deleted all default models and workflows; the only model that is up is the local one I am hosting.
Having exactly the same problem
I also tried this on Windows, Windows + WSL, and a fresh install of Ubuntu, and all have the same problem.
It finally worked. I deleted all the agents, skills and workflows first. After that, I directly made a new workflow and then added the details for the user_proxy agent and the assistant with my local LLM's IP address.
Thanks for the update @theaxxxin . It is possible that on some machines, the full ip address (instead of http://localhost) is needed.
I do not get this issue when I run AutoGen in a Python script; the issue is only present in AutoGen Studio.
Try setting the api key to any value; e.g. this OAI_CONFIG_LIST works fine (while I run a local inference server at localhost):
```json
[
  {
    "model": "phi2",
    "api_key": "not-needed",
    "base_url": "http://localhost:1234/v1"
  }
]
```
As soon as you remove the "api_key" you'll get errors
P.S.: as far as I remember, mistral/mixtral models can have issues with system messages and fail when they see one; most agents rely on those messages.
Did anyone solve this issue? I am getting the same issue on Mac, using the same toolsets as above. Tried it with http://0.0.0.0:8000 and http://127.0.0.1:8000.
@ankutty Sort of. Delete all the default workflows and models that you get after first installing. Then manually fill in the model name, add "not-needed" for the api key, and set the url. Do this for EVERY instance where a model is asked for, including userproxy and primary assistant.
Do not believe the message at the bottom when adjusting userproxy and primary_assistant; it will not use the model agent you set up.
EDIT: make sure you read the API documentation. When you give AutoGen your url, e.g. http://127.0.0.1:8000/, it will append /chat/completions to the end of that url. I am using Text Generation WebUI's API, which is http://127.0.0.1:8000/v1 and not just http://127.0.0.1:8000/.
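A quick way to check that a local server really speaks the OpenAI chat API at the base URL you plan to give AutoGen is to hit it with the openai client directly. This is just a sketch: the port, the /v1 suffix, and the model name ("mistral") are assumptions based on the comments above, so substitute whatever your server actually exposes.

```python
# Minimal connectivity check against a local OpenAI-compatible endpoint.
# Assumes a Text Generation WebUI / LiteLLM style server on 127.0.0.1:8000
# serving a model called "mistral" (both are assumptions).
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",  # note the /v1 -- the client appends /chat/completions
    api_key="not-needed",                  # any non-empty string; local servers usually ignore it
)

resp = client.chat.completions.create(
    model="mistral",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```

If this call fails, the endpoint itself is the problem rather than anything in AutoGen Studio.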
@CHesketh76 Thank you for the prompt reply. I am using a local Mistral model. I went through the same steps as you said, but it is not working with only the Mistral model. It needs an export OPENAI_API_KEY environment variable before you start AutoGen Studio. What I fail to understand is: if I am not using any OpenAI model, just the Mistral model, why do we need it?
@ankutty They must have it hard-coded into AutoGen Studio for some reason. Try running AutoGen in a Jupyter notebook without the UI and see if it works there.
EDIT: that could help diagnose whether your LLM API is compatible.
EDIT2: running it in a notebook will also help you know if the code executor works. This is another issue I ran into with the Studio.
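For reference, a bare-bones notebook test without the Studio might look like the sketch below. The model name, port, and work_dir are placeholders taken from the comments above, not anything the Studio generates.

```python
# Rough sketch of running AutoGen directly (no Studio) against a local,
# OpenAI-compatible server. Model name, URL, and work_dir are assumptions.
import autogen

config_list = [
    {
        "model": "mistral",
        "api_key": "not-needed",               # must be a non-empty string
        "base_url": "http://127.0.0.1:8000/v1",
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# If this works but Studio does not, the problem is in the Studio workflow
# configuration rather than in the endpoint itself.
user_proxy.initiate_chat(assistant, message="Say hello.")
```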
You were right @CHesketh76 . I just passed a random value for the environment variable - export OPENAI_API_KEY= - and it worked.
@victordibia - Should that be considered a bug? Passing the OPENAI_API_KEY shouldn't be mandatory.
AutoGen Studio 2 makes this issue worse. Local model profiles and setup are overwritten in favor of the OpenAI API.
@victordibia We have created a PR with a bug fix for this issue. Please check it. https://github.com/microsoft/autogen/pull/1428
This is what I needed for LM Studio.... http://localhost:1234/v1
> Thanks for the update @theaxxxin . It is possible that on some machines, the full ip address (instead of http://localhost) is needed.
On my machine the opposite worked: http://0.0.0.0:4000 was not working while http://localhost:4000 did work. Always with "not-needed" as the api key. Testing with llama3 through ollama.
Still happening with an Ollama LLM. I have stripped everything back and the model tests fine in AutoGen Studio, but it runs with the error:
ValueError: api_key is not present in llm_config or OPENAI_API_KEY env variable for agent adcopy_critic_agent. Update your workflow to provide an apikey to use the LLM.
I stripped everything away and rebuilt. Same problem.
but all the agents, workflows, and models are using the Ollama LLM. It tests fine with model settings:
model name: mistral
API KEY: NA
base URL: http://localhost:11434/v1
but the error remains the moment I use the playground. me sad.
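For comparison, the equivalent pyautogen configuration for that Ollama setup would carry the placeholder key explicitly in every agent's llm_config; the error above fires when any single agent in the workflow is missing it. The agent name below comes from the error message and the model/port from the comment, purely as an illustration.

```python
# Sketch: every agent that should use the local model needs an llm_config whose
# config_list includes an api_key entry, even if it is just a placeholder.
import autogen

llm_config = {
    "config_list": [
        {
            "model": "mistral",
            "api_key": "NA",                          # placeholder so the api_key check passes
            "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        }
    ]
}

# The ValueError names adcopy_critic_agent, so that agent (and every other agent
# in the workflow) must be given this llm_config, not only the model tested on
# the Models page.
critic = autogen.AssistantAgent(name="adcopy_critic_agent", llm_config=llm_config)
```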
I'm having the exact same error. Went through the exact same process.
> API KEY: NA

have you tried this: `not-needed`?
tried every configuration of everything, nothing worked. I gave up and went back to CrewAI.
I cannot repro. @victordibia - do you think this is still an issue?
Already fixed in #1428 . Closing.
Describe the bug
After setting up agents and a workflow with local endpoints, I get this error message
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
when running in the Playground.
Steps to reproduce
1. Install Ollama with the mistral model Mistral.
2. Install litellm and upgrade gunicorn.
3. Run litellm; the endpoint will be http://0.0.0.0:8000/.
4. Create a model mistral and set the base url to be http://0.0.0.0:8000/.
5. Create an agent, give it any system prompt, then save.
6. Set http://0.0.0.0:8000/ for both the Sender and Receiver in the Workflow. Then save.
Expected Behavior
When I try "hello there" I expect a response, not an error message from the application.
Screenshots and logs
```
Traceback (most recent call last):
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/web/app.py", line 68, in add_message
    response_message: Message = chatmanager.chat(
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/chatmanager.py", line 22, in chat
    flow = AutoGenWorkFlowManager(config=flow_config, history=history, work_dir=scratch_dir)
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/workflowmanager.py", line 35, in __init__
    self.sender = self.load(config.sender)
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/workflowmanager.py", line 156, in load
    agent = self.load_agent_config(agent_spec.config, agent_spec.type)
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogenstudio/workflowmanager.py", line 175, in load_agent_config
    agent = autogen.UserProxyAgent(**agent_config.dict())
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/agentchat/user_proxy_agent.py", line 83, in __init__
    super().__init__(
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 126, in __init__
    self.client = OpenAIWrapper(**self.llm_config)
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/oai/client.py", line 105, in __init__
    self._clients = [self._client(extra_kwargs, openai_config)]
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/autogen/oai/client.py", line 162, in _client
    client = OpenAI(**openai_config)
  File "/home/bear/.pyenv/versions/3.11.6/lib/python3.11/site-packages/openai/_client.py", line 97, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
Additional Information
AutoGen Version: autogenstudio-0.0.25a0
Operating System: Windows and Linux
Python Version: 3.11