lambda7xx opened 1 month ago
Hey @lambda7xx, if you're able to run models using Ollama, I'd recommend trying the code in PR #3056, as it's a custom client class specifically for running local models in Ollama.
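Once those files are in place, a config along these lines should route requests through the new Ollama client (a minimal sketch; the field names follow the PR, and the host and model here are examples to adjust for your setup):

# Minimal sketch of an Ollama config for the PR #3056 custom client
# (field names per the PR; host/model values are examples)
llm_config = {
    "config_list": [
        {
            "api_type": "ollama",                     # routes to the new Ollama client class
            "model": "llama3.1:70b",                  # any model you've pulled into Ollama
            "client_host": "http://localhost:11434",  # default local Ollama endpoint
        }
    ]
}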
Thanks, let me try it.
@marklysze hi, here are my steps:
1. Terminal 1: ollama run llama3.1:70b
2. Terminal 2: run my code below
# THIS TESTS: TWO AGENTS WITH TERMINATION
from autogen import ConversableAgent

altmodel_llm_config = {
    "config_list": [
        {
            "api_type": "ollama",
            "model": "llama3.1:70b",
            "client_host": "http://192.168.0.1:11434",
            "seed": 42,
            "api_key": "NULL",
        }
    ]
}

jack = ConversableAgent(
    "Jack",
    llm_config=altmodel_llm_config,
    system_message="Your name is Jack and you are a comedian in a two-person comedy show.",
    is_termination_msg=lambda x: "FINISH" in x["content"],
)
emma = ConversableAgent(
    "Emma",
    llm_config=altmodel_llm_config,
    system_message="Your name is Emma and you are a comedian in a two-person comedy show. Say the word FINISH ONLY AFTER you've heard 2 of Jack's jokes.",
    is_termination_msg=lambda x: "FINISH" in x["content"],
)
chat_result = jack.initiate_chat(emma, message="Emma, tell me a joke about goldfish and peanut butter.", max_turns=10)
My log is:
Traceback (most recent call last):
File "/home/xiao/autogen/ollama_use.py", line 17, in <module>
jack = ConversableAgent(
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 160, in __init__
self._validate_llm_config(llm_config)
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 264, in _validate_llm_config
self.client = None if self.llm_config is False else OpenAIWrapper(**self.llm_config)
File "/home/xiao/autogen/autogen/oai/client.py", line 422, in __init__
self._register_default_client(config, openai_config) # could modify the config
File "/home/xiao/autogen/autogen/oai/client.py", line 523, in _register_default_client
client = OpenAI(**openai_config)
File "/home/xiao/anaconda3/envs/autogen/lib/python3.10/site-packages/openai/_client.py", line 105, in __init__
raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
(autogen) xiao@ptc:~/autogen$ python3 ollama_use.py
[autogen.oai.client: 08-01 07:20:37] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
[autogen.oai.client: 08-01 07:20:37] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
Jack (to Emma):
Emma, tell me a joke about goldfish and peanut butter.
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Traceback (most recent call last):
File "/home/xiao/autogen/ollama_use.py", line 31, in <module>
chat_result = jack.initiate_chat(emma, message="Emma, tell me a joke about goldfish and peanut butter.", max_turns=10)
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 1012, in initiate_chat
self.send(msg2send, recipient, request_reply=True, silent=silent)
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 656, in send
recipient.receive(message, self, request_reply, silent)
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 819, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 1973, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 1341, in generate_oai_reply
extracted_response = self._generate_oai_reply_from_client(
File "/home/xiao/autogen/autogen/agentchat/conversable_agent.py", line 1360, in _generate_oai_reply_from_client
response = llm_client.create(
File "/home/xiao/autogen/autogen/oai/client.py", line 732, in create
response = client.create(params)
File "/home/xiao/autogen/autogen/oai/client.py", line 320, in create
response = completions.create(**params)
File "/home/xiao/anaconda3/envs/autogen/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
TypeError: Completions.create() got an unexpected keyword argument 'client_host'
Hey @lambda7xx, it doesn't look like the Ollama code files are being used; the code files in the branch need to replace your installed ones, see the files here: https://github.com/microsoft/autogen/pull/3056/files
You just need to find your pyautogen install and replace the contents of these 5 files with the ones from that link above (note that ollama.py is a new file).
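If you're not sure where the install lives, something like this should locate it and confirm the new client is being picked up (a sketch; the OllamaClient class name and module path are assumed from the PR's new ollama.py):

# Locate the installed autogen package: this is the directory whose
# files need to be replaced with the ones from the PR branch.
import os
import autogen
print(os.path.dirname(autogen.__file__))

# After copying the files over, this import should succeed;
# on a stock pyautogen install it raises ImportError.
from autogen.oai.ollama import OllamaClient
print("Ollama client found")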
Thank you so much.
Describe the issue
I used the Local-LLMs/ guide to deploy my local model, but the result from the LLM is weird.
Steps to reproduce
Launch the local model:
In terminal 1, my command is: python -m fastchat.serve.controller
In terminal 2, the command is: python -m fastchat.serve.model_worker --model-path chatglm2-6b
In terminal 3, the command is: python -m fastchat.serve.openai_api_server --host localhost --port 8000
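For reference, the openai_api_server above exposes an OpenAI-compatible endpoint, so the AutoGen config only needs to point at it; a minimal sketch using the port and model from the commands above (the base_url field name assumes a recent pyautogen; older versions call it api_base):

# Sketch of a config_list entry for the FastChat OpenAI-compatible server
# started above (port 8000 from the openai_api_server command; the api_key
# is a placeholder since the local server does not validate it)
config_list = [
    {
        "model": "chatglm2-6b",
        "base_url": "http://localhost:8000/v1",
        "api_key": "NULL",
    }
]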
Code
My script is:
My OAI_CONFIG_LIST is:
Screenshots and logs
My log is:
I think the assistant agent should write Python code, but it does not.
Additional Information
No response