ragesh2000 closed this issue 1 month ago
Does your local LLM support function calling?
@kevin666aa
Yes it supports function calling
Which model specifically are you using? Do you have examples of it successfully invoking function calls without AutoGen?
I believe this is the same issue we are trying to solve in https://github.com/microsoft/autogen/issues/1206
@ragesh2000 can you please check if it is still failing for you with the latest version from git, you can install it with
pip install git+https://github.com/microsoft/autogen.git@main
Now I am getting an error as soon as I start running it. @davorrunje
@ragesh2000 can you post your code? The currency function notebook you linked works for me on main branch. The error message looks like a misconfiguration of llm_config.
Sure
import autogen
import pandas as pd
import os
from typing import Literal
from typing_extensions import Annotated
config_list = [
    {
        'base_url': "http://0.0.0.0:8000",
        'api_key': "NULL",
    }
]
llm_config = {
    "config_list": config_list,
    "timeout": 120,
}
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For currency exchange tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=llm_config,
)
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
)
CurrencySymbol = Literal["USD", "EUR"]
def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")
@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{quote_amount} {quote_currency}"
assert user_proxy.function_map["currency_calculator"]._origin == currency_calculator
# start the conversation
user_proxy.initiate_chat(
    chatbot,
    message="How much is 123.45 USD in EUR?",
)
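For reference, the answer the agents should arrive at can be checked directly; this standalone snippet reuses the fixed rate hard-coded in exchange_rate above (1 USD = 1/1.1 EUR, a deliberate simplification in the example):

```python
# Standalone check of the expected result, using the same hard-coded
# rate as the exchange_rate function in the example (1 USD -> 1/1.1 EUR).
def expected_eur(usd_amount: float) -> float:
    return usd_amount * (1 / 1.1)

result = expected_eur(123.45)
print(f"{result} EUR")  # roughly 112.23 EUR
```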
Also, I am using a llama2 model served through litellm. @ekzhu
I see. It looks like you may have to specify a model value in the config list entry.
Can I set it to the model I am using, i.e. 'model': 'llama2'?
You can try it. We currently rely on the openai client library, but we are working toward a customizable client in #831.
It wasn't working.
Did you get a new error message?
I am getting the following error message when I set it to llama2:
Traceback (most recent call last):
File "/home/gpu/ai/llm/autogen/userproxy_test.py", line 64, in <module>
user_proxy.initiate_chat(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 667, in initiate_chat
self.send(self.generate_init_message(**context), recipient, silent=silent)
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 420, in send
recipient.receive(message, self, request_reply, silent)
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 573, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1239, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 754, in generate_oai_reply
response = client.create(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/oai/client.py", line 278, in create
response = self._completions_create(client, params)
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/oai/client.py", line 543, in _completions_create
response = completions.create(**params)
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_utils/_utils.py", line 271, in wrapper
return func(*args, **kwargs)
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 643, in create
return self._post(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 1112, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 859, in request
return self._request(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 934, in _request
return self._retry_request(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _retry_request
return self._request(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 934, in _request
return self._retry_request(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _retry_request
return self._request(
File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 949, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'detail': 'ollama does not support parameters: {\'tools\': [{\'type\': \'function\', \'function\': {\'description\': \'Currency exchange calculator.\', \'name\': \'currency_calculator\', \'parameters\': {\'type\': \'object\', \'properties\': {\'base_amount\': {\'type\': \'number\', \'description\': \'Amount of currency in base_currency\'}, \'base_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'USD\', \'description\': \'Base currency\'}, \'quote_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'EUR\', \'description\': \'Quote currency\'}}, \'required\': [\'base_amount\']}}}]}. To drop these, set `litellm.drop_params=True`.\n\nTraceback (most recent call last):\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/main.py", line 417, in completion\n optional_params = get_optional_params(\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 2383, in get_optional_params\n _check_valid_arg(supported_params=supported_params)\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 2065, in _check_valid_arg\n raise UnsupportedParamsError(status_code=500, message=f"{custom_llm_provider} does not support parameters: {unsupported_params}. 
To drop these, set `litellm.drop_params=True`.")\nlitellm.utils.UnsupportedParamsError: ollama does not support parameters: {\'tools\': [{\'type\': \'function\', \'function\': {\'description\': \'Currency exchange calculator.\', \'name\': \'currency_calculator\', \'parameters\': {\'type\': \'object\', \'properties\': {\'base_amount\': {\'type\': \'number\', \'description\': \'Amount of currency in base_currency\'}, \'base_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'USD\', \'description\': \'Base currency\'}, \'quote_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'EUR\', \'description\': \'Quote currency\'}}, \'required\': [\'base_amount\']}}}]}. To drop these, set `litellm.drop_params=True`.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/main.py", line 189, in acompletion\n response = await loop.run_in_executor(None, func_with_context)\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/concurrent/futures/thread.py", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 1482, in wrapper\n raise e\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 1411, in wrapper\n result = original_function(*args, **kwargs)\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/main.py", line 1425, in completion\n raise exception_type(\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 4763, in exception_type\n raise e\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 4733, in exception_type\n raise APIConnectionError(\nlitellm.exceptions.APIConnectionError: ollama does not support parameters: {\'tools\': 
[{\'type\': \'function\', \'function\': {\'description\': \'Currency exchange calculator.\', \'name\': \'currency_calculator\', \'parameters\': {\'type\': \'object\', \'properties\': {\'base_amount\': {\'type\': \'number\', \'description\': \'Amount of currency in base_currency\'}, \'base_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'USD\', \'description\': \'Base currency\'}, \'quote_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'EUR\', \'description\': \'Quote currency\'}}, \'required\': [\'base_amount\']}}}]}. To drop these, set `litellm.drop_params=True`.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/proxy/proxy_server.py", line 950, in chat_completion\n response = await litellm.acompletion(**data)\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 1578, in wrapper_async\n raise e\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 1523, in wrapper_async\n result = await original_function(*args, **kwargs)\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/main.py", line 196, in acompletion\n raise exception_type(\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 4763, in exception_type\n raise e\n File "/home/gpu/miniconda3/envs/autogen/lib/python3.10/site-packages/litellm/utils.py", line 4733, in exception_type\n raise APIConnectionError(\nlitellm.exceptions.APIConnectionError: ollama does not support parameters: {\'tools\': [{\'type\': \'function\', \'function\': {\'description\': \'Currency exchange calculator.\', \'name\': \'currency_calculator\', \'parameters\': {\'type\': \'object\', \'properties\': {\'base_amount\': {\'type\': \'number\', \'description\': \'Amount of currency in 
base_currency\'}, \'base_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'USD\', \'description\': \'Base currency\'}, \'quote_currency\': {\'enum\': [\'USD\', \'EUR\'], \'type\': \'string\', \'default\': \'EUR\', \'description\': \'Quote currency\'}}, \'required\': [\'base_amount\']}}}]}. To drop these, set `litellm.drop_params=True`.\n'}
@ragesh2000 a fix for this was merged yesterday (#1227). Can you please install the latest version from GitHub and try again:
pip install git+https://github.com/microsoft/autogen.git@main
Actually, this error message was coming from the latest version (0.2.7).
Just now I confirmed it by uninstalling and reinstalling from git; the error is the same. @davorrunje
I think the issue is that the model itself does not support function calling.
Have you tried to enable the adding function call to prompt feature offered by litellm? https://litellm.vercel.app/docs/completion/function_call#function-calling-for-non-openai-llms
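For reference, when the model is served through the litellm proxy, this feature is typically enabled in the proxy config. A sketch, assuming a litellm config.yaml and an ollama-served llama2 (check the linked litellm docs for the exact setting names in your version):

```yaml
model_list:
  - model_name: llama2
    litellm_params:
      model: ollama/llama2

litellm_settings:
  # inject function definitions into the prompt for providers without
  # native function calling, and silently drop unsupported parameters
  add_function_to_prompt: true
  drop_params: true
```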
Yes, I have enabled litellm's add-function-call-to-prompt feature.
Now I just tried another model that is fine-tuned for function calling, https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2/blob/main/llama-2-7b-function-calling.Q3_K_M.gguf
The result was the same. @ekzhu
Forgot to mention: in a recent release we added backward compatibility for function calling with older OpenAI API versions. I am not actively following litellm's API specs. Do they support tool calls for non-OpenAI models? You can try adding api_style="function" to see if this helps:
@agent2.register_for_llm(description="...", api_style="function")
def my_function(a: Annotated[str, "description of a parameter"] = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)
Adding api_style="function" helped me get rid of that error message. But now the problem is that my assistant agent is aware of the function to use, while the user proxy is not. Is that an issue with the model I am using? @ekzhu
@ragesh2000 Did you register the function for execution with user_proxy (see https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#tool-calling)? Something like this:
@user_proxy.register_for_execution()
@agent2.register_for_llm(description="...", api_style="function")
def my_function(a: Annotated[str, "description of a parameter"] = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)
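Conceptually, the two decorators populate two different registries: register_for_llm advertises the function's schema to the LLM-facing agent, while register_for_execution adds the callable to the executing agent's function_map. A simplified, self-contained sketch of this pattern (not AutoGen's actual implementation; the class names here are stand-ins):

```python
from typing import Callable, Dict, List

class ExecutorAgent:
    """Stand-in for UserProxyAgent: holds the callables it may execute."""
    def __init__(self) -> None:
        self.function_map: Dict[str, Callable] = {}

    def register_for_execution(self):
        def decorator(func: Callable) -> Callable:
            self.function_map[func.__name__] = func
            return func
        return decorator

class LLMAgent:
    """Stand-in for AssistantAgent: holds function schemas shown to the LLM."""
    def __init__(self) -> None:
        self.functions: List[dict] = []

    def register_for_llm(self, description: str):
        def decorator(func: Callable) -> Callable:
            self.functions.append({"name": func.__name__, "description": description})
            return func
        return decorator

user_proxy = ExecutorAgent()
agent2 = LLMAgent()

@user_proxy.register_for_execution()
@agent2.register_for_llm(description="Demo function.")
def my_function(a: str = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)

print("my_function" in user_proxy.function_map)  # True: registered for execution
print(agent2.functions[0]["name"])               # my_function: advertised to the LLM
```

If only the register_for_llm decorator is applied, the model can propose the call but no agent is able to execute it, which matches the symptom described above.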
Yes I did
Oh, the screenshot above indicates that the model tried to execute Python code rather than call the function. Could you please share the source code of your example?
Sure
import autogen
import pandas as pd
import os
from typing import Literal
from typing_extensions import Annotated

config_list = [
    {
        'base_url': "http://0.0.0.0:8000",
        'api_key': "NULL",
    }
]
llm_config = {
    "config_list": config_list,
    "timeout": 120,
}

chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For currency exchange tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=llm_config,
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
)

CurrencySymbol = Literal["USD", "EUR"]

def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")

@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{quote_amount} {quote_currency}"

assert user_proxy.function_map["currency_calculator"]._origin == currency_calculator

# start the conversation
user_proxy.initiate_chat(
    chatbot,
    message="How much is 123.45 USD in EUR?",
)
Also, I am using a llama2 model served through litellm. @ekzhu
This is the code @davorrunje
I assumed you added api_style="function". Is there any other change? For some reason, code execution is enabled, and it is not in the example you just shared above.
Yes, I added api_style="function". Sorry, I forgot to mention that in the code above.
Did you use code_execution_config in your example?
No
Is there a way to expose your LiteLLM endpoint to me so I can debug it? You can DM me on Discord with info as you obviously don't want to make it public.
Sorry, I can't reveal the endpoint. Is there any other way you can debug?
Can you set up an endpoint just for debugging and kill it after we are done?
I think I know what is going on. The UserProxyAgent is registered with the function, but this is only handled via the generate_tool_call_reply method, which only takes effect when the incoming message has a function_call field. Since the model does not support function calling, litellm adds the function signature into the prompt itself, and the model generates the function call inside the "content" field, not the "function_call" field. So the UserProxyAgent goes straight into code execution mode and tries to execute the function call, but without the function being defined first.
To make this work: first, we need the model to generate a structured field that contains the function call and its parameters, say something like {"function_call": {"name": "calculator", "arguments": [...]}}. The field should be serialized and put inside the "content" part of the message. This might be achievable via Guidance. Second, we need to register a new reply function on the UserProxyAgent that parses the structured field, converts the input parameters into Python objects, calls the registered function, and returns the result. Because the model is not GPT-4, you may also need to add some context to the result, such as "The function ... returns ...".
Example on AutoGen + Guidance: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb
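A minimal, self-contained sketch of the second step described above (the helper names here are hypothetical, not AutoGen API): parse a serialized function_call out of the message content and dispatch to a registry of Python functions:

```python
import json
import re
from typing import Callable, Dict, Optional

# hypothetical registry mapping function names to Python callables
FUNCTION_REGISTRY: Dict[str, Callable] = {}

def register(func: Callable) -> Callable:
    FUNCTION_REGISTRY[func.__name__] = func
    return func

@register
def currency_calculator(base_amount: float, base_currency: str = "USD",
                        quote_currency: str = "EUR") -> str:
    # same fixed rates as the thread's example
    rate = 1 / 1.1 if (base_currency, quote_currency) == ("USD", "EUR") else 1.1
    return f"{base_amount * rate} {quote_currency}"

def content_function_call_reply(content: str) -> Optional[str]:
    """Parse a {"function_call": ...} JSON object embedded in the message
    content and execute the named function; return None if there is none."""
    match = re.search(r'\{.*"function_call".*\}', content, re.DOTALL)
    if match is None:
        return None
    call = json.loads(match.group(0))["function_call"]
    func = FUNCTION_REGISTRY[call["name"]]
    result = func(**call.get("arguments", {}))
    # give a weaker model extra context about what just happened
    return f'The function {call["name"]} returned: {result}'

message = '{"function_call": {"name": "currency_calculator", "arguments": {"base_amount": 123.45}}}'
print(content_function_call_reply(message))
```

In AutoGen, a function like content_function_call_reply would be wired in through the agent's reply-registration mechanism so it runs before the code execution reply; the snippet above only demonstrates the parsing and dispatch.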
I was following the function calling notebook https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_currency_calculator.ipynb exactly, but instead of the output shown there I keep getting an error. The only change I made is that instead of an OpenAI model I used an open-source model via litellm. Can anybody tell me why this is happening?