microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/
Creative Commons Attribution 4.0 International

[Issue]: Ollama does not work with tool use #2924

Open haozhuang0000 opened 3 months ago

haozhuang0000 commented 3 months ago

Describe the issue

I use Ollama with AutoGen, and it ignores my tool. Please find the code below (from https://microsoft.github.io/autogen/docs/tutorial/tool-use/):

from typing import Annotated, Literal

Operator = Literal["+", "-", "*", "/"]

config_list = [
    {
        "model": "llama3:70b-instruct",
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",
    }
]

def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    if operator == "+":
        return a + b
    elif operator == "-":
        return a - b
    elif operator == "*":
        return a * b
    elif operator == "/":
        return int(a / b)
    else:
        raise ValueError("Invalid operator")

import os

from autogen import ConversableAgent

assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant. "
    "You can help with simple calculations. "
    "Return 'TERMINATE' when the task is done.",
    llm_config={"config_list": config_list},
)

user_proxy = ConversableAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None
    and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
)

from autogen import register_function

register_function(
    calculator,
    caller=assistant,  # The assistant agent can suggest calls to the calculator.
    executor=user_proxy,  # The user proxy agent can execute the calculator calls.
    name="calculator",  # By default, the function name is used as the tool name.
    description="A simple calculator",  # A description of the tool.
)

chat_result = user_proxy.initiate_chat(assistant, message="What is (44232 + 13312 / (232 - 32)) * 5?")
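For reference, evaluating the prompt's expression directly in plain Python (no agents involved) gives the value a correct tool-using run should arrive at:

```python
# Evaluate the test expression directly to establish the expected answer.
result = (44232 + 13312 / (232 - 32)) * 5
print(result)  # 221492.8
```

Note that because the calculator tool truncates division with int(), a fully tool-based run would compute 13312 / 200 as 66 rather than 66.56, giving (44232 + 66) * 5 = 221490 instead.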

My output:

Assistant (to User):

I'll help you with that calculation using the calculator tool.

First, let's follow the order of operations (PEMDAS):

  1. Evaluate the expression inside the parentheses: 232 - 32 = 200
  2. Now, divide 13312 by 200: 13312 ÷ 200 = 66.56
  3. Add 44,232 and 66.56: 44,232 + 66.56 = 44,298.56
  4. Multiply the result by 5: 44,298.56 × 5 = 221,492.8

So, the final answer is:

(44,232 + 13312 / (232 - 32)) * 5 = 221,492.8

Hk669 commented 3 months ago

@haozhuang0000 That could be because of the llama3 model. Try another model and I think it should work. Please let us know. Thanks!

haozhuang0000 commented 3 months ago

Hi @Hk669,

Thanks for your reply. I removed all the "*" characters for better printing in Markdown.

I tried Phi3:latest:

User (to Assistant):

What is (44232 + 13312 / (232 - 32)) * 5?

USING AUTO REPLY... Assistant (to User):

First, perform the operation inside the parentheses:

(232 - 32) = 200

Now we can compute the division and addition as follows:

13312 / 200 = 66.56 (rounded to two decimal places)

44232 + 66.56 = 44298.56

Finally, multiply by 5:

44298.56 * 5 = 221492.80

So, the result is 221492.80.

Gemma:7b

It runs forever and does not give the correct output:

User (to Assistant):

What is (44232 + 13312 / (232 - 32)) * 5?

USING AUTO REPLY... Assistant (to User):

(44232 + 13312 / (232 - 32)) * 5 = 145314.77

User (to Assistant):

USING AUTO REPLY... Assistant (to User):

User (to Assistant):

cannin commented 3 months ago

@haozhuang0000 I had the same issue with llama3. I used litellm (https://github.com/BerriAI/litellm) with Ollama, and then function calls worked.

config_list = [
        {
            "model": "ollama_chat/llama3:8b", 
            "api_key": "NA", 
            "base_url": "http://0.0.0.0:4000", 
        }
    ]
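If you try this route, the proxy behind that base_url can be started with litellm's CLI, roughly as follows (a sketch; the model name and port are assumptions matching the config above):

```shell
# Install litellm with the proxy extra, then put it in front of Ollama.
pip install 'litellm[proxy]'

# Serve llama3:8b (already pulled in Ollama) on port 4000 as an
# OpenAI-compatible endpoint that AutoGen's config_list can point at.
litellm --model ollama_chat/llama3:8b --port 4000
```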
haozhuang0000 commented 3 months ago

@haozhuang0000 I had the same issue with llama3. I used litellm (https://github.com/BerriAI/litellm) with ollama and then functions worked.

config_list = [
        {
            "model": "ollama_chat/llama3:8b", 
            "api_key": "NA", 
            "base_url": "http://0.0.0.0:4000", 
        }
    ]

Hi @cannin,

Thanks for your help. I used litellm, but it gives me an error.

USING AUTO REPLY... Assistant (to User):

Suggested tool call (call_46327cd5-545d-4a88-94e9-afb879dc3979): calculator Arguments: {"a": 44232, "b": 13312, "operator": "*"}


EXECUTING FUNCTION calculator... User (to Assistant):

User (to Assistant):

Response from calling tool (call_46327cd5-545d-4a88-94e9-afb879dc3979) 588816384


USING AUTO REPLY... Assistant (to User):

Suggested tool call (call_a180c6ce-ccc2-428f-9aab-f65edbee0d36): calculator Arguments: {"a": 44232, "b": 13312, "operator": "*"}


EXECUTING FUNCTION calculator... User (to Assistant):

User (to Assistant):

Response from calling tool (call_a180c6ce-ccc2-428f-9aab-f65edbee0d36) 588816384


USING AUTO REPLY... Assistant (to User):

Suggested tool call (call_fbc733ab-a4f1-42ce-89dc-cf2bb86c6272): calculator Arguments: {"a": 44232, "b": 13312, "operator": "*"}


EXECUTING FUNCTION calculator... User (to Assistant):

User (to Assistant):

Response from calling tool (call_fbc733ab-a4f1-42ce-89dc-cf2bb86c6272) 588816384


openai.InternalServerError: Error code: 500 - {'error': {'message': "'arguments'", 'type': None, 'param': None, 'code': 500}}
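The 500 error's message ("'arguments'") suggests the backend returned a tool call whose arguments field was missing or not valid JSON. For reference, a sketch (not AutoGen's actual parsing code, and a sample payload for illustration) of what a well-formed OpenAI-style tool call should carry:

```python
import json

# A well-formed OpenAI-style tool call: "arguments" must be present and
# must be a JSON-encoded string, not a dict or a bare value.
tool_call = {
    "id": "call_46327cd5-545d-4a88-94e9-afb879dc3979",
    "type": "function",
    "function": {
        "name": "calculator",
        "arguments": '{"a": 232, "b": 32, "operator": "-"}',
    },
}

args = json.loads(tool_call["function"]["arguments"])
print(args["a"] - args["b"])  # 200
```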

scruffynerf commented 3 months ago

I'll follow up once I have a PR for this (I have working code, just deciding the best way to make adding this easiest)

haozhuang0000 commented 3 months ago

I'll follow up once I have a PR for this (I have working code, just deciding the best way to make adding this easiest)

@scruffynerf Thanks!

CorrM commented 3 months ago


Improve the 'chatbot' system_message prompt and use the latest litellm version.