microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

Issue tool calling using ollama #4225

Closed: Darkbelg closed this issue 1 hour ago

Darkbelg commented 2 hours ago

What happened?

Trying to use tool calling with Ollama. My setup might be a bit strange: all my Python code lives in a Docker image, while Ollama is installed outside of Docker, on the host. It works for plain conversations, but the moment I introduce tools it gives me an error, and I can't figure out a fix. I am trying an adjusted calculator tool-call example from the tutorial. When I run my script I get this:

docker build --no-cache -t autogen-test .
docker run --add-host=host.docker.internal:host-gateway autogen-test
Traceback (most recent call last):
  File "/myapp/ollama.py", line 33, in <module>
    assistant = ConversableAgent(
                ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 170, in __init__
    self._validate_llm_config(llm_config)
  File "/usr/local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 274, in _validate_llm_config
    self.client = None if self.llm_config is False else OpenAIWrapper(**self.llm_config)
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/autogen/oai/client.py", line 503, in __init__
    self._register_default_client(config, openai_config)  # could modify the config
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/autogen/oai/client.py", line 613, in _register_default_client
    raise ImportError("Please install `ollama` and `fix-busted-json` to use the Ollama API.")
ImportError: Please install `ollama` and `fix-busted-json` to use the Ollama API.
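
Note that the failure is an ImportError raised before any request reaches Ollama, so connectivity is not the issue; reachability from the container to the host's Ollama can be ruled out separately with something along these lines (Ollama's root endpoint replies with a short "Ollama is running" status):

docker run --add-host=host.docker.internal:host-gateway autogen-test \
  python -c "import urllib.request; print(urllib.request.urlopen('http://host.docker.internal:11434').read())"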

What did you expect to happen?

The calculator tool to be called.

How can we reproduce it (as minimally and precisely as possible)?

Here is the code I use (some of the installed packages are needed by other scripts). Dockerfile:

FROM python:3.11-slim

WORKDIR /myapp

RUN pip install --no-cache-dir "pyautogen[ollama]>=0.2.38" \
    && pip install --no-cache-dir requests \
    && pip install --no-cache-dir ollama \
    && pip install --no-cache-dir fix-busted-json

COPY ollama.py .

CMD ["python", "ollama.py"]

ollama.py:

from typing import Annotated, Literal

from autogen import ConversableAgent

Operator = Literal["+", "-", "*", "/"]

# Tool implementation: apply one of the four basic operators to two integers.
def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    if operator == "+":
        return a + b
    elif operator == "-":
        return a - b
    elif operator == "*":
        return a * b
    elif operator == "/":
        return int(a / b)
    else:
        raise ValueError("Invalid operator")

config_list = [
    {
        "model": "qwen2.5-coder:14b",
        "api_type": "ollama",
        "client_host": "http://host.docker.internal:11434",
        "native_tool_calls": False
    }
]

# Let's first define the assistant agent that suggests tool calls.
assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant. "
    "You can help with simple calculations. "
    "Return 'TERMINATE' when the task is done.",
    llm_config={"config_list": config_list},
)

# The user proxy agent is used for interacting with the assistant agent
# and executes tool calls.
user_proxy = ConversableAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
)

# Register the tool signature with the assistant agent.
assistant.register_for_llm(name="calculator", description="A simple calculator")(calculator)

# Register the tool function with the user proxy agent.
user_proxy.register_for_execution(name="calculator")(calculator)

chat_result = user_proxy.initiate_chat(assistant, message="What is (44232 + 13312 / (232 - 32)) * 5?")

On the host, start Ollama and pull the model:

export OLLAMA_HOST=0.0.0.0:11434
ollama serve
ollama pull qwen2.5-coder:14b

Then build and run the container:

docker build --no-cache -t autogen-test .
docker run --add-host=host.docker.internal:host-gateway autogen-test

AutoGen version

0.2.38

Which package was this bug in

Core

Model used

ollama qwen2.5-coder:14b

Python version

3.11

Operating system

Ubuntu

Any additional info you think would be helpful for fixing this bug

No response

ekzhu commented 2 hours ago

Can you uninstall pyautogen and install autogen-agentchat~=0.2? I think this issue has been fixed in the latest version.
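
The swap would look something like:

pip uninstall -y pyautogen
pip install "autogen-agentchat~=0.2"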

Darkbelg commented 1 hour ago

I named the file ollama.py because I was testing Ollama... Apparently you're not supposed to do that. I renamed it to ollama-test.py and now I have no errors. Thank you for the quick response @ekzhu
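
For anyone else who lands here: Python puts the script's own directory first on sys.path, so a script named ollama.py shadows the installed ollama package, and the framework's internal import ollama resolves to your script instead of the library. That is what surfaces as the misleading "Please install ollama" ImportError even though the package is installed. A quick, side-effect-free way to check which file a module name resolves to (a minimal diagnostic sketch):

import importlib.util

spec = importlib.util.find_spec("ollama")
print(spec.origin)
# /myapp/ollama.py                       -> shadowed by the local script
# .../site-packages/ollama/__init__.py   -> the real package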