phidatahq / phidata

Build AI Agents with memory, knowledge, tools and reasoning. Chat with them using a beautiful Agent UI.
https://docs.phidata.com
Mozilla Public License 2.0

Agent fails to call function tools when stream=True #1258

Open ZQFORWARD opened 1 week ago

ZQFORWARD commented 1 week ago

When I build an agent that uses tools and call print_response with stream=False, it works. But when I set stream=True, the function tools are not called correctly.

manthanguptaa commented 1 week ago

Hey @ZQFORWARD, can you provide the code snippet that gave you this error?

ZQFORWARD commented 1 week ago
import os
import uuid
import json
import pickle
import numpy as np
from textwrap import dedent
from phi.utils.log import logger
from phi.tools import Toolkit
from phi.agent import Agent
from phi.model.ollama import Ollama

def generate_random_file_path(directory, extension='.pkl'):
    """
    Generate random file path
    :param directory: str: target file folder
    :param extension: str: file extension name, default: .pkl
    :return: str: file path
    """
    os.makedirs(directory, exist_ok=True)
    random_file_name = f"{uuid.uuid4()}{extension}"
    return os.path.join(directory, random_file_name)

def save_data2local(data, directory="jarvis/temp_data"):
    """
    Save data to target file path
    :param data: data need saved
    :param directory: folder path
    :return: saved file path
    """
    file_path = generate_random_file_path(directory)

    # save data
    try:
        with open(file_path, 'wb') as file:
            pickle.dump(data, file)
        print(f"save data success: {file_path}")
        return file_path

    except Exception as e:
        print(f"save data error: {e}")
        return None

def load_data_from_local(file_path):
    """
    Load data from target path
    :param file_path:
    :return: loaded data
    """
    if os.path.exists(file_path):
        try:
            with open(file_path, 'rb') as file:
                data = pickle.load(file)
            print(f"load data from: {file_path}")
            return data
        except Exception as e:
            print(f"load data failed: {e}")
            return None
    else:
        print(f"file: {file_path} does not exist")
        return None

class ExtractInfoTools(Toolkit):
    def __init__(self):
        super().__init__(name="extract_information")
        self.register(self.extract_info)

    def extract_info(self, caseId):
        """
        Extract select case time
        :param caseId: str: selected case ID
        :return: str: JSON string of the result
        """

        res = {'pointId': "test", 'features': 202, 'start_time': "2023-10-01",
                'end_time': "2023-10-07"}
        result = {"Operation": "extract info", "result": res}
        return json.dumps(result)

class GetTrendTools(Toolkit):
    def __init__(self):
        super().__init__(name="get fe trend")
        self.register(self.get_fe_trend)

    def get_fe_trend(self, pointId, features, start_time, end_time):
        """
        Get target point feature trend data

        :param pointId: str: point ID
        :param features: list: feature IDs
        :param start_time: str: start time
        :param end_time: str: end time
        :return: str: JSON string of the result
        """
        if isinstance(features, int):
            features = [features]
        assert isinstance(features, list), "features must be list"
        fe_data = np.random.random((1, 16 * 1024))
        save_path = save_data2local(fe_data)

        return json.dumps({"operation": "get fe trend", "result": {"file_path": save_path}})

class TrendAnalysisTools(Toolkit):
    def __init__(self):
        super().__init__(name="data_trend_analysis")
        self.register(self.do_trend_analysis)

    def do_trend_analysis(self, file_path):
        """
        Use this function to read the saved trend data, analyze it, and plot a figure.
        :param file_path: str: saved trend data file path
        :return: str: JSON string of the result
        """
        data = load_data_from_local(file_path)

        logger.info(f"file_path: {file_path}")

        fig_url = "trend.html"
        logger.info(f"data trend analysis success, and fig url is {fig_url}")

        return json.dumps({'operation': 'data_trend_analysis', 'output': 'This trend is normal', 'fig_url': fig_url})

if __name__ == "__main__":
    llm = Ollama(id="qwen2.5:7b", host="http://localhost:11434")
    tools = [ExtractInfoTools(), GetTrendTools(), TrendAnalysisTools()]

    description = dedent(
        """\
        The primary rule is that you are not allowed to fabricate any file paths; all file paths must come from
        the input or from other tools' returns.
        You have access to a set of tools and a team of AI Assistants at your disposal.
        Your goal is to assist the user and the diagnosis engineer in the best way possible.
        You should try your best to answer questions in Chinese.
        Carefully check a tool's input parameters before calling it, as well as the return content of other
        tools, since some tools' return contents are the input of another tool.
        When a tool's input parameter is a file path, carefully confirm that the file path is the result
        returned by the relevant tool, and be sure not to fabricate the file path.
        If the user asks to analyze a case and gives a case ID, first ALWAYS use the `extract_info` tool
        to extract the case information.
        Then get the case trend data using the `get_fe_trend` tool.
        After getting the trend data, analyze it using the `do_trend_analysis` tool.
        Do NOT drop the fig URL when responding to the user, even if the analysis is normal.
        """
    )
    instructions = [
        "The primary rule is that you are not allowed to fabricate any file paths; all file paths must come "
        "from the input or from other tools' returns.",
        "Carefully check a tool's input parameters before calling it, as well as the return content of "
        "other tools, since some tools' return contents are the input of another tool.",
        "If the user asks to analyze a case and gives a case ID, first ALWAYS use the "
        "`extract_info` tool to extract the case information.",
        "Then use the `get_fe_trend` tool to get the case trend data and save it to the target file path.",
        "When a tool's input parameter is a file path, carefully confirm that the file path is the result "
        "returned by the relevant tool, and be sure not to fabricate the file path.",
        "After getting the trend data, analyze it using the `do_trend_analysis` tool.",
        "Always use tables to display results.",
        "Do NOT drop the fig URL when responding to the user, even if the analysis is normal.",
    ]
    agent = Agent(
        model=llm,
        name="TestAgent",
        storage=None,
        description=description,
        instructions=instructions,
        tools=tools,
        show_tool_calls=False,
        add_references_to_prompt=True,
        markdown=True,
        add_datetime_to_instructions=True,
        debug_mode=False,
        reasoning=False,
        structured_outputs=True
    )
    agent.print_response("Analysis This Case: ababaabba", stream=True)

@manthanguptaa Hey, here is my test code. With stream=False it works; when I change it to stream=True it does not work and the agent replies 'case Id not given'.

jacobweiss2305 commented 1 week ago

@ZQFORWARD the main issue here is that we haven't built function calling for Qwen on Ollama. Each model has its own function-calling protocol. Can you try Llama 3.1?
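For reference, a minimal sketch of that swap, keeping the rest of the script above unchanged (this assumes a llama3.1 tag has already been pulled locally with `ollama pull llama3.1`):

# hypothetical swap: only the model line of the original script changes
from phi.model.ollama import Ollama

llm = Ollama(id="llama3.1", host="http://localhost:11434")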

I also noticed a few other issues with the Agent:

ZQFORWARD commented 6 days ago

@jacobweiss2305 Thank you for your reply! It is confusing that it seems to work properly when the stream parameter is set to False. Perhaps there are some other issues.

ZQFORWARD commented 6 days ago

@jacobweiss2305 After switching to the llama3.1:8b model, the situation seemed even worse with the same code and configuration.

manthanguptaa commented 5 days ago

@ZQFORWARD, what results are you getting with llama 3.1:8b? Are the results bad, or are you not able to run the code with stream=True?

ZQFORWARD commented 3 days ago

@manthanguptaa the results are bad; it can't even call the tools correctly.

manthanguptaa commented 3 days ago

@ZQFORWARD llama3.1:8b isn't a very powerful model and only hits roughly 30-40% correctness. It will struggle to give you good results. You might want to try another model such as OpenAI's gpt-4o, or llama3.1:70b, which will give decent results, though as a local-first model the results might still not be up to the mark.
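If you want to sanity-check the agent against a hosted model, a minimal sketch of the gpt-4o swap (assuming phidata's OpenAI wrapper and an OPENAI_API_KEY set in the environment):

# assumes OPENAI_API_KEY is exported in the environment
from phi.model.openai import OpenAIChat

llm = OpenAIChat(id="gpt-4o")  # then pass model=llm to Agent as before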