crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

SerperDevTool doesn't work without OpenAI #322

Closed · varskann closed this issue 3 days ago

varskann commented 5 months ago

Reproducing the example at https://docs.crewai.com/core-concepts/Tasks/#creating-a-task-with-tools

The code works fine with OpenAI models, but not when trying to use Llama2 with Ollama.

(screenshot of the failing run, 2024-03-06)

Code snippet to load Llama2:

from crewai import Agent
from langchain_community.llms import Ollama

# To Load Local models through Ollama
llm_llama = Ollama(model="llama2")

research_agent = Agent(
    role='Researcher',
    goal='Find and summarize the latest AI news',
    backstory="""You're a researcher at a large company.
    You're responsible for analyzing data and providing insights
    to the business.""",
    verbose=False,
    llm=llm_llama
)

Even SerperDevTool fails to fetch news when using the Llama2 model:

(screenshot of the SerperDevTool failure output, 2024-03-06)
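
For anyone trying to reproduce this end to end, a minimal sketch of the wiring (the docs example condensed, with the local LLM swapped in; it assumes SERPER_API_KEY is set and reuses the names from the snippets in this thread):

    import os
    os.environ["SERPER_API_KEY"] = "..."  # serper.dev API key

    from crewai import Agent, Task, Crew
    from crewai_tools import SerperDevTool
    from langchain_community.llms import Ollama

    # local Llama2 through Ollama
    llm_llama = Ollama(model="llama2")
    search_tool = SerperDevTool()

    research_agent = Agent(
        role='Researcher',
        goal='Find and summarize the latest AI news',
        backstory="You're a researcher at a large company, responsible for analyzing data and providing insights to the business.",
        tools=[search_tool],
        llm=llm_llama,
        verbose=True
    )

    task = Task(
        description='Find and summarize the latest AI news',
        expected_output='A bullet list summary of the top 5 most important AI news',
        agent=research_agent
    )

    crew = Crew(agents=[research_agent], tasks=[task], verbose=True)
    print(crew.kickoff())
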
torvicvasil commented 5 months ago

I'm having the same issue. It works with the OpenAI API but not with llama2 locally.

maximinus commented 5 months ago

I think I have the same issue. With a local LLM the tool is used incorrectly: either it hallucinates the tool name or it hallucinates the arguments. For example, a simple tool:

from crewai_tools import tool

@tool('pythonrunner')
def pythonrunner(python_code: str) -> str:
    """
    This tool can run Python code. Pass the code and this tool will run it as a file on the command line.
    The returned value will be the output of the Python code, so if you pass print('Hello') it will give you back Hello.
    This tool will put your code into a file and then run the file as python your_code_file.py in a virtual machine.
    So pass this tool the Python code as you would expect it to look in a file. You will have to indent the code properly.
    If the code has an error, you will get the expected Python error output.
    If the code has no output, you will get an empty string.
    """
    # dummy function for now
    return ''

And it logs things like:

Action: run_prime_numbers
Action Input: {'function': 'prime_numbers(10)'}. 

Action: Create an empty dictionary for storing functions
Action Input: {} 

Action: ComputePrimes
Action Input: {"function_name": "ComputePrimes", "args": [], "body": """}
vasiliyeskin commented 5 months ago

You must set llama2 as the agent's "function_calling_llm". See the example below:

        import os
        os.environ["SERPER_API_KEY"] = "111!?!111" # serper.dev API key

        from crewai import Agent, Task, Crew
        from crewai_tools import SerperDevTool
        from crewai_tools import tool

        from langchain_community.llms import Ollama
        ollama_llama2_13b = Ollama(model="llama2:13b")

        @tool('search_tool')
        def search(search_query: str):
            """Search the web for information on a given topic"""
            response = SerperDevTool().run(search_query)
            return response

        search_tool = search
        # search_tool = SerperDevTool()

        research_agent = Agent(
            role='Researcher',
            goal='Find and summarize the latest AI news',
            backstory="""You're a researcher at a large company.
            You're responsible for analyzing data and providing insights
            to the business.""",
            verbose=True,
            llm = ollama_llama2_13b,
            tools=[search_tool],
            function_calling_llm=ollama_llama2_13b
        )

        task = Task(
          description='Find and summarize the latest AI news',
          expected_output='A bullet list summary of the top 5 most important AI news',
          agent=research_agent
        )

        crew = Crew(
            agents=[research_agent],
            tasks=[task],
            verbose=True
        )

        result = crew.kickoff()
        print(result)
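
(A quick sanity check before wiring the tool into an agent is to call it directly with a hard-coded query, using the same call the wrapper above makes; if this returns results, the remaining failures are in how the agent formats the tool call, not in the Serper setup.)

        # confirm the Serper key and tool work outside the agent loop
        print(SerperDevTool().run("latest AI news"))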

Close issue

ging3rtastic commented 5 months ago

I am having the same problem.

I used the function_calling_llm parameter as well, and tried both Llama2 and Mistral.

Sometimes my first agent gets it right, but then the second agent, which uses the Serper tool, cannot figure out how to use it: both the Action and the Action Input are wrong.

bluesockets commented 5 months ago

I'm also having the same problem, and I'm using Ollama -> Mistral.

I used the modified solution mentioned above with the custom LLM function call, but it ends up searching for an unserialized JS object.

It looks like the query string itself isn't being passed into the annotated search tool, so it Google-searches for '[object, Object]'. When I hard-code the search string it seems to work, so my guess is that the code that calls the search tool isn't parsing the search text correctly.

Can anyone explain how this function (@tool) is called? I can take a crack at fixing it, but I am not quite sure how these calls are stitched together.
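
(For context on the general mechanics: an @tool-style decorator typically records the function's name, docstring, and parameters so the agent loop can describe the tool to the model, and the loop then parses the model's Action Input into arguments before calling the function. The sketch below is purely an illustration of that pattern, not CrewAI's actual implementation; the '[object, Object]' symptom points at that Action Input parsing step being handed something other than the plain query string.)

    import inspect
    import json

    def tool(name):
        """Illustrative decorator only, not CrewAI's implementation: it attaches
        metadata for the agent loop and a run() helper that parses Action Input."""
        def wrap(fn):
            params = list(inspect.signature(fn).parameters)

            def run(action_input):
                # the agent loop is expected to pass the raw Action Input string;
                # passing an already-stringified object here is what produces
                # queries like '[object, Object]'
                try:
                    parsed = json.loads(action_input)
                except (TypeError, ValueError):
                    parsed = None
                kwargs = parsed if isinstance(parsed, dict) else {params[0]: str(action_input)}
                return fn(**kwargs)

            fn.tool_name = name
            fn.description = inspect.getdoc(fn) or ""
            fn.run = run
            return fn
        return wrap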

bluesockets commented 4 months ago

I got it to work with the example shown above and with the openhermes model. I had to downgrade crewai to 0.22.4. It seems the model is given the tool code, from which it attempts to infer property usage; it gets this wrong occasionally, so the behaviour is non-deterministic. I checked the changes between 0.22.4 and 0.22.5 and it's only about two lines of code. I have no idea why such a small change would impact this functionality. Weird!
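
(The downgrade itself is just a version pin, e.g.:)

    pip install crewai==0.22.4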

ging3rtastic commented 4 months ago

Replying to bluesockets' comment above about getting it to work with openhermes after downgrading crewai to 0.22.4:

That's great, I am going to give this a try on my side. I wonder if there is a better way to give the agents a breakdown of how to use the tools? Would you be able to share which lines of code were changed? Just for interest's sake.

bluesockets commented 4 months ago

I had to fix the spelling of "co-worker" -> "coworker"; that allowed them to collaborate. I also changed the model to openhermes via Ollama. I also used the code from the stock examples (https://github.com/joaomdmoura/crewAI-examples/blob/main/stock_analysis/tools/search_tools.py#L9) and applied that structure to my scripts (roughly the shape sketched below). I don't think that part matters much, as it's basically the same as the embedded solution mentioned above.

Here's the co-worker issue: https://github.com/joaomdmoura/crewAI/issues/351
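
(Roughly, the structure from that search_tools.py example is a plain function decorated as a tool that calls Serper's REST endpoint directly and formats the top organic results. The sketch below is a paraphrase of that pattern, not the exact file.)

    import json
    import os

    import requests
    from crewai_tools import tool

    @tool('search_internet')
    def search_internet(query: str) -> str:
        """Search the internet for a given query and return the top results."""
        response = requests.post(
            "https://google.serper.dev/search",
            headers={
                "X-API-KEY": os.environ["SERPER_API_KEY"],
                "content-type": "application/json",
            },
            data=json.dumps({"q": query}),
        )
        results = response.json().get("organic", [])[:4]
        return "\n\n".join(
            f"Title: {r.get('title')}\nLink: {r.get('link')}\nSnippet: {r.get('snippet')}"
            for r in results
        )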

vasiliyeskin commented 4 months ago


Great! It works for me with the "openhermes" LLM, but "llama2:13b" gives inconsistent results.

GIJosh2687 commented 4 months ago

(quoting vasiliyeskin's function_calling_llm example from earlier in the thread)

I applied your suggestion, and my local LLM now seems to be working correctly. Previously, Serper would only engage a fraction of the time; the rest of the time I was getting the errors from the original report. Thank you!

github-actions[bot] commented 1 week ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 3 days ago

This issue was closed because it has been stalled for 5 days with no activity.