crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
19.34k stars 2.68k forks

Example is incomplete #20

Closed iplayfast closed 8 months ago

iplayfast commented 8 months ago

I set up your Readme example with Ollama and things seemed to work. But it was looking for a 'latest trends' tool which isn't available.

python crew.py 

Working Agent: Researcher
Starting Task: Investigate the latest AI trends ...

> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Use the "Latest Trends" tool to gather information on the latest AI trends.
Action Input: NoneUse the "Latest Trends" tool to gather information on the latest AI trends. is not a valid tool, try one of [].
Thought: Do I need to use a tool? No
Final Answer: The "Latest Trends" tool is not available. You can investigate the latest AI trends by researching online, reading industry publications, and attending conferences and webinars related to AI.

> Finished chain.
Task output: The "Latest Trends" tool is not available. You can investigate the latest AI trends by researching online, reading industry publications, and attending conferences and webinars related to AI.

Working Agent: Writer
Starting Task: Write a blog post on AI advancements ...

> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Use the "Latest Trends" tool to find out about the latest AI trends
Action Input: The Latest Trends toolUse the "Latest Trends" tool to find out about the latest AI trends is not a valid tool, try one of [].
Thought: Do I need to use a tool? No
Final Answer: There is no Latest Trends tool available, you can research the latest AI trends by reading industry publications, attending conferences and webinars related to AI, or using other tools such as Google Trends or social media monitoring tools.

> Finished chain.
Task output: There is no Latest Trends tool available, you can research the latest AI trends by reading industry publications, attending conferences and webinars related to AI, or using other tools such as Google Trends or social media monitoring tools.

The README example doesn't mention this, so I'm not sure where to go from here. The empty list in 'try one of []' suggests no tools are actually registered with the agents.

iplayfast commented 8 months ago

This is my attempt at recreating your example with Ollama instead of OpenAI. I've defined latest_trends as a tool and imported duckduckgo_search, but it still seems borked. I'm stuck, as I don't know what's wrong (and the doc AI isn't helping).

import os
from crewai import Agent, Task, Crew, Process
from langchain.llms import Ollama

import duckduckgo_search
from langchain.tools import tool
import requests
from bs4 import BeautifulSoup

@tool
def latest_trends(topic: str) -> str:
    """
    Search news.google.com for trends on a given topic.
    This function takes a topic as a string, queries news.google.com for trends related to this topic,
    and returns the trends found as a string.
    """
    print(f"latest_trends tool called with topic: {topic}")  # Debugging statement

    # Format the topic for URL encoding
    formatted_topic = topic.replace(" ", "%20")

    # Construct the search URL
    url = f"https://news.google.com/search?q=Trends%20on%20{formatted_topic}"

    # Perform the web request
    response = requests.get(url)

    # Check if the request was successful
    if response.status_code != 200:
        return "Failed to retrieve data from news.google.com"

    # Parse the HTML content
    soup = BeautifulSoup(response.text, 'html.parser')

    # Extract trend data
    # The following is a placeholder; you'll need to adjust it based on the page structure
    trends = []
    for article in soup.find_all("article"):
        title = article.find("h3").get_text() if article.find("h3") else "No title"
        summary = article.find("p").get_text() if article.find("p") else "No summary"
        trends.append(f"Title: {title}\nSummary: {summary}\n")

    # Format the extracted trends
    trends_output = "\n".join(trends) if trends else "No trends found for the topic."

    return trends_output

ollama_mistral = Ollama(model="mistral")
# Pass Ollama Model to Agents: When creating your agents within the CrewAI framework, you can pass the Ollama model as an argument to the Agent constructor. For instance:

# Define your agents with roles and goals
researcher = Agent(
  role='Researcher',
  goal='Discover new insights',
  backstory="You're a world-class researcher working at a major data science company",
  verbose=True,
  allow_delegation=False,
  llm=ollama_mistral, # Ollama model passed here
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
  tools=[latest_trends],  # note: the raw duckduckgo_search module is not a LangChain tool, so it can't be passed here
)
writer = Agent(
  role='Writer',
  goal='Create engaging content',
  backstory="You're a famous technical writer, specializing in writing data-related content",
  verbose=True,
  allow_delegation=False,
  llm=ollama_mistral, # Ollama model passed here
)

# Create tasks for your agents
task1 = Task(description='Investigate the latest AI trends', agent=researcher)
task2 = Task(description='Write a blog post on AI advancements', agent=writer)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=True, # Crew verbose mode will log which tasks are being worked on
  process=Process.sequential # Tasks are executed one after the other, and each task's output is passed as extra context to the next
)

# Get your crew to work!
result = crew.kickoff()
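One small hardening for the latest_trends tool above: topic.replace(" ", "%20") only encodes spaces. The standard library's urllib.parse.quote handles other reserved characters as well (a minimal sketch, independent of crewAI; the sample topic string is made up):

```python
from urllib.parse import quote

topic = "AI & ML trends, 2024"
# quote() percent-encodes spaces plus reserved characters like '&' and ','
formatted_topic = quote(topic)
url = f"https://news.google.com/search?q=Trends%20on%20{formatted_topic}"
```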

fxtoofaan commented 8 months ago

Is there possibly an existing LangChain "latest trends" agent on the LangChain website or in the templates? I think these are LangChain agents.

greysonlalonde commented 8 months ago

@iplayfast I was able to run with:

import json
import os

import requests
from langchain.tools import tool
from crewai import Agent, Task, Crew, Process
from langchain.llms import Ollama

@tool("Search the internet")
def latest_trends(query):
    """Searches the internet for a given query and returns the top results."""
    top_result_to_return = 4
    url = "https://google.serper.dev/search"
    payload = json.dumps({"q": query})
    headers = {
        "X-API-KEY": os.environ["SERPER_API_KEY"],
        "content-type": "application/json",
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    if "organic" not in response.json():
        return "Sorry, I couldn't find anything about that; there could be an error with your Serper API key."
    else:
        results = response.json()["organic"]
        string = []
        for result in results[:top_result_to_return]:
            try:
                string.append(
                    "\n".join(
                        [
                            f"Title: {result['title']}",
                            f"Link: {result['link']}",
                            f"Snippet: {result['snippet']}",
                            "\n-----------------",
                        ]
                    )
                )
            except KeyError:
                pass

        return "\n".join(string)

ollama_mistral = Ollama(model="mistral")

researcher = Agent(
    role="Researcher",
    goal="Discover new insights",
    backstory="You're a world-class researcher working at a major data science company",
    verbose=True,
    allow_delegation=False,
    llm=ollama_mistral,
    tools=[latest_trends],
)
writer = Agent(
    role="Writer",
    goal="Create engaging content",
    backstory="You're a famous technical writer, specializing in writing data-related content",
    verbose=True,
    allow_delegation=False,
    llm=ollama_mistral,
)

task1 = Task(description="Investigate the latest AI trends", agent=researcher)
task2 = Task(description="Write a blog post on AI advancements", agent=writer)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=True,
    process=Process.sequential,
)

result = crew.kickoff()
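As a side note, the result-formatting loop in the latest_trends tool above can be exercised offline against a mocked response. The dictionary below is hypothetical data shaped like the "organic" field of a serper.dev reply; no API key is involved:

```python
# Mocked Serper-style payload (hypothetical data, for illustration only)
fake_response = {
    "organic": [
        {
            "title": "AI trends 2024",
            "link": "https://example.com/a",
            "snippet": "Generative AI everywhere.",
        },
        {"title": "Edge ML", "link": "https://example.com/b"},  # no snippet -> skipped
    ]
}

top_result_to_return = 4
string = []
for result in fake_response["organic"][:top_result_to_return]:
    try:
        string.append(
            "\n".join(
                [
                    f"Title: {result['title']}",
                    f"Link: {result['link']}",
                    f"Snippet: {result['snippet']}",
                    "\n-----------------",
                ]
            )
        )
    except KeyError:
        pass  # entries missing a field are skipped, as in the tool above

output = "\n".join(string)
```

Entries missing any of the three fields raise a KeyError before the append, so only complete results make it into the output.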

joaomdmoura commented 8 months ago

I had to modify @greysonlalonde's example just a bit, but it also works for me; here is a gist with the file and the log. Also, I just cut a new version (0.1.14) that should make it easier for models to adhere to the expected format. It's not perfect yet, but it's a move in the right direction.

I also updated the README with a better task description.

iplayfast commented 8 months ago

@joaomdmoura I got your version to run, but it gagged on "X-API-KEY": os.environ["SERPER_API_KEY"]. I'd really like to not have to use keys for every little thing I do on the Internet. Is there a way around this?

iplayfast commented 8 months ago

The original readme code is

from langchain.llms import Ollama
ollama_openhermes = Ollama(model="agent")
# Pass Ollama Model to Agents: When creating your agents within the CrewAI framework, you can pass the Ollama model as an argument to the Agent constructor. For instance:

local_expert = Agent(
  role='Local Expert at this city',
  goal='Provide the BEST insights about the selected city',
  backstory="""A knowledgeable local guide with extensive information
  about the city, its attractions and customs""",
  tools=[
    SearchTools.search_internet,
    BrowserTools.scrape_and_summarize_website,
  ],
  llm=ollama_openhermes, # Ollama model passed here
  verbose=True
)

It talks about SearchTools and BrowserTools, where are those?

iplayfast commented 8 months ago

I've had some luck adding this as a tool.


@tool("Search the internet")
def latest_trends(query):
    """Searches the internet for a given topic and returns relevant results."""
    from googlesearch import search  # pip install google -- keyless search, no API token needed
    # Collect the top 10 result URLs
    results = []
    for j in search(query, tld="co.in", num=10, stop=10, pause=2):
        results.append(j)
    return "\n".join(results)  # join the URLs into a single string for the agent

greysonlalonde commented 8 months ago

@iplayfast I'm glad that worked for you, although I would recommend using an API token if you need regular / reliable web access.

joaomdmoura commented 8 months ago

@iplayfast that README example is more of a north star; you can get fully working examples here. For each of those examples you will find a tools folder containing the tools, including SearchTools.search_internet and BrowserTools.scrape_and_summarize_website.

In this case, BrowserTools uses browserless.io and SearchTools uses serper.dev.

Because those tools use external APIs to do some work, it's necessary to use API keys to interact with them. They are separate products built by other people and not directly related to crewAI; they are just examples of how one could build a tool to integrate with any system.

I do hear that there is demand for ready-made tools, though, so maybe I'll start a separate package for that or contribute back to the LangChain integrations. Most likely, the majority of those tools would still need tokens for each system they interact with.

joaomdmoura commented 8 months ago

I'll work on updating the README example to make it self-contained, easy to use, and replicable.

PiotrEsse commented 8 months ago

Because those tools use external APIs to do some work, it's necessary to use API keys to interact with them; they are separate products built by other people, not directly related to crewAI, and just examples of how one could build a tool to integrate with any system.

Is it possible to use a different service than Browserless, since they have dropped their free tier? (I can't find any information about a free tier now, and I don't want to be charged 200 USD just for testing tools.)

Adamchanadam commented 8 months ago

Because those tools use external APIs to do some work, it's necessary to use API keys to interact with them; they are separate products built by other people, not directly related to crewAI, and just examples of how one could build a tool to integrate with any system.

Is it possible to use a different service than Browserless, since they have dropped their free tier? (I can't find any information about a free tier now, and I don't want to be charged 200 USD just for testing tools.)

I faced the same issue and found another option: I'm using ScrapingAnt (they offer 10,000 free credits, which is around 1,000 requests) to replace Browserless for scraping and summarizing website content.

First of all, install the package: 'pip install scrapingant-client'. I put my SCRAPINGANT_API_KEY into an environment variable and read it with os.environ.get().
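For reference, the setup amounts to the following (the key value is a placeholder; SCRAPINGANT_API_KEY is the variable name the code below reads):

```shell
# Install the ScrapingAnt client and expose the API key the tool reads
pip install scrapingant-client
export SCRAPINGANT_API_KEY="your-key-here"   # placeholder value
```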

Update the BrowserTools class by modifying 'browser_tools.py' in your example project's folder, as below:

import os
import json
from scrapingant_client import ScrapingAntClient
from crewai import Agent, Task
from langchain.tools import tool
from unstructured.partition.html import partition_html

class BrowserTools():

    @tool("Scrape website content")
    def scrape_and_summarize_website(website):
        """Useful to scrape and summarize a website content using ScrapingAnt"""
        scrapingant_api_token = os.environ.get('SCRAPINGANT_API_KEY')
        if not scrapingant_api_token:
            raise ValueError("ScrapingAnt API key is not set in environment variables.")

        client = ScrapingAntClient(token=scrapingant_api_token)

        # Scrape the website content using ScrapingAnt
        response = client.general_request(website)
        elements = partition_html(text=response.content)
        content = "\n\n".join([str(el) for el in elements])
        content = [content[i:i + 8000] for i in range(0, len(content), 8000)]

        summaries = []
        for chunk in content:
            agent = Agent(
                role='Principal Researcher',
                goal='Do amazing researches and summaries based on the content you are working with',
                backstory="You're a Principal Researcher at a big company and you need to do a research about a given topic.",
                allow_delegation=False)
            task = Task(
                agent=agent,
                description=f'Analyze and summarize the content below, make sure to include the most relevant information in the summary, return only the summary nothing else.\n\nCONTENT\n----------\n{chunk}'
            )
            summary = task.execute()
            summaries.append(summary)

        return "\n\n".join(summaries)
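As a quick sanity check, the 8,000-character chunking used in scrape_and_summarize_website behaves like this on its own (plain Python, no scraping involved):

```python
# Dummy content: 20,000 characters, split into 8,000-character chunks
content = "x" * 20000
chunks = [content[i:i + 8000] for i in range(0, len(content), 8000)]
# expect 3 chunks of 8000, 8000, and 4000 characters
```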

Then run the Python script; it should work much the same.