crewAIInc / crewAI-tools


Use built-in tools with local LLM models #7

Closed: binhphamthanh closed this issue 4 months ago

binhphamthanh commented 6 months ago

"_It seems we encountered an unexpected error while trying to use the tool. This was the error: 'OPENAI_APIKEY'" I got this message while trying to use tools. Can you please let me know how to switch into local LLM models.

maximinus commented 6 months ago

1: Ensure ollama is installed and running, and is version 0.1.27 or higher:

ps aux | grep ollama
ollama      1810 55.4  2.5 291658704 1663360 ?   Ssl  12:56  81:16 /usr/local/bin/ollama serve

ollama -v
ollama version is 0.1.27

2: Get the ollama model name

ollama list
NAME                            ID              SIZE    MODIFIED    
mistral:7b-instruct-v0.2-q8_0   3f321fd2a1c3    7.7 GB  5 days ago 

3: At the top of your Python code, before the crewai imports:

import os

os.environ['OPENAI_API_BASE'] = 'http://localhost:11434/v1'
os.environ['OPENAI_MODEL_NAME'] = 'mistral:7b-instruct-v0.2-q8_0'
os.environ['OPENAI_API_KEY'] = 'NA'

This should work on crewai 0.19
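
Putting it together, here is a minimal end-to-end sketch (the agent and task text are placeholders, and the model name assumes the mistral tag pulled above):

import os

# These must be set before crewai is imported
os.environ['OPENAI_API_BASE'] = 'http://localhost:11434/v1'
os.environ['OPENAI_MODEL_NAME'] = 'mistral:7b-instruct-v0.2-q8_0'
os.environ['OPENAI_API_KEY'] = 'NA'

from crewai import Agent, Task, Crew

# Throwaway agent and task, just to confirm the local model responds
agent = Agent(
  role='Tester',
  goal='Confirm the local model responds',
  backstory='You reply with a single short sentence.',
)
task = Task(
  description='Say hello in one sentence.',
  expected_output='A one-sentence greeting',
  agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
print(crew.kickoff())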

hiddenkirby commented 6 months ago

(quoting @maximinus's steps above in full)

  1. Using the latest versions of crewai (0.22.5) and crewai-tools (0.0.16).
  2. Running ollama run llama2 (llama2:latest).
  3. Running the following code:
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from langchain_community.llms import Ollama

# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.
os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
os.environ["OPENAI_MODEL_NAME"] ='llama2:latest'  # Adjust based on available model
os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
llama2 = Ollama(model="llama2")

search_tool = SerperDevTool()

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting actionable insights.""",
  verbose=True,
  allow_delegation=False,
  tools=[search_tool],
  llm=llama2
)
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  llm=llama2
)

# Create tasks for your agents
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.""",
  expected_output="Full analysis report in bullet points",
  agent=researcher
)

task2 = Task(
  description="""Using the insights provided, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.""",
  expected_output="Full blog post of at least 4 paragraphs",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=2, # You can set it to 1 or 2 for different logging levels
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)

I get:

(.venv) me@my-mbp-2 crewai_tests % python crewai_test.py
 [DEBUG]: == Working Agent: Senior Research Analyst
 [INFO]: == Starting Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.

> Entering new CrewAgentExecutor chain...
Thought: I understand the task at hand and will use the available tools to conduct a comprehensive analysis of the latest advancements in AI in 2024.

Action: Search the internet for "AI trends 2024"
Action Input: {"query": "AI trends 2024"} 

Action 'Search the internet for "AI trends 2024"' don't exist, these are the only available Actions: Search the internet: Search the internet(search_query: 'string') - A tool that can be used to semantic search a query from a txt's content.

Thought: I understand the task at hand and will use the available tools to conduct a comprehensive analysis of the latest advancements in AI in 2024.

Action: Search the internet for "AI trends 2024"
Action Input: {"query": "AI trends 2024"} 

Action 'Search the internet for "AI trends 2024"' don't exist, these are the only available Actions: Search the internet: Search the internet(search_query: 'string') - A tool that can be used to semantic search a query from a txt's content.

It's not understanding how to use tools... it seems.

If I:

  1. comment out
    os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
    os.environ["OPENAI_MODEL_NAME"] ='llama2:latest'  # Adjust based on available model
    os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
    llama2 = Ollama(model="llama2")
  2. set my correct OPENAI_API_KEY
  3. comment out llm=llama2 on both Agents

The search function (and consequently the script) will work. Conversely, if I remove the search_tool from the configuration, interacting with the local model does work.
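
Looking at the trace, the Action string the model emits ('Search the internet for "AI trends 2024"') doesn't match the registered tool name ('Search the internet'), which appears to be why the executor rejects it. A quick sketch for checking the exact name and argument a tool registers (attribute names follow the crewai_tools BaseTool convention):

from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

# The "Action:" line the agent emits must match this name exactly
print(search_tool.name)         # e.g. "Search the internet"
print(search_tool.description)  # shows the expected argument, e.g. search_query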

lhermoso commented 5 months ago

When using local models you must set OPENAI_API_KEY=NA
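
A minimal sketch of that, set before any crewai imports (the value is a placeholder; a local Ollama server doesn't validate the key, but the underlying OpenAI client requires one to be present):

import os

os.environ["OPENAI_API_KEY"] = "NA"  # placeholder; not validated by the local server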

GregHilston commented 5 months ago

@lhermoso I haven't seen this:

When using local models you must set OPENAI_API_KEY=NA

documented anywhere, nor have I been able to find it in the code. Can you explain where you found this and how you knew to do it?

For context, doing this has not improved my situation of running locally either.

lhermoso commented 5 months ago

(quoting @GregHilston's question above in full)

If you go to https://docs.crewai.com/how-to/LLM-Connections/#from-huggingfacehub-endpoint you will find this:

(screenshot of the relevant section of the LLM-Connections docs)

GregHilston commented 5 months ago

@lhermoso

Ah, the Ollama integration section is what I needed, as I'm using Ollama.

Thanks for bringing this part of the docs to my attention :)

joaomdmoura commented 4 months ago

We also need better docs, sorry about that; it's on our radar.