crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com

Add Google Gemini API Support #105

Closed. mindwellsolutions closed this issue 2 weeks ago.

mindwellsolutions commented 7 months ago

Since the Google Gemini Pro API is currently free for up to 60 API calls per minute, it would be incredibly helpful to add support for the Gemini API to CrewAI. It should perform better than GPT-3.5 without any API fees. However, CrewAI must let users define an API call/function limit, so they can cap it at 59 API requests per minute and avoid exceeding the free tier's per-minute limit.
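For context, the kind of client-side throttle I have in mind would look something like the rough sketch below. The RequestThrottle name and the 59-calls-per-minute figure are just illustrative, not an existing CrewAI feature; since CrewAI issues the LLM calls internally, something like throttle.wait() would still need to be wired into whatever wrapper makes the actual requests.

import time
from collections import deque

class RequestThrottle:
    """Block until another call fits inside the per-minute budget (e.g. 59 requests/minute)."""
    def __init__(self, max_calls_per_minute=59):
        self.max_calls = max_calls_per_minute
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # drop timestamps that have left the 60-second window
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # sleep until the oldest call leaves the window, then drop it
            time.sleep(60 - (now - self.calls[0]))
            self.calls.popleft()
        self.calls.append(time.monotonic())

# hypothetical usage: call throttle.wait() before every Gemini request
throttle = RequestThrottle(max_calls_per_minute=59)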

janda-datascience commented 7 months ago

import os
from crewai import Agent, Task, Crew, Process
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", verbose=True, temperature=0.1, google_api_key="GEMINI-API-KEY")

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank. Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting actionable insights.""",
  verbose=True,
  llm=llm,
  allow_delegation=False,
  tools=[],
)

writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True,
  llm=llm,
  tools=[],
)

task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Your final answer MUST be a full analysis report""",
  agent=researcher
)

task2 = Task(
  description="""Using the insights provided, develop an engaging blog post that highlights
  the most significant AI advancements. Your post should be informative yet accessible,
  catering to a tech-savvy audience. Make it sound cool, avoid complex words so it doesn't
  sound like AI. Your final answer MUST be the full blog post of at least 4 paragraphs.""",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=2,
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)

Thank you for the update

mindwellsolutions commented 7 months ago

@janda-datascience Thank you so much! Really excited to get CrewAI working with Google Gemini. I ran into one issue with your code: it appears /crewai/agent.py is still looking for OpenAI and its key for its functions, and it won't run without an OpenAI key, failing with the error: "Did not find openai_api_key, please add an environment variable OPENAI_API_KEY which contains it".

I assume settings need to be changed in /crewai/agent.py to point to the Gemini Pro API instead of OpenAI for all agent functions, so it doesn't bill the OpenAI API? It looks like this is the section; could you please help provide the proper updates to agent.py? By contrast, if I just add the OpenAI API key, it starts running agent tasks and billing me on the OpenAI API. Really appreciate your help.

agent.py section to modify? (Lines 57-60 in agent.py)

llm: Optional[Any] = Field(
    default_factory=lambda: ChatOpenAI(
        temperature=0.7,
        model_name="gpt-4",

Error: agent.py is still calling OpenAI:

Traceback (most recent call last):
  File "/home/aivirtual/apps/CrewAIGoogleGemini/CrewAI-TEST-2.py", line 41, in <module>
    researcher = Agent(
  File "/home/aivirtual/miniconda3/envs/crewai/lib/python3.10/site-packages/pydantic/main.py", line 164, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
  File "/home/aivirtual/miniconda3/envs/crewai/lib/python3.10/site-packages/crewai/agent.py", line 57, in <lambda>
    default_factory=lambda: ChatOpenAI(
  File "/home/aivirtual/miniconda3/envs/crewai/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 171, in warn_if_direct_instance
    return wrapped(self, *args, **kwargs)
  File "/home/aivirtual/miniconda3/envs/crewai/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 107, in __init__
    super().__init__(**kwargs)
  File "/home/aivirtual/miniconda3/envs/crewai/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
__root__
  Did not find openai_api_key, please add an environment variable OPENAI_API_KEY which contains it, or pass openai_api_key as a named parameter. (type=value_error)

punitchauhan771 commented 7 months ago

@mindwellsolutions By default the agent uses GPT-4; if you want to use your Gemini model, you can provide the Gemini LLM inside the agent definition:

eg:

from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent, Task, Crew, Process

#this is gemini llm
llm = ChatGoogleGenerativeAI(model="gemini-pro",verbose = True,temperature = 0.1)

#using gemini llm inside agent
SQLdev = Agent(
  role='SQL Developer ',
  goal='Give best Possible solution along with code',
  backstory="You are experienced sql developer who always provide optimized query",
  verbose=True,
  allow_delegation=False,
  llm = llm,  #using google gemini
  tools=[  # BrowserTools / SearchTools are custom tool classes (defined later in this thread)
        BrowserTools.scrape_and_summarize_website,
        SearchTools.search_internet,
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)
mindwellsolutions commented 7 months ago

@punitchauhan771 Thank you so much, that worked! It successfully runs CrewAI completely free with the Gemini Pro API! Going to spend some time testing it out. Really appreciate the help here. I had to make one small update to the solution @punitchauhan771 provided to get it working, so I thought I'd share it with everyone: you need to define the google_api_key directly within the llm = line of code.

Working update to the llm = line for the Gemini API key:

llm = ChatGoogleGenerativeAI(model="gemini-pro", verbose=True, temperature=0.1, google_api_key="<Enter Google Gemini API KEY>")
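As a side note, hardcoding the key shouldn't be strictly required: assuming the GOOGLE_API_KEY environment variable is set (as in the snippets later in this thread), ChatGoogleGenerativeAI should pick it up on its own, e.g.:

import os
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = "<Enter Google Gemini API KEY>"  # or export it in your shell
llm = ChatGoogleGenerativeAI(model="gemini-pro", verbose=True, temperature=0.1)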

Last question: the Google Gemini API (free tier) has a limit of 60 API calls per minute. Is there a way to throttle agent activity so it doesn't perform more than 59 API requests per minute, to avoid going over the free limits when dispatching large agent swarms? Really appreciate everything, ty.

punitchauhan771 commented 7 months ago

Hi @mindwellsolutions, I don't think there is a way to throttle agent activity. However, if you want to know how many requests your agents made, you can use the LangChain callback module. Example code:

import os
from crewai import Agent, Task, Crew, Process
from langchain.callbacks import get_openai_callback
# llm (the Gemini LLM), BrowserTools and SearchTools are defined as in the earlier snippets

# Define your agents with roles and goals
CopyWriter = Agent(
  role='Copy Writer',
  goal='To write the best article',
  backstory="You're an experienced Copy writer who writes Technical articles",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        BrowserTools.scrape_and_summarize_website,
        SearchTools.search_internet,
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)

SEO_Researcher = Agent(
  role='SEO analyst',
  goal='To Give the best seo based analyst tags',
  backstory="You're an experienced seo analyst who Give the best seo tags",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        BrowserTools.scrape_and_summarize_website,
        SearchTools.search_internet,
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)
# Define your agents with roles and goals
researcher = Agent(
  role='Researcher',
  goal='You provide results on the basis of Facts and only Facts along with supported doc related URLs ,You go to the root cause and give the best possible outcomes',
  backstory="You're an ai researcher who researches on the field of  AI and have won multiple awards",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        BrowserTools.scrape_and_summarize_website,
        SearchTools.search_internet,
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)

# Create tasks for your agents
task1 = Task(description='reseach on crew ai, reference url = https://github.com/joaomdmoura/crewAI', agent=researcher)
task2 = Task(description=f'Create an article on langchain agents and tools and also give an example and also write a detail summary on the basis of {researcher} response', agent = CopyWriter)
task3 = Task(description=f'give me best tags for the article written by {CopyWriter} Agent', agent=SEO_Researcher)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher,CopyWriter,SEO_Researcher],
  tasks=[task1,task2,task3],
  verbose=2, # Crew verbose mode will let you know what tasks are being worked on; you can set it to 1 or 2 for different logging levels
  process=Process.sequential # Sequential process will have tasks executed one after the other and the outcome of the previous one is passed as extra content into this next.
)

# result = crew.kickoff()

# Get your crew to work!
with get_openai_callback() as cb:
  result = crew.kickoff()
  print(result)
  print(cb)

At the end of the agent activity it will give you a response like:

Tokens Used: 0
  Prompt Tokens: 0
  Completion Tokens: 0
Successful Requests: 11
Total Cost (USD): $0.0

hope this helps.

mindwellsolutions commented 7 months ago

@punitchauhan771 Thank you and everyone for the help, such an amazing community. This would help a lot, but I'm using the Google Gemini API rather than OpenAI, via from langchain_google_genai import ChatGoogleGenerativeAI.

Is there a callback for ChatGoogleGenerativeAI similar to the OpenAI callback you posted (from langchain.callbacks import get_openai_callback)? Really appreciate the help.

punitchauhan771 commented 7 months ago

Hi @mindwellsolutions, the solution I provided works for Gemini as well, though it doesn't count tokens 🙂.
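If you only need the request count for Gemini, a rough alternative is a tiny custom LangChain callback handler attached to the LLM. This is just a sketch (the RequestCounter name is illustrative); it counts completed LLM calls regardless of the provider:

from langchain.callbacks.base import BaseCallbackHandler
from langchain_google_genai import ChatGoogleGenerativeAI

class RequestCounter(BaseCallbackHandler):
    """Counts every completed LLM call, regardless of provider."""
    def __init__(self):
        self.successful_requests = 0

    def on_llm_end(self, response, **kwargs):
        self.successful_requests += 1

counter = RequestCounter()
llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.1, callbacks=[counter])

# run your crew as usual, then check:
# print(counter.successful_requests)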

mindwellsolutions commented 7 months ago

@punitchauhan771 Thanks again. Unfortunately, using your solution, after 2 agents complete all their tasks successfully with the Gemini API it still shows "Successful Requests: 0". As you mentioned, simply being able to monitor the number of successful requests is all Gemini Pro (free tier) needs, since there are no costs to track for token usage :)

The output I get after Gemini clearly performs multiple tasks:

Tokens Used: 0
  Prompt Tokens: 0
  Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0

punitchauhan771 commented 7 months ago

@mindwellsolutions If possible, can you provide me the code snippet? I just ran the code and this is the response I got while using Gemini:


Tokens Used: 0
  Prompt Tokens: 0
  Completion Tokens: 0
Successful Requests: 11
Total Cost (USD): $0.0
mindwellsolutions commented 7 months ago

@punitchauhan771 Actually, I closed the terminal window in VS Code, ran it again, and it worked perfectly. Really appreciate all your help, you've gotten everything running perfectly for me. Ty

edisonzf2020 commented 7 months ago

The performance of the Gemini Pro model using tools is not good. There are big problems with the logic and results of using the tools, and the final results are often hallucinatory.

mindwellsolutions commented 7 months ago

@edisonzf2020 Thanks for your comment. I got time to test the Gemini API in CrewAI further over the weekend, and as you mentioned it seems to be having issues using tools like DuckDuckGoSearch. I tested the OpenAI API and Ollama (Zephyr) and both worked perfectly, while Gemini appears to be pulling responses from its internal knowledge base rather than relaying the data from DuckDuckGo.

Are there any possible fixes moving forward to get Gemini to play well with CrewAI? A free API for personal use would provide incredible scalability without having to run local models on local GPU resources.

joaomdmoura commented 7 months ago

Hey folks, catching up on this issue! Great comments, so glad you were able to get Gemini working. We are adding new docs that will have instructions for all the major models, so stay tuned for that.

I'll do some testing with Gemini models specifically to see how we could make that better!

mindwellsolutions commented 7 months ago

@joaomdmoura Thank you. This is incredibly appreciated. We realized that the Gemini API currently has significant problems using CrewAI's tools like DuckDuckGoSearch. It appears none of the data from the tools makes it back to Gemini, and Gemini always generates answers from its internal knowledge base rather than from the research done by the tool. If there is a way to fix this so Gemini works like the other LLMs, that would be amazing.

I've been using Zephyr 7B as a local model and it runs well, but swarms of any decent size would require significant compute to run locally, whereas with Gemini's free API they can be run from any device, which is going to be extremely valuable. Especially if each agent is assigned its own free Gemini API key, the ability to build large agent swarms for free will be significant.
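On the per-agent key idea, here is a rough sketch of how it might look (the key placeholders and agent fields are illustrative): each agent gets its own ChatGoogleGenerativeAI instance with a different google_api_key, so each one draws on its own free-tier quota.

from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent

# one LLM instance (and free-tier quota) per agent; the keys are placeholders
researcher_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.1, google_api_key="<GEMINI_KEY_1>")
writer_llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.1, google_api_key="<GEMINI_KEY_2>")

researcher = Agent(role='Researcher', goal='Research AI news', backstory='An AI researcher.', llm=researcher_llm)
writer = Agent(role='Writer', goal='Write up the findings', backstory='A technical writer.', llm=writer_llm)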

Thanks for all that you do. Really loving the functionality of CrewAI.

punitchauhan771 commented 7 months ago

> @edisonzf2020 Thanks for your comment. I got time to test the Gemini API in CrewAI further over the weekend, and as you mentioned it seems to be having issues using tools like DuckDuckGoSearch. I tested the OpenAI API and Ollama (Zephyr) and both worked perfectly, while Gemini appears to be pulling responses from its internal knowledge base rather than relaying the data from DuckDuckGo.
>
> Are there any possible fixes moving forward to get Gemini to play well with CrewAI? A free API for personal use would provide incredible scalability without having to run local models on local GPU resources.

Hi @mindwellsolutions, I'm uncertain if it's sourcing answers from its internal knowledge base. For reference, I've attempted the same inquiry using the Gemini LLM, a LangChain agent, and a CrewAI agent. If it's utilizing its internal knowledge base, then it shouldn't need tools like search, right? However, when I tried using the CrewAI agent without any search tool, it provided me with this response. Code:

researcher = Agent(
  role='Researcher',
  goal='You provide results on the basis of Facts and only Facts along with supported doc related urls,You go to the root cause and give the best possible outcomes',
  backstory="You're an ai researcher who researches on the field of  AI and have won multiple awards",
  verbose=True,
  allow_delegation=False,
  llm = llm)

task1 = Task(description='research on crewai in context of llm agents', agent=researcher)
crew = Crew(agents=[researcher], tasks=[task1], verbose=2)  # crew assembled as in the earlier snippets
crew.kickoff()

response:

> Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Google Search
Action Input: crewai in context of llm agentsGoogle Search is not a valid tool, try one of [].Do I need to use a tool? No
Final Answer: I am sorry, I do not have access to the internet to perform a Google search on crewai in the context of LLM agents.

> Finished chain.

[DEBUG]: [Researcher] Task output: I am sorry, I do not have access to the internet to perform a Google search on crewai in the context of LLM agents.

I am sorry, I do not have access to the internet to perform a Google search on crewai in the context of LLM agents.

When I tried Google search, the response was:

> Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Search the internet
Action Input: crewai in context of llm agents{'searchParameters': {'q': 'crewai in context of llm agents', 'type': 'search', 'engine': 'google'}, 'organic': [{'title': 'CrewAI: A Team of AI Agents that Work Together for You - Medium', 'link': 'https://medium.com/@mayaakim/crewai-a-team-of-ai-agents-that-work-together-for-you-4cc9d24e0857', 'snippet': 'In this really cool intro to LLMs, Andrej, one of the top engineers at OpenAI references a book by Daniel Kahneman “Thinking Fast and Slow”.', 'date': 'Jan 5, 2024', 'position': 1}, {'title': 'CrewAI Unleashed: Future of AI Agent Teams - LangChain Blog', 'link': 'https://blog.langchain.dev/crewai-unleashed-future-of-ai-agent-teams/', 'snippet': 'AI Agents Crews are game-changing. AI agents are emerging as game-changers, quickly becoming partners in problem-solving, creativity, and ...', 'date': 'Dec 21, 2023', 'position': 2}, {'title': 'CrewAI - Your Own Team of Autonomous Agents - YouTube', 'link': 'https://www.youtube.com/watch?v=B8s-FyN4UeE', 'snippet': 'CrewAI is an autonomous Agents framework that enable you to build a crew of agents to ...', 'date': 'Jan 13, 2024', 'attributes': {'Duration': '10:32', 'Posted': 'Jan 13, 2024'}, 'imageUrl': 'https://i.ytimg.com/vi/B8s-FyN4UeE/default.jpg?sqp=-oaymwEECHgQQw&rs=AMzJL3k9fjeL4ynDy_DtrDvC3Edp-SJ5dQ', 'position': 3}, {'title': 'Crew AI | Unleashing the Potential of Multi-Agent Interactions', 'link': 'https://medium.com/@rajib76.gcp/crew-ai-unleashing-the-potential-of-multi-agent-interactions-d57ce8b00fbc', 'snippet': 'This method promotes a modular and complementary amalgamation of the extensive skills provided by LLMs.', 'date': '8 days ago', 'position': 4}, {'title': 'CrewAI: Framework For Creating Autonomous AI Agents - YouTube', 'link': 'https://www.youtube.com/watch?v=Kbq9m-x7gYU', 'snippet': 'In this in-depth tutorial, we dive into the revolutionary CrewAI Framework, focusing on the game ...', 'date': 'Jan 10, 2024', 'attributes': {'Duration': '11:57', 'Posted': 'Jan 10, 2024'}, 'imageUrl': 'https://i.ytimg.com/vi/Kbq9m-x7gYU/default.jpg?sqp=-oaymwEECHgQQw&rs=AMzJL3lQf_a4mE2Yf14VahaxKqkL8MMbDg', 'position': 5}, {'title': 'CrewAi + Solor/Hermes + Langchain + Ollama = Super Ai Agent', 'link': 'https://pub.towardsai.net/crewai-solor-hermes-langchain-ollama-super-ai-agent-0ee348404428', 'snippet': '... explanations; LLM: This stands for “large language model” and in this case, ollama_openhermes is passed as the model for the agent to use.', 'date': '8 days ago', 'position': 6}, {'title': 'Company Spotlight: CrewAI - Replit — Blog', 'link': 'https://blog.replit.com/crew-ai', 'snippet': 'AI agents are here to stay Large Language Models (LLMs) are everywhere, doing various jobs, from chatting to parsing documents.', 'date': 'Dec 21, 2023', 'position': 7}, {'title': 'CrewAI : How To Build AI Agent Teams', 'link': 'https://gptpluginz.com/crewai-how-to-build-ai-agent-teams/', 'snippet': 'Orchestrating role-playing, autonomous AI agents for complex tasks. Development of LLM applications using conversable, customizable agents.', 'date': 'Jan 6, 2024', 'position': 8}, {'title': 'CrewAI: AI-Powered Blogging Agents using LM Studio, Ollama ...', 'link': 'https://www.youtube.com/watch?v=fnchsJd9pfE', 'snippet': "Welcome to an exciting journey into the world of AI-powered blogging! 
In today's video, I take ...", 'date': '6 days ago', 'attributes': {'Duration': '8:14', 'Posted': '6 days ago'}, 'imageUrl': 'https://i.ytimg.com/vi/fnchsJd9pfE/default.jpg?sqp=-oaymwEECHgQQw&rs=AMzJL3lTSxNEYaT9UykAOakyZkYNY50VUg', 'position': 9}, {'title': 'CrewAI agent framework with local models : r/LocalLLaMA - Reddit', 'link': 'https://www.reddit.com/r/LocalLLaMA/comments/18v527r/crewai_agent_framework_with_local_models/', 'snippet': 'Basically, all the agentic software I want to develop needs tools. I tried connecting to Ollama via LiteLLM (as an OpenAI proxy) which also ...', 'date': 'Dec 31, 2023', 'position': 10}], 'peopleAlsoAsk': [{'question': 'What are different types of agents in AI?', 'snippet': "AGENTS IN ARTIFICIAL INTELLIGENCE CAN BE CATEGORIZED INTO DIFFERENT TYPES BASED ON HOW AGENT'S ACTIONS AFFECT THEIR PERCEIVED INTELLIGENCE AND CAPABILITIES, SUCH AS:\nSimple reflex agents.\nModel-based agents.\nGoal-based agents.\nUtility-based agents.\nLearning agents.\nHierarchical agents.", 'title': '6 Types of AI Agents: Exploring the Future of Intelligent Machines', 'link': 'https://www.simform.com/blog/types-of-ai-agents/'}, {'question': 'What is a model-based reflex agent in AI?', 'snippet': "A model-based reflex agent is one that uses internal memory and a percept history to create a model of the environment in which it's operating and make decisions based on that model. The term percept means something that has been observed or detected by the agent.", 'title': 'Agent-Based Modeling | Process & Examples - Video & Lesson Transcript', 'link': 'https://study.com/academy/lesson/model-based-agents-definition-interactions-examples.html'}, {'question': 'What is a goal-based agent in AI?', 'snippet': 'A goal-based agent is an AI system designed to achieve a specific goal. The goal can be anything from navigating a maze to playing a game. Given a plan, a goal-based agent attempts to choose the best strategy to achieve it based on the environment.', 'title': 'Difference Between Goal-based and Utility-based Agents - Baeldung', 'link': 'https://www.baeldung.com/cs/goal-based-vs-utility-based-agents'}, {'question': 'What is learning agent in artificial intelligence?', 'snippet': 'A learning agent is a tool in AI that is capable of learning from its experiences. It starts with some basic knowledge and is then able to act and adapt autonomously, through learning, to improve its own performance.', 'title': 'Learning Agents: Definition, Components & Examples - Study.com', 'link': 'https://study.com/academy/lesson/learning-agents-definition-components-examples.html'}], 'relatedSearches': [{'query': 'Crewai in context of llm agents examples'}]}
Title: CrewAI: A Team of AI Agents that Work Together for You - Medium
Link: https://medium.com/@mayaakim/crewai-a-team-of-ai-agents-that-work-together-for-you-4cc9d24e0857
Snippet: In this really cool intro to LLMs, Andrej, one of the top engineers at OpenAI references a book by Daniel Kahneman “Thinking Fast and Slow”.

-----------------
Title: CrewAI Unleashed: Future of AI Agent Teams - LangChain Blog
Link: https://blog.langchain.dev/crewai-unleashed-future-of-ai-agent-teams/
Snippet: AI Agents Crews are game-changing. AI agents are emerging as game-changers, quickly becoming partners in problem-solving, creativity, and ...

-----------------
Title: CrewAI - Your Own Team of Autonomous Agents - YouTube
Link: https://www.youtube.com/watch?v=B8s-FyN4UeE
Snippet: CrewAI is an autonomous Agents framework that enable you to build a crew of agents to ...

-----------------
Title: Crew AI | Unleashing the Potential of Multi-Agent Interactions
Link: https://medium.com/@rajib76.gcp/crew-ai-unleashing-the-potential-of-multi-agent-interactions-d57ce8b00fbc
Snippet: This method promotes a modular and complementary amalgamation of the extensive skills provided by LLMs.

-----------------Do I need to use a tool? No
Final Answer: CrewAI is a platform that allows users to create and manage teams of AI agents. These agents can be used to automate tasks, generate content, and provide customer service. CrewAI is powered by large language models (LLMs), which are a type of AI that can understand and generate human language. LLMs are trained on massive datasets of text and code, which allows them to learn to perform a wide variety of tasks.

CrewAI's agents are designed to work together as a team, which allows them to solve problems that would be difficult for a single agent to solve. For example, a team of agents could be used to develop a marketing campaign, write a blog post, or create a customer service chatbot.

CrewAI is a powerful tool that can be used to automate tasks, generate content, and provide customer service. It is a valuable resource for businesses and individuals who want to use AI to improve their productivity and efficiency.

> Finished chain.

[DEBUG]: [Researcher] Task output: CrewAI is a platform that allows users to create and manage teams of AI agents. These agents can be used to automate tasks, generate content, and provide customer service. CrewAI is powered by large language models (LLMs), which are a type of AI that can understand and generate human language. LLMs are trained on massive datasets of text and code, which allows them to learn to perform a wide variety of tasks.

CrewAI's agents are designed to work together as a team, which allows them to solve problems that would be difficult for a single agent to solve. For example, a team of agents could be used to develop a marketing campaign, write a blog post, or create a customer service chatbot.

CrewAI is a powerful tool that can be used to automate tasks, generate content, and provide customer service. It is a valuable resource for businesses and individuals who want to use AI to improve their productivity and efficiency.

CrewAI is a platform that allows users to create and manage teams of AI agents. These agents can be used to automate tasks, generate content, and provide customer service. CrewAI is powered by large language models (LLMs), which are a type of AI that can understand and generate human language. LLMs are trained on massive datasets of text and code, which allows them to learn to perform a wide variety of tasks.\n\nCrewAI's agents are designed to work together as a team, which allows them to solve problems that would be difficult for a single agent to solve. For example, a team of agents could be used to develop a marketing campaign, write a blog post, or create a customer service chatbot.\n\nCrewAI is a powerful tool that can be used to automate tasks, generate content, and provide customer service. It is a valuable resource for businesses and individuals who want to use AI to improve their productivity and efficiency.

And when I tried DuckDuckGo search, the response was:

[DEBUG]: Working Agent: Researcher

[INFO]: Starting Task: research on crewai in context of llm agents

> Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Scrape website content
Action Input: https://crew.ai/The content provided is an advertisement for the sale of the domain name "crew.ai" through Dan.com. It highlights the benefits of using their platform, including buyer protection, fast and easy transfers, and hassle-free payments. The content does not contain any relevant information that can be summarized.Do I need to use a tool? Yes
Action: duckduckgo_search
Action Input: crewai llm agentsCrewAI is a Framework that will make easy for us to get Local AI Agents interacting between them. Using Crew AI Pre-Requisites - Get Docker! 👇 Follow the steps below to get CrewAI in a Docker Container to have all the dependencies contained. With this approach, we will get our Free AI Agents interacting between them locally. CrewAI's vision is clear, to allow engineers to harness the collective power of AI agents, moving beyond traditional automation, by bringing Agents together and allowing for streamlined decision-making, enhanced creativity, and solving complex challenges. Simplicity through Modular Design krazzmann CrewAI agent framework with local models Resources This is great news for everyone who wants to develop agentic software. After a lot of failure and disappointments with running Autogen with local models, I tried the rising star of agent frameworks, CrewAI. CrewAI is a library specifically designed to build and orchestrate groups of AI agents. It's made to be straightforward and modular, so integrating it into your projects is a breeze. Think of CrewAI like a set of building blocks - each piece is unique, but they're all designed to fit together smoothly. CrewAI is a new multi-agent framework built on top of LangChain to simplify LLM-based Agent development. In this blog post, I'll guide you through the essentials of using CrewAI to streamline complex workflows with Python. From setting up AI agents to managing tasks, you'll learn how CrewAI's versatility can enhance efficiency in various domains.Do I need to use a tool? No
Final Answer: CrewAI is a framework that simplifies the development and orchestration of groups of AI agents. It is designed to be straightforward and modular, making it easy to integrate into existing projects. CrewAI is built on top of LangChain, a platform for building and deploying language models. It provides a set of building blocks that can be used to create complex workflows with Python. CrewAI is particularly useful for tasks that require collaboration between multiple AI agents, such as decision-making, creativity, and problem-solving.

Here are some additional details about CrewAI:

* It is open-source and available on GitHub.
* It is compatible with a variety of AI models, including GPT-3, T5, and BERT.
* It provides a range of features for managing AI agents, including task assignment, communication, and monitoring.
* It has been used to develop a variety of applications, including chatbots, customer service agents, and game AI.

Overall, CrewAI is a powerful tool for developing and deploying AI agents. It is easy to use and can be used to create a wide range of applications.

> Finished chain.

[DEBUG]: [Researcher] Task output: CrewAI is a framework that simplifies the development and orchestration of groups of AI agents. It is designed to be straightforward and modular, making it easy to integrate into existing projects. CrewAI is built on top of LangChain, a platform for building and deploying language models. It provides a set of building blocks that can be used to create complex workflows with Python. CrewAI is particularly useful for tasks that require collaboration between multiple AI agents, such as decision-making, creativity, and problem-solving.

Here are some additional details about CrewAI:

* It is open-source and available on GitHub.
* It is compatible with a variety of AI models, including GPT-3, T5, and BERT.
* It provides a range of features for managing AI agents, including task assignment, communication, and monitoring.
* It has been used to develop a variety of applications, including chatbots, customer service agents, and game AI.

Overall, CrewAI is a powerful tool for developing and deploying AI agents. It is easy to use and can be used to create a wide range of applications.

CrewAI is a framework that simplifies the development and orchestration of groups of AI agents. It is designed to be straightforward and modular, making it easy to integrate into existing projects. CrewAI is built on top of LangChain, a platform for building and deploying language models. It provides a set of building blocks that can be used to create complex workflows with Python. CrewAI is particularly useful for tasks that require collaboration between multiple AI agents, such as decision-making, creativity, and problem-solving.\n\nHere are some additional details about CrewAI:\n\n* It is open-source and available on GitHub.\n* It is compatible with a variety of AI models, including GPT-3, T5, and BERT.\n* It provides a range of features for managing AI agents, including task assignment, communication, and monitoring.\n* It has been used to develop a variety of applications, including chatbots, customer service agents, and game AI.\n\nOverall, CrewAI is a powerful tool for developing

Also, when I tried the same using a LangChain agent, the responses were the same, but when I just used the basic LLM, the response was:

AIMessage(content="CreAI is a powerful AI-powered writing assistant that helps you create high-quality content quickly and easily. It uses advanced natural language processing (NLP) and machine learning algorithms to understand your writing style and generate text that is both informative and engaging.\n\nWith CreAI, you can:\n\n* **Generate unique and original content:** CreAI can help you create unique and original content that is free of plagiarism. It uses a variety of sources to gather information and then generates text that is both accurate and interesting.\n* **Improve your writing style:** CreAI can help you improve your writing style by identifying common errors and suggesting improvements. It can also help you develop a more consistent and professional writing style.\n* **Save time:** CreAI can help you save time by generating content quickly and easily. This can free up your time to focus on other tasks, such as marketing and promotion.\n\nCreAI is a valuable tool for anyone who wants to create high-quality content quickly and easily. It is especially useful for businesses, marketers, and content creators who need to produce a lot of content on a regular basis.\n\nHere are some specific examples of how CreAI can be used:\n\n* **Blog posts:** CreAI can help you create blog posts that are informative, engaging, and SEO-friendly. It can also help you come up with new blog post ideas and generate outlines.\n* **Articles:** CreAI can help you write articles for websites, magazines, and newspapers. It can also help you research topics and find relevant sources.\n* **Social media posts:** CreAI can help you create social media posts that are engaging and shareable. It can also help you come up with new social media content ideas.\n* **Product descriptions:** CreAI can help you write product descriptions that are clear, concise, and persuasive. It can also help you highlight the benefits of your products and services.\n* **Email marketing campaigns:** CreAI can help you create email marketing campaigns that are effective and engaging. It can also help you write email subject lines that are likely to get opened.\n\nCreAI is a powerful tool that can help you create high-quality content quickly and easily. It is a valuable asset for anyone who wants to succeed in today's digital world.")

Also, it doesn't allow questionable prompts, as mentioned in the Gemini docs; you will get a response something like this:

I'm sorry, but this prompt involves a sensitive topic and I'm not allowed to generate responses that are potentially harmful or inappropriate.
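If those refusals become a problem, one thing to try is relaxing Gemini's safety thresholds when constructing the LLM. This is only a sketch and assumes your langchain_google_genai version exposes the safety_settings argument and the HarmCategory/HarmBlockThreshold enums:

from langchain_google_genai import ChatGoogleGenerativeAI, HarmBlockThreshold, HarmCategory

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.1,
    safety_settings={
        # assumption: loosen only the categories that keep tripping on your prompts
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)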
mindwellsolutions commented 7 months ago

@punitchauhan771 Thanks for your detailed research. The test that leads me to believe it's using internal knowledge: when I ask it to research information only from the year 2024 on any topic, with Google Gemini Pro (free) and DuckDuckGoSearch, I get the response "I am sorry, but I cannot find any information on CrewAI in the context of LLM agents specifically from the year 2024. My access to data is limited to information available up until April 2023. I recommend checking more up-to-date sources or reaching out to CrewAI directly for more information."

If I run the same CrewAI script using an OpenAI key or Zephyr 7B as a local model, it does relay 2024-only research back using DuckDuckGo. DuckDuckGoSearch does run as an "Action:", so it appears there is still an issue where the information found by DuckDuckGoSearch is not relayed back to Gemini, and Gemini falls back on its internal knowledge up to April 2023.

Full example of my test: I used the exact Researcher agent settings you posted above, with the only change being that the task asks for research from 2024 only: "research on crewai in context of llm agents only from 2024!". You could do the same with "research new breakthroughs in AI llm agents only from 2024!" and it will show the April 2023 cutoff as well (see the second code area below).

Working Agent: Researcher
Starting Task: research on crewai in context of llm agents only from 2024! 

Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: duckduckgo_search
Action Input: CrewAI LLM agents 2024An Overview of Why LLM Benchmarks Exist, How They Work, and What's Next LLMs are complex. ... CrewAi + Solor/Hermes + Langchain + Ollama = Super Ai Agent January 14, 2024. 
How To Understand OCR Quality To Optimize Performance January 14, 2024. Bridging the Gap: Integrating Data Science and Decision Science through Six Essential Questions ... What is CrewAI? Crew AI is a cutting-edge framework designed for orchestrating role-playing, autonomous AI agents, allowing these agents to collaborate and solve complex tasks efficiently. Key Features of CrewAi include: Role-based agent design: CrewAi allows you to customize artificial intelligence AI agents with specific roles, goals, and tools. CrewAI is a Framework that will make easy for us to get Local AI Agents interacting between them. Using Crew AI Pre-Requisites - Get Docker! 👇 Follow the steps below to get CrewAI in a Docker Container to have all the dependencies contained. With this approach, we will get our Free AI Agents interacting between them locally. CrewAI's vision is clear, to allow engineers to harness the collective power of AI agents, moving beyond traditional automation, by bringing Agents together and allowing for streamlined decision-making, enhanced creativity, and solving complex challenges. Simplicity through Modular Design LVM: Revolutionizing Vision AI Parallel to the development of Q* is the breakthrough in vision AI, marked by the introduction of Large Vision Models (LVM). A recent paper published on arxiv.org by researchers from the University of California, Berkeley (UCB), and Johns Hopkins University (JHU) details this advancement.Do I need to use a tool? No

**Final Answer: I am sorry, but I cannot find any information on CrewAI in the context of LLM agents specifically from the year 2024. My access to data is limited to information available up until April 2023. I recommend checking more up-to-date sources or reaching out to CrewAI directly for more information.**

For the general 2024 AI breakthroughs test, the task was: task = 'research on AI breakthroughs only from 2024!'

> Finished chain.
Task output: I'm sorry, but I cannot provide you with information about AI breakthroughs from 2024 as my knowledge is only up to April 2023 and I do not have access to real-time information or the ability to 
predict future events.
######################
I'm sorry, but I cannot provide you with information about AI breakthroughs from 2024 as my knowledge is only up to April 2023 and I do not have access to real-time information or the ability to predict future events.
punitchauhan771 commented 7 months ago

Hi @mindwellsolutions, thank you for the detailed write-up. I tried fetching the latest CES 2024 info and this is the response that I got. For example, when I used a web-scraping tool. Code:

import datetime
news_reporter = Agent(
  role='news_reporter',
  goal=f'You provide the latest news as of available till {datetime.datetime.now()}',
  backstory="You're an professional news reporter who covers tech related news and are the best in that field",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        BrowserTools.scrape_and_summarize_website, #scraping tool
        # search_tool
        SearchTools.search_internet, #google search
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)

task1 = Task(description= 'give me a detailed tech report for ces 2024 hosted in las vegas.',agent = news_reporter)

response:

[DEBUG]: Working Agent: news_reporter

[INFO]: Starting Task: give me a detailed tech report for ces 2024 hosted in las vegas along with all the tools that we displayed (scrape medium articles if necessary), word limit : 150.

> Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Scrape website content
Action Input: https://www.engadget.com/ces-2024-highlights-day-1-230000002.htmlHere is a summary of the content:

- Apple released iOS 17.3 with a new Stolen Device Protection tool and a refreshed iPadOS.
- NASA reestablished contact with the Ingenuity Mars helicopter.
- Alphabet is cutting jobs at its X moonshot lab to make it easier to spin out projects into startups.
- Disney's A Real Bug's Life docu-series uses weird lenses and hand-made robots to deliver cinematic shots of insects.
- Apple dropped a mysterious trailer for its latest sci-fi series Constellation, starring Noomi Rapace.
- Apple might have sold up to 180,000 Vision Pro headsets over the pre-order weekend.
- NASA shared images of the full asteroid Bennu sample.
- Riot Games is laying off 11% of its workforce globally, impacting 530 people, and shutting down its publishing label Riot Forge.
- The SEC confirmed that its X account was taken over with a SIM swap attack, highlighting the importance of multi-factor authentication (MFA).
- Apple is reportedly considering rewarding artists for offering music in spatial audio, potentially leading to higher royalties.
- The Day Before, a $40 game, was shut down 46 days after launch due to negative reviews from gamers citing bugs, unoriginality, and slow performance.
- NVIDIA's RTX Remix tool is now available as a free open beta, allowing modders to add ray-tracing and AI-upscaled textures to older games.
- LoanDepot disclosed a data breach affecting 16 million customers due to a ransomware attack that slowed down the business for over a week.
- Carnegie Mellon University revealed a cyberattack over the summer that impacted about 7,000 students, employees, contractors, and others.
- Apparel supplier VF Corp admitted to a cyberattack that led to a data breach of 35 million customers, impacting holiday fulfillment.
- Korg Nu:Tekt DIY line introduced a new mini synth and a Kaoss Pad with an updated LogueSDK and expressive effects.
- WhatsApp may soon offer an AirDrop-like file sharing feature across nearby devices within the app.
- NASA shared images of the contents of the Bennu asteroid container, revealing rocks and dust.
- Meta is allowing users in the EU to uncouple Messenger and Marketplace accounts, complying with regulations in the Digital Markets Act.Do I need to use a tool? No
Final Answer: CES 2024, held in Las Vegas, showcased a plethora of innovative technologies and gadgets. Apple unveiled iOS 17.3 with enhanced security features and a refreshed iPadOS. NASA reconnected with the Ingenuity Mars helicopter, while Alphabet restructured its X moonshot lab to foster project spin-offs. Disney's A Real Bug's Life docu-series employed unique lenses and handmade robots for captivating insect footage. Apple teased its upcoming sci-fi series Constellation, starring Noomi Rapace. Reports suggest Apple's Vision Pro headset pre-orders reached up to 180,000 units. NASA shared images of the full asteroid Bennu sample. Riot Games downsized its workforce by 11%, affecting 530 employees, and closed its publishing label Riot Forge. The SEC experienced a SIM swap attack, highlighting the importance of multi-factor authentication. Apple considered rewarding artists for offering music in spatial audio, potentially increasing royalties. The Day Before, a $40 game, faced closure 46 days after launch due to negative reviews. NVIDIA's RTX Remix tool entered open beta, enabling modders to enhance older games with ray-tracing and AI-upscaled textures. LoanDepot and VF Corp disclosed data breaches affecting millions of customers due to cyberattacks. Korg Nu:Tekt DIY introduced a mini synth and an updated Kaoss Pad. WhatsApp hinted at an AirDrop-like file sharing feature. NASA revealed images of the Bennu asteroid container's contents. Meta complied with EU regulations by allowing users to separate Messenger and Marketplace accounts.

> Finished chain.

[DEBUG]: [news_reporter] Task output: CES 2024, held in Las Vegas, showcased a plethora of innovative technologies and gadgets. Apple unveiled iOS 17.3 with enhanced security features and a refreshed iPadOS. NASA reconnected with the Ingenuity Mars helicopter, while Alphabet restructured its X moonshot lab to foster project spin-offs. Disney's A Real Bug's Life docu-series employed unique lenses and handmade robots for captivating insect footage. Apple teased its upcoming sci-fi series Constellation, starring Noomi Rapace. Reports suggest Apple's Vision Pro headset pre-orders reached up to 180,000 units. NASA shared images of the full asteroid Bennu sample. Riot Games downsized its workforce by 11%, affecting 530 employees, and closed its publishing label Riot Forge. The SEC experienced a SIM swap attack, highlighting the importance of multi-factor authentication. Apple considered rewarding artists for offering music in spatial audio, potentially increasing royalties. The Day Before, a $40 game, faced closure 46 days after launch due to negative reviews. NVIDIA's RTX Remix tool entered open beta, enabling modders to enhance older games with ray-tracing and AI-upscaled textures. LoanDepot and VF Corp disclosed data breaches affecting millions of customers due to cyberattacks. Korg Nu:Tekt DIY introduced a mini synth and an updated Kaoss Pad. WhatsApp hinted at an AirDrop-like file sharing feature. NASA revealed images of the Bennu asteroid container's contents. Meta complied with EU regulations by allowing users to separate Messenger and Marketplace accounts.

CES 2024, held in Las Vegas, showcased a plethora of innovative technologies and gadgets. Apple unveiled iOS 17.3 with enhanced security features and a refreshed iPadOS. NASA reconnected with the Ingenuity Mars helicopter, while Alphabet restructured its X moonshot lab to foster project spin-offs. Disney's A Real Bug's Life docu-series employed unique lenses and handmade robots for captivating insect footage. Apple teased its upcoming sci-fi series Constellation, starring Noomi Rapace. Reports suggest Apple's Vision Pro headset pre-orders reached up to 180,000 units. NASA shared images of the full asteroid Bennu sample. Riot Games downsized its workforce by 11%, affecting 530 employees, and closed its publishing label Riot Forge. The SEC experienced a SIM swap attack, highlighting the importance of multi-factor authentication. Apple considered rewarding artists for offering music in spatial audio, potentially increasing royalties. The Day Before, a $40 game, faced closure 46 days after launch due to negative reviews. NVIDIA's RTX Remix tool entered open beta, enabling modders to enhance older games with ray-tracing and AI-upscaled textures. LoanDepot and VF Corp disclosed data breaches affecting millions of customers due to cyberattacks. Korg Nu:Tekt DIY introduced a mini synth and an updated Kaoss Pad. WhatsApp hinted at an AirDrop-like file sharing feature. NASA revealed images of the Bennu asteroid container's contents. Meta complied with EU regulations by allowing users to separate Messenger and Marketplace accounts.

And when I used DuckDuckGo search, the code was:

news_reporter = Agent(
  role='news_reporter',
  goal=f'You provide the latest news as of available till {datetime.datetime.now()}',
  backstory="You're an professional news reporter who covers tech related news and are the best in that field",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        # BrowserTools.scrape_and_summarize_website,
        search_tool #duckduckgo search
        # SearchTools.search_internet,
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)
task1 = Task(description= 'give me a detailed tech report for ces 2024 hosted in las vegas.',agent = news_reporter)

response:

> Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: duckduckgo_search
Action Input: CES 2024 Las VegasHardware CES 2024: Everything revealed so far, from Nvidia and Sony to the weirdest reveals and helpful AI Christine Hall @ christinemhall / 12:41 PM PST • January 12, 2024 Comment Image... Tech Events CES 2024: all the latest news and reviews from this year's huge tech event News By TechRadar Team Contributions from Mark Wilson, Axel Metz, Hamish Hector, Matt Hanson last updated... Great Minds Home Audio, Entertainment and Streaming | $$$ MEA: Connect2Car - The Electrifying Future of Mobility | PARTNER | $$$ Research Summit Space Tech | $$$ Vehicle Tech and Advanced Air Mobility | $$$ LVCC North Hall Continuing at the LVCC, North Hall will feature IoT, AI and robotics, smart cities and digital health. CES 2024: AI everything, what we expect in Las Vegas and all the announcements so far CES 2024: AI everything, what we expect in Las Vegas and all the announcements so far Wireless... LAS VEGAS (AP) — CES, the Consumer Technology Association's annual trade show of all-things tech, is kicking off in Las Vegas this week.Do I need to use a tool? No
Final Answer: CES 2024, the Consumer Technology Association's annual trade show, took place in Las Vegas, showcasing the latest advancements in technology. The event featured a wide range of innovations, including AI-powered devices, self-driving cars, and cutting-edge gadgets. Major companies like Nvidia and Sony unveiled their latest products, while smaller startups showcased their unique creations. CES 2024 highlighted the rapid pace of technological progress and provided a glimpse into the future of tech.

> Finished chain.

[DEBUG]: [news_reporter] Task output: CES 2024, the Consumer Technology Association's annual trade show, took place in Las Vegas, showcasing the latest advancements in technology. The event featured a wide range of innovations, including AI-powered devices, self-driving cars, and cutting-edge gadgets. Major companies like Nvidia and Sony unveiled their latest products, while smaller startups showcased their unique creations. CES 2024 highlighted the rapid pace of technological progress and provided a glimpse into the future of tech.

CES 2024, the Consumer Technology Association's annual trade show, took place in Las Vegas, showcasing the latest advancements in technology. The event featured a wide range of innovations, including AI-powered devices, self-driving cars, and cutting-edge gadgets. Major companies like Nvidia and Sony unveiled their latest products, while smaller startups showcased their unique creations. CES 2024 highlighted the rapid pace of technological progress and provided a glimpse into the future of tech.

And when I used your prompt 'research new breakthroughs in AI llm agents only from 2024!', the code was:

researcher = Agent(
  role='Researcher',
  goal='You provide results on the basis of Facts and only Facts along with supported doc related urls,You go to the root cause and give the best possible outcomes',
  backstory="You're an ai researcher who researches on the field of  AI and have won multiple awards",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        BrowserTools.scrape_and_summarize_website,
        search_tool
        # SearchTools.search_internet,
        # SearchTools.search_news
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)
task1 = Task(description= 'research new breakthroughs in AI llm agents only from 2024!',agent = researcher)

response:

[DEBUG]: Working Agent: Researcher

[INFO]: Starting Task: research new breakthroughs in AI llm agents only from 2024!

> Entering new CrewAgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: duckduckgo_search
Action Input: breakthroughs in AI llm agents 2024Earlier this year, my colleague Charlie Warzel argued that people may be fooled by low-stakes AI images—the pope in a puffer coat, for example—but they tend to be more skeptical of highly ... While 2023 has witnessed significant breakthroughs in this field, the future of AI and LLMs holds even greater promise. Beyond the initial excitement, the year 2024 could mark a turning point, with the potential emergence of small language models (SLMs) as a game-changer. On Friday, Anthropic—the maker of ChatGPT competitor Claude —released a research paper about AI "sleeper agent" large language models (LLMs) that initially seem normal but can deceptively ... I believe we will look back at 2024 as the dawn of the "age of agents," the beginning of a fundamentally new direction in how we address needs through software and interact with technology.... AI agents based on multimodal large language models (LLMs) are expected to revolutionize human-computer interaction and offer more personalized assistant services across various domains like healthcare, education, manufacturing, and entertainment. Deploying LLM agents in 6G networks enables users to access previously expensive AI assistant services via mobile devices democratically, thereby ...Do I need to use a tool? No
Final Answer: In 2024, breakthroughs in AI LLMs are expected to include the emergence of small language models (SLMs) as game-changers, the development of AI "sleeper agent" large language models (LLMs), and the widespread use of AI agents based on multimodal large language models (LLMs) in various domains. These advancements are anticipated to revolutionize human-computer interaction and offer more personalized assistant services across various sectors.

> Finished chain.

[DEBUG]: [Researcher] Task output: In 2024, breakthroughs in AI LLMs are expected to include the emergence of small language models (SLMs) as game-changers, the development of AI "sleeper agent" large language models (LLMs), and the widespread use of AI agents based on multimodal large language models (LLMs) in various domains. These advancements are anticipated to revolutionize human-computer interaction and offer more personalized assistant services across various sectors.

In 2024, breakthroughs in AI LLMs are expected to include the emergence of small language models (SLMs) as game-changers, the development of AI "sleeper agent" large language models (LLMs), and the widespread use of AI agents based on multimodal large language models (LLMs) in various domains. These advancements are anticipated to revolutionize human-computer interaction and offer more personalized assistant services across various sectors.

I am not sure, but I believe the agent hallucinates and thinks it does not have internet access; I ran the query multiple times until I got the response I wanted.

FeelsDaumenMan commented 7 months ago

Hey! I want to try this out with Gemini too! What is the working code without any errors? Would appreciate an answer!

Sincerely,

Eddie

mindwellsolutions commented 7 months ago

@punitchauhan771 Interesting. Really appreciate all the help. I wonder if I'm setting up the duckduckgo_search function properly. In your DuckDuckGo response it says "In 2024, breakthroughs in AI LLMs are expected to" and only references 2023 sources, which still looks like the April 2023 knowledge base. Maybe it's specifically an issue with DuckDuckGo and the Gemini API, in which case I would need to get BrowserTools.xx and SearchTools.xx working instead of DuckDuckGo.

DuckDuckGo_Search Error: "I am sorry, but I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a detailed tech report for CES 2024 hosted in Las Vegas"

@punitchauhan771 Could you please share your entire .py code, including the imports? I'm also having trouble running BrowserTools.xx and SearchTools.xx; the script couldn't find "BrowserTools" or "SearchTools", so I must be setting up the imports and functions incorrectly: "NameError: name 'BrowserTools' is not defined", "NameError: name 'SearchTools' is not defined".

It would be amazing if you could share your entire .py file with support for 1) DuckDuckGoSearch 2) BrowserTools.scrape_and_summarize_website 3) SearchTools.search_internet 4) SearchTools.search_news.

Thanks in advance :)

Here is my Current Code For Reference:

import os
from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent, Task, Crew, Process

llm = ChatGoogleGenerativeAI(model="gemini-pro", verbose=True, temperature=0.6, google_api_key="<ENTER GEMINI API KEY>")

from langchain.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

researcher = Agent(
  role='Researcher',
  goal='You provide results on the basis of Facts and only Facts along with supported doc related urls,You go to the root cause and give the best possible outcomes',
  backstory="You're an ai researcher who researches on the field of  AI and have won multiple awards",
  verbose=True,
  allow_delegation=False,
  llm = llm,  #using google gemini pro API
  tools=[
        search_tool
        # BrowserTools.scrape_and_summarize_website,
        # SearchTools.search_internet
      ]
)

task1 = Task(
  description="""research new breakthroughs in AI llm agents only from 2024!""",
  agent=researcher
)

crew = Crew(
  agents=[researcher],
  tasks=[task1],
  verbose=2, 
  process=Process.sequential
)

result = crew.kickoff()

print("######################")
print(result)
punitchauhan771 commented 7 months ago

Hi @mindwellsolutions, Sure here is the code:

importing necessary modules

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import load_tools
from langchain.utilities import SerpAPIWrapper
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.cache import InMemoryCache
from langchain.tools import tool
from unstructured.partition.html import partition_html
from crewai import Agent, Task, Crew, Process
from langchain.callbacks import get_openai_callback
from langchain.tools import DuckDuckGoSearchRun
import json,os,datetime,requests

storing all the necessary keys

os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
os.environ["SERPAPI_API_KEY"] = SERP_API_KEY
os.environ['BROWSERLESS_API_KEY'] = BrowserLess_API_KEY
os.environ['SERPER_API_KEY'] = SERPER_API_KEY

configuring gemini LLM

llm = ChatGoogleGenerativeAI(model="gemini-pro",verbose = True,temperature = 0.1)

Custom tools

class BrowserTools():

  @tool("Scrape website content")
  def scrape_and_summarize_website(website):
    """Useful to scrape and summarize a website content"""
    url = f"https://chrome.browserless.io/content?token={os.environ['BROWSERLESS_API_KEY']}"
    payload = json.dumps({"url": website})
    headers = {'cache-control': 'no-cache', 'content-type': 'application/json'}
    response = requests.request("POST", url, headers=headers, data=payload)
    elements = partition_html(text=response.text)
    content = "\n\n".join([str(el) for el in elements])
    content = [content[i:i + 8000] for i in range(0, len(content), 8000)]
    summaries = []
    for chunk in content:
      agent = Agent(
          role='Principal Researcher',
          goal=
          'Do amazing researches and summaries based on the content you are working with',
          backstory=
          "You're a Principal Researcher at a big company and you need to do a research about a given topic.",
          allow_delegation=False,
          llm = llm)
      task = Task(
          agent=agent,
          description=
          f'Analyze and summarize the content below, make sure to include the most relevant information in the summary, return only the summary nothing else.\n\nCONTENT\n----------\n{chunk}'
      )
      summary = task.execute()
      summaries.append(summary)
    return "\n\n".join(summaries)

class SearchTools():
  @tool("Search the internet")
  def search_internet(query):
    """Useful to search the internet
    about a given topic and return relevant results"""
    top_result_to_return = 4
    url = "https://google.serper.dev/search"
    payload = json.dumps({"q": query})
    headers = {
        'X-API-KEY': os.environ['SERPER_API_KEY'],
        'content-type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    print(response.json())
    results = response.json()['organic']
    string = []
    for result in results[:top_result_to_return]:
      try:
        string.append('\n'.join([
            f"Title: {result['title']}", f"Link: {result['link']}",
            f"Snippet: {result['snippet']}", "\n-----------------"
        ]))
      except KeyError:
        continue

    return '\n'.join(string)

  @tool("Search news on the internet")
  def search_news(query):
    """Useful to search news about a company, stock or any other
    topic and return relevant results"""
    top_result_to_return = 4
    url = "https://google.serper.dev/news"
    payload = json.dumps({"q": query})
    headers = {
        'X-API-KEY': os.environ['SERPER_API_KEY'],
        'content-type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    results = response.json()['news']
    string = []
    for result in results[:top_result_to_return]:
      try:
        string.append('\n'.join([
            f"Title: {result['title']}", f"Link: {result['link']}",
            f"Snippet: {result['snippet']}", "\n-----------------"
        ]))
      except KeyError:
        continue

    return '\n'.join(string)

creating agents and crew

search_tool = DuckDuckGoSearchRun()
researcher = Agent(
  role='Researcher',
  goal='You provide results on the basis of Facts and only Facts along with supported doc related urls,You go to the root cause and give the best possible outcomes',
  backstory="You're an ai researcher who researches on the field of  AI and have won multiple awards",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        BrowserTools.scrape_and_summarize_website,
        search_tool
        # SearchTools.search_internet,
        # SearchTools.search_news
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)

news_reporter = Agent(
  role='news_reporter',
  goal=f'You provide the latest news as of available till {datetime.datetime.now()}',
  backstory="You're an professional news reporter who covers tech related news and are the best in the field",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        # BrowserTools.scrape_and_summarize_website,
        search_tool
        # SearchTools.search_internet,
      ]
  # llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
)
task1 = Task(description= 'research new breakthroughs in AI llm agents only from 2024!',agent = news_reporter)

crew = Crew(
  agents=[news_reporter],
  tasks=[task1],
  verbose=2, # Crew verbose more will let you know what tasks are being worked on, you can set it to 1 or 2 to different logging levels
  process=Process.sequential # Sequential process will have tasks executed one after the other and the outcome of the previous one is passed as extra content into this next.
)
crew.kickoff()

You can find more tools code in the crewAI examples codebase. P.S. I am using Google Colab for this.

punitchauhan771 commented 7 months ago

Hi @FeelsDaumenMan, you can use @janda-datascience's code 🙂.

FeelsDaumenMan commented 7 months ago

@punitchauhan771 Interesting. Really appreciate all the help. I wonder if I'm setting up the duckduckgo_search function properly. In your DuckDuckGo response, it says "In 2024, breakthroughs in AI LLMs are expected to" and only references 2023 sources, which seems like the April 2023 KB still. Maybe it's specifically an error with DuckDuckGo & Gemini API. Where I would need to get BrowserTools.xx and SearchTools.xx working instead of duckduck.

DuckDuckGo_Search Error: "I am sorry, but I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a detailed tech report for CES 2024 hosted in Las Vegas"

@punitchauhan771 Could you please share your entire .py code including the imports, please. I'm also having trouble running BrowserTools.xx and SearchTools.xx. The script couldn't find "BrowserTools" nor "SearchTools". I must be setting up the imports and functions incorrectly. "NameError: name 'BrowserTools' is not defined", "NameError: name 'SearchTools' is not defined"

Would be amazing if you could share your entire py file with support for 1) Duckduckgosearch 2) BrowserTools.scrape_and_summarize_website 3) SearchTools.search_internet 4) SearchTools.search_news

Thanks in advance :)

Here is my Current Code For Reference:

import os
from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent, Task, Crew, Process

llm = ChatGoogleGenerativeAI(model="gemini-pro",verbose = True,temperature = 0.6,google_api_key="<GEMINI API KEY - REDACTED>")

from langchain.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

researcher = Agent(
  role='Researcher',
  goal='You provide results on the basis of Facts and only Facts along with supported doc related urls,You go to the root cause and give the best possible outcomes',
  backstory="You're an ai researcher who researches on the field of  AI and have won multiple awards",
  verbose=True,
  allow_delegation=False,
  llm = llm,  #using google gemini pro API
  tools=[
        search_tool
        # BrowserTools.scrape_and_summarize_website,
        # SearchTools.search_internet
      ]
)

task1 = Task(
  description="""research new breakthroughs in AI llm agents only from 2024!""",
  agent=researcher
)

crew = Crew(
  agents=[researcher],
  tasks=[task1],
  verbose=2, 
  process=Process.sequential
)

result = crew.kickoff()

print("######################")
print(result)

You leaked your API key

mindwellsolutions commented 7 months ago

@FeelsDaumenMan Thanks. I removed it; can you delete your quote that also has the key?

@punitchauhan771 I ended up getting CrewAI + Gemini API to work using the news_reporter role settings you posted. Every couple of runs it says it has no internet, but it works most of the time. I guess if only we could avoid those "no internet" glitches that appear here and there. Really appreciate your help; it's working pretty well now.

Thought: Do I need to use a tool? Yes
Action: duckduckgo_search
Action Input: Health and Wellness products shown at ces 2024 hosted in las vegasJessica Dolcourt/CNET. The big diabetes device companies had a presence at CES this year, showcasing the latest improvements in technology for people with Type 1 diabetes. Last summer, Tandem ... At CES 2024, digital health devices moved beyond the wrist. From mental-health-focused smart mirrors to app-adjustable mattresses, our top 10 picks show that the wellness tech market is rife with ... The best of CES 2024: our favorite home, health, and wellness tech We went to Las Vegas to get hands on with the latest in consumer tech. Hundreds of new products are announced at the Consumer ... LAS VEGAS (AP) — CES 2024 kicks off in Las Vegas this week. The multi-day trade event put on by the Consumer Technology Association is set to feature swaths of the latest advances and gadgets across personal tech, transportation, health care, sustainability and more — with burgeoning uses of artificial intelligence almost everywhere you look.. The Associated Press will be keeping a running ... Alienware 32-Inch 4K Curved QD-OLED Gaming Monitor (AW3225QF) $ 1,199.99. Dell. OLED screen technology was everywhere this year at CES, particularly in gaming. OLED screens typically deliver ...Do I need to use a tool? No
Final Answer: The Consumer Electronics Show (CES) 2024, held in Las Vegas, Nevada, showcased a range of innovative health and wellness products. Here are some notable highlights:

1. Smart Mirrors: Several companies unveiled smart mirrors that provide personalized health and wellness insights. These mirrors use AI and sensors to analyze users' vital signs, skin health, and sleep patterns, offering tailored recommendations for improving overall well-being.

2. Mental Health Devices: CES 2024 featured various devices designed to promote mental health and well-being. These included wearable devices that track stress levels and provide relaxation techniques, as well as AI-powered apps that offer personalized therapy and mindfulness exercises.

3. Sleep Tech: Sleep-related products were prominent at CES 2024. Companies showcased smart mattresses that adjust firmness and temperature to optimize sleep quality, as well as wearable devices that monitor sleep patterns and provide personalized sleep coaching.

4. Fitness Trackers and Wearables: The latest fitness trackers and wearables at CES 2024 offered advanced features for tracking physical activity, heart rate, and overall fitness levels. Some devices also incorporated AI to provide personalized workout recommendations and monitor progress towards fitness goals.

5. Digital Health Platforms: Several companies showcased digital health platforms that integrate data from various health and wellness devices. These platforms provide users with a comprehensive view of their health and allow them to track progress, set goals, and receive personalized health advice.

6. AI-Powered Health Assistants: CES 2024 saw the introduction of AI-powered health assistants that use voice commands to control various health and wellness devices. These assistants can provide personalized health advice, remind users to take medications, and schedule appointments.

7. Telehealth Solutions: Telehealth solutions were also highlighted at CES 2024. Companies showcased virtual reality (VR) and augmented reality (AR) technologies that enable remote medical consultations and immersive healthcare experiences.

These health and wellness products showcased at CES 2024 demonstrate the growing integration of technology into healthcare and wellness, offering consumers innovative ways to manage their health and well-being.

> Finished chain.
punitchauhan771 commented 7 months ago

@mindwellsolutions, I tried this with chat-gpt-3.5 turbo also, It has the same problem, the agent hallucinates and thinks it cannot access the tools or search, I guess lower level models have this issue.

mindwellsolutions commented 7 months ago

@FeelsDaumenMan I ran into the same issue on Windows. Don't import DuckDuckGo from langchain; instead, install duckduckgo_search directly. Then remove "from langchain_community.tools import DuckDuckGoSearchRun" from your code.

How to best Install duckduckgo_search for CrewAI: pip install -U duckduckgo-search
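
If you want to call the duckduckgo_search package directly instead of going through the deprecated langchain import, a minimal sketch (untested, and assuming a recent duckduckgo-search version where DDGS().text(query, max_results=...) is available) would look something like this:

# Sketch: wrap the duckduckgo_search package directly as a tool the agent can call.
from duckduckgo_search import DDGS
from langchain.tools import tool

@tool("DuckDuckGoSearch")
def ddg_search(query: str) -> str:
    """Search the web with DuckDuckGo and return the top results as plain text."""
    results = DDGS().text(query, max_results=5)
    return "\n\n".join(f"{r['title']}\n{r['href']}\n{r['body']}" for r in results)

Then pass ddg_search in the agent's tools list the same way as search_tool.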

mindwellsolutions commented 7 months ago

@punitchauhan771 I thought of a simple solution to get around the glitch where, every couple of API requests, Gemini returns the "no internet connection" error even though duckduckgo_search completes properly.

If CrewAI, when using Gemini, prints "I apologize, but I do not have access to real-time information or the ability to browse the internet", then CrewAI should automatically reprocess the task (ideally resuming from the successful duckduckgo_search response it failed to use) until it no longer receives that error. This would automatically rerun the LLM's processing of DuckDuckGo's research until it works properly.
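
As a rough sketch of that retry idea (this is not a built-in CrewAI feature; the apology phrases and attempt limit below are just assumptions), you could wrap the kickoff in a loop:

# Sketch only: re-run the crew while the output still contains the "no internet" apology.
APOLOGY_MARKERS = (
    "do not have access to real-time information",
    "ability to browse the internet",
)

def kickoff_with_retry(crew, max_attempts=3):
    result = crew.kickoff()
    attempts = 1
    while attempts < max_attempts and any(marker in str(result) for marker in APOLOGY_MARKERS):
        result = crew.kickoff()  # rerun and hope the search output is actually used this time
        attempts += 1
    return result

# result = kickoff_with_retry(crew)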

FeelsDaumenMan commented 7 months ago

pip install -U duckduckgo-search

hmmm, weird. Tried it all. Could you send your exact code that worked on windows? Would appreciate it.

mindwellsolutions commented 7 months ago

@FeelsDaumenMan, I ran into the same error and should be able to trace back the steps of how I resolved it from those error details.

Here is a simple crewai python script that works for me with gemini. Although, every 3-4 times it says it can't find internet - just run it again.

import os
import datetime
from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent, Task, Crew, Process
from langchain.callbacks import get_openai_callback

llm = ChatGoogleGenerativeAI(model="gemini-pro",verbose = True,temperature = 0.6,google_api_key="<ENTER GEMINI KEY HERE>")

from langchain.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

researcher = Agent(
  role='news_reporter',
  goal=f'You provide the latest news as of available till {datetime.datetime.now()}',
  backstory="You're an professional news reporter who covers tech related news and are the best in that field",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        # BrowserTools.scrape_and_summarize_website,
        search_tool #duckduckgo search
        # SearchTools.search_internet,
      ]
)

task1 = Task(description= 'give me a detailed overview of the Health and Wellness products showcased at ces 2024 hosted in las vegas using only 2024. Cover 10 topics with 5 paragraphs of text for each topic',agent = researcher)

crew = Crew(
  agents=[researcher],
  tasks=[task1],
  verbose=2, 
  process=Process.sequential
)

# This counts the amount of Gemini API Requests completed by the script. This is helpful given the 60 API requests per minute limit from gemini pro free api.

with get_openai_callback() as cb:
  result = crew.kickoff()
  print(result)
  print(cb)

If it's still an issue, can you share the full errors you are getting where "from langchain_community.tools" says community tools is deprecated again? I can't find them.

FeelsDaumenMan commented 7 months ago


Sure! My current error when running it with your code is this:

(crewai) C:\ProgramData\anaconda3\envs\CrewAI\crewAI-0.1.32\src\crewai>python agent.py
C:\Users\Edgar\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\callbacks\__init__.py:37: LangChainDeprecationWarning: Importing this callback from langchain is deprecated. Importing it from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
from langchain_community.callbacks import get_openai_callback.
To install langchain-community run pip install -U langchain-community. warnings.warn(
C:\Users\Edgar\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\__init__.py:63: LangChainDeprecationWarning: Importing tools from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
from langchain_community.tools import DuckDuckGoSearchRun.
To install langchain-community run pip install -U langchain-community. warnings.warn(

[DEBUG]: Working Agent: news_reporter

[INFO]: Starting Task: give me a detailed overview of the Health and Wellness products showcased at ces 2024 hosted in las vegas using only 2024. Cover 10 topics with 5 paragraphs of text for each topic

Entering new CrewAgentExecutor chain...
Retrying langchain_google_genai.chat_models._chat_with_retry.._chat_with_retry in 2.0 seconds as it raised FailedPrecondition: 400 User location is not supported for the API use..
Retrying langchain_google_genai.chat_models._chat_with_retry.._chat_with_retry in 4.0 seconds as it raised FailedPrecondition: 400 User location is not supported for the API use..
Retrying langchain_google_genai.chat_models._chat_with_retry.._chat_with_retry in 8.0 seconds as it raised FailedPrecondition: 400 User location is not supported for the API use..

punitchauhan771 commented 7 months ago


Hi @FeelsDaumenMan, can you confirm that your country/region has Google gemini-pro access available? The agent's message states that you don't have access to gemini-pro: "Retrying langchain_google_genai.chat_models._chat_with_retry.._chat_with_retry in 8.0 seconds as it raised FailedPrecondition: 400 User location is not supported for the API use.."

Also, you can just ask Bard 'Does Bard currently run on Gemini Pro?' to confirm gemini-pro access.
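
If you prefer to check the API directly rather than asking Bard, a quick sanity test with the google-generativeai SDK (assuming it is installed via pip install google-generativeai and GOOGLE_API_KEY is set) should raise the same "400 User location is not supported" error when your region is blocked:

# Quick region/access check using the google-generativeai SDK.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
response = genai.GenerativeModel("gemini-pro").generate_content("Say hello")
print(response.text)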

FeelsDaumenMan commented 7 months ago


I currently live in Germany; we natively don't have access to Gemini. I thought that using a VPN would bypass it, which worked for Google AI Studio. Seems like I can't use it then. At least it was worth a try; seems like all the AI stuff I want to use is not available.

This sucks :/

still thanks for the help

mindwellsolutions commented 7 months ago

@FeelsDaumenMan Have you tried setting up a new free Google Colab account with a VPN connected to the U.S.A and running this code on Google Colab rather than your local computer? That might be an easy option to get this working.

Or setting up a VPS shell at a service like OVHcloud's US servers would be another option for $2/month with their Starter tier. Just make sure to connect to a U.S. VPN when setting up the account. https://us.ovhcloud.com/vps/

FeelsDaumenMan commented 7 months ago


Currently stuck at creating an American Google account when using a VPN; it seems I can't create one without confirming it with an American number, but running it over a U.S. Colab account should work. I'll keep trying. Hoping I get this to work somehow.

Appreciate all the help

Sincerely

eddie

FeelsDaumenMan commented 7 months ago


WOWWWWW! AMAZING TOOL! Seems like the recovery number can be from any country. Works wonderfully in Colab!

mindwellsolutions commented 6 months ago

@FeelsDaumenMan, if you run into the issue we were discussing, where the Gemini Pro free API says it cannot connect to the internet and doesn't return a response, you may want to look into using Google Colab + CrewAI + Ollama (with a 7B local LLM like Zephyr 7B). It will run on Google Colab for free and performs substantially better on CrewAI tasks, with higher consistency than the current state of Gemini Pro API + CrewAI + DuckDuckGo_Search.

Easy Guide to switching from Gemini API to Zephyr 7B Local LLM using Ollama on Google Colab:

CrewAI with Ollama ("zephyr"): Zephyr 7B works well. It's important to tell the research agent to use "search_tool" (DuckDuckGoSearch) when performing its research to ensure it browses online.

Install Ollama with the Zephyr 7B local LLM in Google Colab (Jupyter Python):
Step 1) !curl https://ollama.ai/install.sh
Step 2) !sh
Step 3) !ollama serve
Step 4) !ollama run zephyr
Step 5) After Ollama is running, run the Python script below

Full CrewAI .py code for Substituting Ollama ("zephyr") as local llm in place of Gemini API:

import os
import datetime
from langchain_community.llms import Ollama  # local Ollama LLM (e.g. zephyr)
from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent, Task, Crew, Process
from langchain.callbacks import get_openai_callback

llm = Ollama(model="zephyr")

from langchain.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

researcher = Agent(
  role='news_reporter',
  goal=f'You provide the latest news as of available till {datetime.datetime.now()}',
  backstory="You're an professional news reporter who covers tech related news and are the best in that field",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[
        # BrowserTools.scrape_and_summarize_website,
        search_tool #duckduckgo search
        # SearchTools.search_internet,
      ]
)

task1 = Task(description= 'give me a detailed overview of the Health and Wellness products showcased at ces 2024 hosted in las vegas using only 2024. Cover 10 topics with 5 paragraphs of text for each topic',agent = researcher)

crew = Crew(
  agents=[researcher],
  tasks=[task1],
  verbose=2, 
  process=Process.sequential
)

# This counts the amount of Gemini API Requests completed by the script. This is helpful given the 60 API requests per minute limit from gemini pro free api.

with get_openai_callback() as cb:
  result = crew.kickoff()
  print(result)
  print(cb)
mindwellsolutions commented 6 months ago

Sorry just realized I still had gemini in the code. I updated it with Ollama/zephyr (7b):

llm = Ollama(model="zephyr")

You can also run any other Ollama LLM model instead of "zephyr". Choose one by name from https://ollama.ai/library and replace zephyr in Step 4.
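
For example, assuming you have already pulled the model in Step 4 (e.g. !ollama run mistral), swapping it in is just:

# Sketch: any model name from the Ollama library works here once it has been pulled.
from langchain_community.llms import Ollama

llm = Ollama(model="mistral")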

ZihaoXingUP commented 6 months ago


How do you create the SearchTools and BrowserTools?

souvikcs commented 6 months ago

I can run:

genai.configure(api_key=os.getenv('GOOGLE_API_KEY'))
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("What is prompt engineering?")

But using the same API key with:

llm = ChatGoogleGenerativeAI(model="gemini-pro", verbose = True, temperature = 0.5, google_api_key="AIxxxx")
tool_search = DuckDuckGoSearchRun()

Define Agents

email_author = Agent(
    role='Professional Email Author', goal='Craft concise and engaging emails',
    backstory='Experienced in writing impactful marketing emails.',
    verbose=True, allow_delegation=False, llm=llm, tools=[tool_search]
)

marketing_strategist = Agent(
    role='Marketing Strategist', goal='Lead the team in creating effective cold emails',
    backstory='A seasoned Chief Marketing Officer with a keen eye for standout marketing content.',
    verbose=True, allow_delegation=True, llm=llm
)

content_specialist = Agent(
    role='Content Specialist', goal='Critique and refine email content',
    backstory='A professional copywriter with a wealth of experience in persuasive writing.',
    verbose=True, allow_delegation=False, llm=llm
)

Define Task

email_task = Task(
    description='''1. Generate two distinct variations of a cold email promoting a video editing solution.
      1. Evaluate the written emails for their effectiveness and engagement.
      2. Scrutinize the emails for grammatical correctness and clarity.
      3. Adjust the emails to align with best practices for cold outreach. Consider the feedback provided to the marketing_strategist.
      4. Revise the emails based on all feedback, creating two final versions.''',
    agent=marketing_strategist  # The Marketing Strategist is in charge and can delegate
)

Create a Single Crew

email_crew = Crew(
    agents=[email_author, marketing_strategist, content_specialist],
    tasks=[email_task],
    verbose=True,
    process=Process.sequential
)

Execution Flow

print("Crew: Working on Email Task")
emails_output = email_crew.kickoff()

I am getting the error langchain_google_genai.chat_models:Retrying langchain_google_genai.chat_models._chat_with_retry.._chat_with_retry in 2.0 seconds as it raised FailedPrecondition: 400 User location is not supported for the API use..

punitchauhan771 commented 6 months ago


Hi @souvikcs, can you refer to this thread? You might have a similar issue.

teamgroove commented 5 months ago

You could also just use this new proxy solution to do it :) (I didn't test it; please report success if you try it) https://github.com/PublicAffairs/openai-gemini

mindwellsolutions commented 5 months ago

I got the Google Gemini Pro API working substantially better than before thanks to joaomdmoura's post on another issue thread, which provides a new DuckDuckGoSearch function that seems to play better with the gemini pro API now.

However, the issue with the Gemini-Pro free API is that Google throttles usage and queries based on their server load. This means that any query, or string of queries, sent to gemini-pro can be denied with a non-explicit 503/504 error from Google, which in turn fails to process portions of the CrewAI sequence and creates issues. It may be this limitation of Google's free tier that dictates how inconsistently the gemini-pro API responds to queries.
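
If part of the problem is the 60-requests-per-minute cap on the free tier, it may also help to rate-limit the crew itself. A minimal sketch, assuming your installed crewAI version exposes the max_rpm parameter on Agent and Crew (the value 30 is just an arbitrary safety margin); llm, search and task1 are defined as in the full example below:

# Sketch: cap requests per minute so the free Gemini tier is not exceeded.
researcher = Agent(
  role='news_reporter',
  goal='You provide the latest tech news',
  backstory="You're a professional news reporter who covers tech related news",
  llm = llm,
  max_rpm = 30,   # stay well under the 60 requests/minute free limit
  tools=[search]
)

crew = Crew(
  agents=[researcher],
  tasks=[task1],
  max_rpm = 30,   # crew-level cap as a safety net
  process=Process.sequential
)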

How to use Gemini-Pro API with CrewAI & new DuckDuckGoSearch Function:

You will likely need to reinstall the newest version of crewai from scratch: pip uninstall crewai crewai_tools; pip install 'crewai[tools]'

Fix: the original DuckDuckGo definition that was replaced: search = DuckDuckGoSearchRun()

New, better-functioning code using the crewai_tools version of DuckDuckGoSearch:

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

Full Code: gemini-pro API with CrewAI (Basic Crew AI Code Example)

import os
import datetime
from langchain_google_genai import ChatGoogleGenerativeAI
from crewai import Agent, Task, Crew, Process
from langchain.callbacks import get_openai_callback

llm = ChatGoogleGenerativeAI(model="gemini-pro",convert_system_message_to_human=True,verbose = True,temperature = 0.6,google_api_key="<ENTER GOOGLE GEMINI API KEY HERE>")

from langchain.tools import DuckDuckGoSearchRun

from crewai_tools import tool

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

researcher = Agent(
  role='news_reporter',
  goal=f'You provide the latest news as of available till {datetime.datetime.now()}',
  backstory="You're an professional news reporter who covers tech related news and are the best in that field",
  verbose=True,
  allow_delegation=False,
  llm = llm,
  tools=[search]
)

task1 = Task(description= 'give me a detailed overview of the AI brands and products showcased at ces 2024 hosted in las vegas using only 2024 articles. Cover 10 topics with 5 paragraphs of text for each topic',agent = researcher,expected_output='A refined finalized version of the blog post in markdown format')

crew = Crew(
  agents=[researcher],
  tasks=[task1],
  verbose=2, 
  process=Process.sequential
)

# This counts the amount of Gemini API Requests completed by the script. This is helpful given the 60 API requests per minute limit from gemini pro free api.

with get_openai_callback() as cb:
  result = crew.kickoff()
  print(result)
  print(cb)
OmarAlsaqa commented 4 months ago

I am using this code:

import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import tool

from langchain_community.tools import DuckDuckGoSearchRun
from langchain.callbacks import get_openai_callback
from langchain.agents import load_tools

# pip install --upgrade --quiet  langchain-google-genai
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = ""
os.environ["LANGCHAIN_API_KEY"] = ""
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Human Feed-Back Gemini"

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    convert_system_message_to_human=True,
    verbose=True,
    temperature=0.1,
)

# Loading Tools
human_tools = load_tools(["human"], llm=llm)

# search_tool = DuckDuckGoSearchRun()
@tool('DuckDuckGoSearch')
def search_tool(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

# Define your agents with roles, goals, and tools
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory=(
        "You are a Senior Research Analyst at a leading tech think tank."
        "Your expertise lies in identifying emerging trends and technologies in AI and data science."
        "You have a knack for dissecting complex data and presenting actionable insights."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool] + human_tools,  # Passing human tools to the agent
    llm=llm,
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory=(
        "You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
        "With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
    ),
    verbose=True,
    allow_delegation=True,
    llm=llm,
)

# Create tasks for your agents
task1 = Task(
    description=(
        "Conduct a comprehensive analysis of the latest advancements in AI in 2024."
        "Identify key trends, breakthrough technologies, and potential industry impacts."
        "Compile your findings in a detailed report."
        "Make sure to check with a human if the draft is good before finalizing your answer."
    ),
    expected_output="A comprehensive full report on the latest AI advancements in 2024, leave nothing out",
    agent=researcher,
)

task2 = Task(
    description=(
        "Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
        "Your post should be informative yet accessible, catering to a tech-savvy audience."
        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output="A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024",
    agent=writer,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2,
    process=Process.sequential,
)

# Get your crew to work!
with get_openai_callback() as cb:
    result = crew.kickoff()
    print(result)
    print(cb)

And I get this error. The "No good DuckDuckGo Search Result was found" message is always showing (I ran it more than 20 times).

 [DEBUG]: == Working Agent: Senior Research Analyst
 [INFO]: == Starting Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.Identify key trends, breakthrough technologies, and potential industry impacts.Compile your findings in a detailed report.Make sure to check with a human if the draft is good before finalizing your answer.

> Entering new CrewAgentExecutor chain...
Action: human
Action Input: {} 

I encountered an error while trying to use the tool. This was the error: HumanInputRun._run() missing 1 required positional argument: 'query'.
 Tool human accepts these inputs: You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.

Thought: I should search for the latest advancements in AI in 2024
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2024"} 

No good DuckDuckGo Search Result was found

Thought: I should search for the latest advancements in AI in 2023
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2023"} 

No good DuckDuckGo Search Result was found

Thought: I should search for the latest advancements in AI in 2022
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2022"} 

No good DuckDuckGo Search Result was found

Thought: I should ask a human for help
Action: human
Action Input: {} 

I encountered an error while trying to use the tool. This was the error: HumanInputRun._run() missing 1 required positional argument: 'query'.
 Tool human accepts these inputs: You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.

Thought: I should search for the latest advancements in AI in 2023
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2023"} 

No good DuckDuckGo Search Result was found

Thought: I should search for the latest advancements in AI in 2022
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2022"} 

I tried reusing the same input, I must stop using this action input. I'll try something else instead.

Thought: I should search for the latest advancements in AI in 2022
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2021"} 

No good DuckDuckGo Search Result was found

Thought: I should ask a human for help
Action: human
Action Input: {} 

I encountered an error while trying to use the tool. This was the error: HumanInputRun._run() missing 1 required positional argument: 'query'.
 Tool human accepts these inputs: You can ask a human for guidance when you think you got stuck or you are not sure what to do next. The input should be a question for the human.

Thought: I should search for the latest advancements in AI in 2021
Action: DuckDuckGoSearch
Action Input: {"search_query": "latest advancements in AI in 2021"} 

I tried reusing the same input, I must stop using this action input. I'll try something else instead.

Thought: I should search for the latest advancements in AI in 2021
Action: DuckDuckGoSearch
Action Input: {"search_query": "AI advancements in 2021"} 

No good DuckDuckGo Search Result was found

without human tools, I got this error:

 [DEBUG]: == Working Agent: Senior Research Analyst
 [INFO]: == Starting Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.Identify key trends, breakthrough technologies, and potential industry impacts.Compile your findings in a detailed report.Make sure to check with a human if the draft is good before finalizing your answer.

> Entering new CrewAgentExecutor chain...
Action: DuckDuckGoSearch
Action Input: {"search_query": "AI advancements in 2024"} 

No good DuckDuckGo Search Result was found

Thought: 
Action: DuckDuckGoSearch
Action Input: {"search_query": "AI advancements in 2023"} 

No good DuckDuckGo Search Result was found

Thought: I now know the final answer
Final Answer: I am sorry, I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a comprehensive analysis of the latest advancements in AI in 2024.

> Finished chain.
 [DEBUG]: == [Senior Research Analyst] Task output: I am sorry, I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a comprehensive analysis of the latest advancements in AI in 2024.

 [DEBUG]: == Working Agent: Tech Content Strategist
 [INFO]: == Starting Task: Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements.Your post should be informative yet accessible, catering to a tech-savvy audience.Aim for a narrative that captures the essence of these breakthroughs and their implications for the future.

> Entering new CrewAgentExecutor chain...
Thought: I need to ask my coworker about the latest AI advancements in 2024
Action: Ask question to co-worker
Action Input: {
    "coworker": "Senior Research Analyst",
    "question": "What are the most significant AI advancements in 2024?",
    "context": "I am writing a blog post about the latest AI advancements in 2024 and I need to know what the most significant ones are."
}

> Entering new CrewAgentExecutor chain...

> Entering new CrewAgentExecutor chain...

> Entering new CrewAgentExecutor chain...

> Entering new CrewAgentExecutor chain...

> Entering new CrewAgentExecutor chain...

> Entering new CrewAgentExecutor chain...

I encountered an error while trying to use the tool. This was the error: list index out of range.
 Tool Ask question to co-worker accepts these inputs: Ask question to co-worker(coworker: str, question: str, context: str) - Ask a specific question to one of the following co-workers: ['Senior Research Analyst']
The input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.

Thought: I do not have access to real-time information, so I cannot provide a comprehensive analysis of the latest advancements in AI in 2024.
Final Answer: I apologize, but I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a comprehensive analysis of the latest advancements in AI in 2024.

> Finished chain.
 [DEBUG]: == [Tech Content Strategist] Task output: I apologize, but I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a comprehensive analysis of the latest advancements in AI in 2024.

I apologize, but I do not have access to real-time information and my knowledge cutoff is April 2023. Therefore, I cannot provide you with a comprehensive analysis of the latest advancements in AI in 2024.
Tokens Used: 0
        Prompt Tokens: 0
        Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0

What is the reason for the DuckDuckGoSearch error "No good DuckDuckGo Search Result was found"? How do I get it to work? I tried both (with the same error):

search_tool = DuckDuckGoSearchRun()

and

@tool('DuckDuckGoSearch')
def search_tool(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

Also, Is there a way to use human tools with Gemini?

Thanks.

punitchauhan771 commented 4 months ago

llm = ChatGoogleGenerativeAI( model="gemini-pro", convert_system_message_to_human=True, verbose=True, temperature=0.1, )

Hi @OmarAlsaqa

Did you try increasing the temperature? Gemini acts weird with low temperature in CrewAI. Can you try it again while setting the temperature to 0.6?

OmarAlsaqa commented 4 months ago

Thanks @punitchauhan771 for your reply. I tried different temperatures with the same problem. I got the same issue with GPT-4 today; maybe I am doing something wrong or there is an issue with the DuckDuckGo API. I used it a while ago and it was working with GPT-4 and crewAI.

OmarAlsaqa commented 4 months ago

It worked after updating the duckduckgo-search package from 5.1.0 to the latest 5.2.2 using:

pip install --upgrade --quiet  duckduckgo-search

Also, human tools are working with it, with the same code.
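
In case anyone hits the same error, a quick way to confirm that the upgraded version is the one Python actually imports (a small sketch, nothing crewAI-specific):

from importlib.metadata import version

# Should print 5.2.2 or newer after the upgrade
print(version("duckduckgo-search"))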

OmarAlsaqa commented 4 months ago

Using human_tools = load_tools(["human"]) does not always work. Instead, I tried HumanInputRun 10 times and it behaved as expected each time.

The full code to use Gemini with human input:

import os

from crewai import Agent, Task, Crew, Process
from crewai_tools import tool

from langchain_community.tools import DuckDuckGoSearchRun, HumanInputRun
from langchain_community.callbacks import get_openai_callback
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = ""
os.environ["LANGCHAIN_API_KEY"] = ""
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "Human Feed-Back Gemini"

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    convert_system_message_to_human=True,
    verbose=True,
    temperature=0.8,
)

# Loading Tools
@tool('HumanInputTool')
def human_input_tool(query: str):
    """Human Input as a tool"""
    return HumanInputRun().run(query)

@tool('DuckDuckGoSearch')
def search_tool(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

# Define your agents with roles, goals, and tools
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory=(
        "You are a Senior Research Analyst at a leading tech think tank."
        "Your expertise lies in identifying emerging trends and technologies in AI and data science."
        "You have a knack for dissecting complex data and presenting actionable insights."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool, human_input_tool],  # Passing the search and human-input tools to the agent
    llm=llm,
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory=(
        "You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
        "With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
    ),
    verbose=True,
    allow_delegation=True,
    llm=llm,
)

# Create tasks for your agents
task1 = Task(
    description=(
        "Conduct a comprehensive analysis of the latest advancements in AI in 2024."
        "Identify key trends, breakthrough technologies, and potential industry impacts."
        "Compile your findings in a detailed report."
        "Make sure to check with a human if the draft is good before finalizing your answer."
    ),
    expected_output="A comprehensive full report on the latest AI advancements in 2024, leave nothing out",
    agent=researcher,
)

task2 = Task(
    description=(
        "Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
        "Your post should be informative yet accessible, catering to a tech-savvy audience."
        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output="A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024",
    agent=writer,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2,
    process=Process.sequential,
)

# Get your crew to work!
with get_openai_callback() as cb:
    result = crew.kickoff()
    print(result)
    print(cb)
punitchauhan771 commented 4 months ago

Using human_tools = load_tools(["human"]) does not always work. Instead, I tried HumanInputRun 10 times and it behaved as expected each time.

The full code to use Gemini with human input: (full code quoted from the previous comment)

Hi, just a small doubt: how do you know the response you are getting from result is the latest report (i.e. based on the DuckDuckGo search results) and not Gemini's in-memory output (because Gemini hallucinates most of the time)?

ref code:


import datetime  # needed for datetime.datetime.now() in the researcher backstory below

# Define your agents with roles, goals, and tools
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory=(
        "You are a Senior Research Analyst at a leading tech think tank."
        "Your expertise lies in identifying emerging trends and technologies in AI and data science."
        "You have a knack for dissecting complex data and presenting actionable insights."
        f"You have latest knowledge available as of {datetime.datetime.now()}"
    ),
    verbose=True,
    max_iter = 10,
    allow_delegation=True,
    tools=[search_tool, web_site_search],  # search_tool and web_site_search are assumed to be defined elsewhere (e.g. as @tool functions)
    llm=llm,
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory=(
        "You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
        "With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
    ),
    max_iter = 10,
    verbose=True,
    allow_delegation=True,
    llm=llm,
)

# Create tasks for your agents
task1 = Task(
    description=('''
        Conduct a comprehensive analysis of the latest advancements in AI in 2024.
        Identify key trends, breakthrough technologies, and potential industry impacts.
        Compile your findings in a detailed report.

        NOTE:
        - Don't revisit the same link
        - Provide URLs as references
        - Double check your references to make sure the links are working
        - Make sure to understand and include only the necessary parts in your research paper

        '''
    ),
    expected_output='''
    A comprehensive full report on the latest AI advancements in 2024, leave nothing out.
    NOTE:
    Word Limit 250

     -------FORMAT-----------
        {TOPIC}

        {OUTPUT}

        {REFERENCES} '''
    ,
    agent=researcher,
)

task2 = Task(
    description=(
        '''
        Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements.
        Your post should be informative yet accessible, catering to a tech-savvy audience.
        Aim for a narrative that captures the essence of these breakthroughs and their implications for the future.

        NOTE:
        -  Use this as a reference for your research: {researcher's report}
        -  Provide URLs as references

        '''
    ),
    expected_output='''
    A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024
    NOTE:
    - WORD LIMIT 300
    - PARAGRAPH 3+
    - Write Style Technical Content Writer

     -------FORMAT-----------
        {TOPIC}

        {OUTPUT}

        {REFERENCES}
    '''
    ,
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2,
    process=Process.sequential,
)
result = crew.kickoff()
OmarAlsaqa commented 4 months ago

To be honest, I am not sure. I can see (in the log) that the action is DuckDuckGoSearch and the output comes right after it.

[DEBUG]: == Working Agent: Senior Research Analyst
 [INFO]: == Starting Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.Identify key trends, breakthrough technologies, and potential industry impacts.Compile your findings in a detailed report.Make sure to check with a human if the draft is good before finalizing your answer.

> Entering new CrewAgentExecutor chain...
Action: DuckDuckGoSearch
Action Input: {
  'search_query': 'latest advancements in AI in 2024'
} 

In 2024, generative AI might actually become useful for the regular, non-tech person, and we are going to see more people tinkering with a million little AI models. State-of-the-art AI models ... Here are some important current AI trends to look out for in the coming year. Reality check: more realistic expectations. Multimodal AI. Small (er) language models and open source advancements. GPU shortages and cloud costs. Model optimization is getting more accessible. Customized local models and data pipelines. Adobe Stock. It's been a year since OpenAI released ChatGPT, opening the door to seamlessly weave AI into the fabric of our daily lives, propelling industries into the future and even prompting ... This year's trends reflect a deepening sophistication and caution in AI development and deployment strategies, with an eye to ethics, safety and the evolving regulatory landscape. Here are the top 10 AI and machine learning trends to prepare for in 2024. 1. Multimodal AI. Multimodal AI goes beyond traditional single-mode data processing to ... Here's how it works. AI in 2024 — the biggest new products and advancements on the way. While 2023 was the year of AI, 2024 will be the year we use it. We have just come to the end of a year ...

Thought: The draft might be done, I need to check if it's good with a human expert
Action: HumanInputTool
Action Input: {
  'query': 'Could you take a look at this draft and tell me if it's good?'
}Thought: 
Action: HumanInputTool
Action Input: {
  'query': 'Could you please check if this draft is good?'
}

Could you please check if this draft is good?
Yes

Also, on LangSmith there are no tokens counted next to duckduckgo_search; tokens are only counted under CrewAgentExecutor (and inside it you can see ChatGoogleGenerativeAI). For duckduckgo_search there is no Google call inside it (see the attached screenshots).

Maybe someone can confirm for both of us.
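
One way I could think of to confirm it (a rough sketch, not a crewAI feature, and I haven't tested it extensively) is to wrap the search tool so every raw result is also appended to a local log file; the final report can then be compared against what the tool actually returned:

import json
from datetime import datetime

from crewai_tools import tool
from langchain_community.tools import DuckDuckGoSearchRun

@tool('DuckDuckGoSearch')
def search_tool(search_query: str):
    """Search the web for information on a given topic and log the raw results"""
    raw = DuckDuckGoSearchRun().run(search_query)
    # Append every query/result pair to search_log.jsonl so the final answer
    # can later be compared against what the search actually returned
    with open("search_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "time": datetime.now().isoformat(),
            "query": search_query,
            "results": raw,
        }) + "\n")
    return raw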

punitchauhan771 commented 4 months ago

Can you ask your agent to add reference URLs for the research pages it went through or used (the DuckDuckGo result links) and check whether they are valid links or not? I did that in my code, as shown above, and it gave me wrong URLs as references; the output was also based on Gemini's internal knowledge, since according to the report the latest LLM model was GPT-4 and it didn't mention anything about Claude, DBRX, etc.

OmarAlsaqa commented 4 months ago

I replaced DuckDuckGoSearchRun with DuckDuckGoSearchResults, which returns the source links, as described in the LangChain DuckDuckGo docs:

@tool('DuckDuckGoSearch')
def search_tool(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

to be:

@tool('DuckDuckGoSearch')
def search_tool(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchResults().run(search_query)
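
To see what the new tool returns on its own, outside the crew, a minimal standalone check (assuming langchain_community is installed and the network is reachable):

from langchain_community.tools import DuckDuckGoSearchResults

# Prints a string containing "snippet: ..., title: ..., link: ..." for each hit
print(DuckDuckGoSearchResults().run("AI advancements in 2024"))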

Here is a screenshot of the output:

the log as text:

[DEBUG]: == Working Agent: Senior Research Analyst
 [INFO]: == Starting Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.Identify key trends, breakthrough technologies, and potential industry impacts.Compile your findings in a detailed report.Make sure to check with a human if the draft is good before finalizing your answer.

> Entering new CrewAgentExecutor chain...
Action: DuckDuckGoSearch
Action Input: {"search_query": "AI advancements in 2024"} 

[snippet: In 2024, generative AI might actually become useful for the regular, non-tech person, and we are going to see more people tinkering with a million little AI models. State-of-the-art AI models ..., title: What's next for AI in 2024 | MIT Technology Review, link: https://www.technologyreview.com/2024/01/04/1086046/whats-next-for-ai-in-2024/], [snippet: Here are some important current AI trends to look out for in the coming year. Reality check: more realistic expectations. Multimodal AI. Small (er) language models and open source advancements. GPU shortages and cloud costs. Model optimization is getting more accessible. Customized local models and data pipelines., title: The most important AI trends in 2024 - IBM Blog, link: https://www.ibm.com/blog/artificial-intelligence-trends/], [snippet: At an event in San Francisco in November, Sam Altman, the chief executive of the artificial intelligence company OpenAI, was asked what surprises the field would bring in 2024. Online chatbots ..., title: How 2024 Will Be A.I.'s 'Leap Forward' - The New York Times, link: https://www.nytimes.com/2024/01/08/technology/ai-robots-chatbots-2024.html], [snippet: As 2024 unfolds, we will see monumental leaps in AI capabilities, especially in areas demanding complex problem-solving fueled by quantum advancements. 4. AI Legislation, title: The 5 Biggest Artificial Intelligence Trends For 2024 - Forbes, link: https://www.forbes.com/sites/bernardmarr/2023/11/01/the-top-5-artificial-intelligence-trends-for-2024/]

Thought: This contains lots of good info but I am not sure if I got all the latest and detailed info I need yet.
Action: HumanInputTool
Action Input: {"query": "Can you give me the latest advancements in AI for 2024? Include as many details as possible."}

Can you give me the latest advancements in AI for 2024? Include as many details as possible.

As you can see, it retrieved the source URLs.