Open codebrain001 opened 3 days ago
Why not use a second task? An agent can have multiple tasks assigned; for example:
```python
from typing import List

from crewai import Agent, Task, Crew
from dotenv import load_dotenv
from langchain_groq import ChatGroq
from pydantic import BaseModel, Field

load_dotenv()

llm = ChatGroq(
    temperature=1,
    model="llama3-70b-8192",
    # api_key=""  # Optional if not set as an environment variable
)

# Define a Pydantic model for the structured output
class AITrendsReport(BaseModel):
    top_trends: List[str] = Field(description="List of top AI trends")
    impact_areas: List[str] = Field(description="Areas most impacted by AI")
    future_prediction: str = Field(description="Prediction for AI in the next 5 years")

# Create an agent
ai_analyst = Agent(
    role='AI Trend Analyst',
    goal='Analyze and report on AI trends and their impact',
    backstory='You are an experienced AI analyst with deep knowledge of the field.',
    llm=llm
)

# Task 1: Save output as a plain text file
task1 = Task(
    description='Write a brief history of AI development',
    expected_output='A chronological summary of key AI milestones',
    agent=ai_analyst,
    output_file='ai_history.txt'  # Saved as a plain text file
)

# Task 2: Save output as structured JSON via the Pydantic model
task2 = Task(
    description='Analyze current AI trends and their potential impact',
    expected_output='A structured report on current AI trends and their implications',
    agent=ai_analyst,
    output_pydantic=AITrendsReport,  # Use the Pydantic model for structured output
    output_file='ai_trends_report.json'  # Saved as a JSON file
)

# Create a crew with these tasks
crew = Crew(
    agents=[ai_analyst],
    tasks=[task1, task2]
)

# Run the crew
result = crew.kickoff()
```
This will output a `.txt` file and a `.json` file based on the Pydantic model.
I have encountered a limitation in the current implementation of agentic tasks: each task can only produce a single output file. In certain scenarios, an agent might need to generate multiple files from one task, which is not currently supported.
Consider a scenario where an agent is tasked with processing a dataset and generating both a summary report and a Pydantic model of the data from the same task.
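One possible workaround, assuming CrewAI's `Task` accepts a `callback` parameter invoked with the task's output, is to write the extra files yourself from a single callback. This is a sketch, not CrewAI's documented multi-file API; the attribute names `raw` and `json_dict` on the task output, and the output filenames, are illustrative assumptions:

```python
import json

def save_both_files(task_output):
    """Write a plain-text report and a structured JSON file from one task output.

    Assumes `task_output` exposes `.raw` (the plain-text result) and
    `.json_dict` (a dict of the structured fields) -- illustrative names.
    """
    # Plain-text summary report
    with open('summary_report.txt', 'w') as f:
        f.write(task_output.raw)
    # Structured data, serialized as JSON
    with open('data_model.json', 'w') as f:
        json.dump(task_output.json_dict, f, indent=2)

# The callback would then be attached to a single task, e.g.:
# task = Task(..., output_pydantic=AITrendsReport, callback=save_both_files)
```

This keeps one task but still emits two files; a first-class `output_files` list on `Task` would of course be cleaner.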