joaomdmoura / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
16.95k stars 2.29k forks

FileReadTool ERROR #577

Open nj159 opened 2 months ago

nj159 commented 2 months ago

My code:

```python
# -*- coding: utf-8 -*-
import os

from crewai import Agent, Task, Crew
from crewai_tools import FileReadTool
from langchain_openai import ChatOpenAI

# OpenAI relay endpoint for gpt-3.5
os.environ["http_proxy"] = "http://localhost:7890"
os.environ["https_proxy"] = "http://localhost:7890"
os.environ["OPENAI_API_BASE"] = "https://api.xty.app/v1"
os.environ["OPENAI_API_KEY"] = "sk-XXXXXXXX"

gpt35 = ChatOpenAI(
    temperature=0.7,
    model_name="gpt-3.5-turbo",
)
file_read_tool = FileReadTool(file_path='/opt/data/private/crewai/a.txt')

agent = Agent(
    role='File Analyzer',
    goal='Read and analyze the content of a given text file',
    backstory='A skilled data analyst capable of extracting and interpreting information from text files.',
    tools=[file_read_tool],
    llm=gpt35,
)

task = Task(
    description='Read the file "a.txt" and provide an analysis of its content.',
    expected_output='A detailed analysis of the text contained in "a.txt".',
    tools=[file_read_tool],
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```

Why does running my code produce a bunch of messy answers, as if the content of a.txt was never read? Is there a problem with my code?
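One quick check before suspecting the model: read the file with plain Python to confirm the path and contents are what you expect. A minimal sketch, using a temporary file as a stand-in for the real path:

```python
import pathlib
import tempfile

# Sanity check: read the file directly with plain Python before involving
# the agent. Here a temporary file stands in for /opt/data/private/crewai/a.txt.
demo = pathlib.Path(tempfile.mkdtemp()) / "a.txt"
demo.write_text("hello from a.txt", encoding="utf-8")

text = demo.read_text(encoding="utf-8")
print(len(text), text[:200])
```

If this already fails or prints something unexpected (wrong encoding, empty file, permission error), the problem is upstream of the agent.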

deadlious commented 2 months ago

gpt-3.5 is worse than many local models. Additionally, if the file is too large it will not fit in the model's context window, and once you go beyond that limit the model's completions become extremely messy. I would recommend gpt-4-turbo, llama3-70b-8192 (via "https://api.groq.com/openai/v1"), or a local Mixtral 8x7B if you can run it.
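Since Groq exposes an OpenAI-compatible endpoint, switching the snippet above to llama3-70b-8192 is mostly a configuration change. A sketch of the keyword arguments involved (the model name and base URL come from the comment above; `GROQ_API_KEY` is a placeholder you must supply yourself):

```python
import os

# Assumed configuration for an OpenAI-compatible client pointed at Groq.
# Pass these as keyword arguments, e.g. ChatOpenAI(**groq_config),
# in place of the gpt-3.5 setup in the original snippet.
groq_config = {
    "model_name": "llama3-70b-8192",
    "openai_api_base": "https://api.groq.com/openai/v1",
    "openai_api_key": os.environ.get("GROQ_API_KEY", ""),
    "temperature": 0.2,
}
```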

nj159 commented 1 month ago

> gpt3.5 is worse than many local models. Additionally, if the file is too large, the context of the model would not be enough. Once you go beyond the limit, the completions of the model become extremely messy. I would recommend using gpt-4-turbo or llama3-70b-8192 ("https://api.groq.com/openai/v1") or a local mixtral 8x7b if you can run it.

Thanks for your help. I used llama3-70b, but my run produced only this:

```
The content of the file "a.txt" is:

[Insert content of the file "a.txt" here]

I have successfully read the content of the file "a.txt" and provided it as my final answer.
```

I don't know why. Please help me.

deadlious commented 1 month ago

> gpt3.5 is worse than many local models. Additionally, if the file is too large, the context of the model would not be enough. Once you go beyond the limit, the completions of the model become extremely messy. I would recommend using gpt-4-turbo or llama3-70b-8192 ("https://api.groq.com/openai/v1") or a local mixtral 8x7b if you can run it.
>
> thanks for your help, I used llama3-70b, but my running results are as follows: The content of the file "a.txt" is:
>
> [Insert content of the file "a.txt" here]
>
> I have successfully read the content of the file "a.txt" and provided it as my final answer.
>
> I don't know why? please help me

This happens to me occasionally, too. I work around most of the limitations by building my own tools for operating on files and by adjusting the prompts. Remember that these models are not AI but statistics-based text generators, so you have to steer them in an appropriate way to get meaningful results. Operating on files is, at this point, extremely difficult.
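The "build your own file tool" idea above can be sketched as a reader that truncates content to a fixed character budget, so a large file cannot overflow the model's context window. `MAX_CHARS` is an assumed budget for illustration, not a crewAI setting:

```python
import pathlib
import tempfile

# Assumed character budget; tune it to your model's context window.
MAX_CHARS = 8000

def read_file_truncated(path: str, max_chars: int = MAX_CHARS) -> str:
    """Return at most max_chars of the file, marking any cut explicitly."""
    text = pathlib.Path(path).read_text(encoding="utf-8", errors="replace")
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n[... truncated ...]"

# Demo on an oversized temporary file.
demo = pathlib.Path(tempfile.mkdtemp()) / "big.txt"
demo.write_text("x" * 10_000, encoding="utf-8")
out = read_file_truncated(str(demo))
print(len(out))
```

Wrapping a function like this as a custom crewAI tool (for example with the `tool` decorator, if your crewai_tools version provides it) lets the agent call it in place of FileReadTool.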