Closed: elchananvol closed this issue 1 month ago
Sup @elchananvol
For the detailed report, have a look at prompts.py & chase the relevant function up the stack
For the multi_agents feature, try running it via Docker, like so:
Add a .env file to the root folder with these values (see the sketch below):
OPENAI_API_KEY= and TAVILY_API_KEY=
Get the keys from here: https://app.tavily.com/sign-in and https://platform.openai.com/api-keys
Then restart Docker with: docker compose up --build
Visit localhost:3000 - both the detailed report & the multi agents report are available for you there
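A minimal sketch of that .env file (the values are placeholders; paste in your own keys):

# .env in the project root
OPENAI_API_KEY=your-openai-key
TAVILY_API_KEY=your-tavily-key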
May the force be with you 🙏
I'm not sure what you mean. Are you saying that "detailed_report" isn't developed yet, or has a different flow than the other report types? Edit: I see in detailed_report.py that it uses subtopic_report. Are they actually two different names for the same thing? @ElishaKay
@ElishaKay I am getting the exact same result regardless of whether I give report_type = "detailed_report" or report_type = "research_report".
I am using it in the following way:
from gpt_researcher import GPTResearcher
import os, asyncio, nest_asyncio
from dotenv import load_dotenv

load_dotenv()
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
TAVILY_API_KEY = os.getenv('TAVILY_API_KEY')

nest_asyncio.apply()

async def get_report(prompt: str, report_type: str) -> str:
    researcher = GPTResearcher(query=prompt, report_type=report_type)
    await researcher.conduct_research()
    report = await researcher.write_report()
    return report

if __name__ == "__main__":
    report_type = "detailed_report"  # research_report
    prompt = "What is the outlook on Tesla stock?"
    report = asyncio.run(get_report(prompt=prompt, report_type=report_type))
    print(report)
But when I use the frontend version, I get different results, of course: a 3-page document with the summary report option (which is "research_report" in the API), and an 11-page document with the detailed report option.
Is this not the correct way to use the detailed_report option? If so, what's the correct way?
Sup @elchananvol & @PrashantSaikia,
I'm working on giving the community access to an AI Dev Team who will hopefully be more competent than I am at answering these types of questions.
If you'd like their answer, please feel free to clone the branch and set your GitHub access token as an env variable.
The AI dev team will be able to take these questions as input and provide a meaningful response based on the branch files.
In short: the backend research flows were refactored several times as we learned how best to tame the awesome power of the LLM.
To see the logic triggered by the frontend, look at the backend/websocket_manager.py file & follow the functions down the chain.
In general, the multi_agents flow proved to be a favorite, so it got higher priority; for long reports, that's the one to go with.
As for why the pip package triggers the same flow for different inputs, the AI dev team will investigate when they're put to work 🤠
@elchananvol detailed report should now work. You can check it out via CLI here: https://docs.gptr.dev/docs/gpt-researcher/getting-started/cli
or here: https://docs.gptr.dev/docs/examples/detailed_report
@assafelovic, I'm currently using gpt_researcher via pip install, so I don't have access to the detailed report functionality yet. For example, I can't use the following code (same code as cli.py) because I'm unable to import DetailedReport. Thanks!
from gpt_researcher import GPTResearcher, DetailedReport

async def get_report(query: str, report_type: str, tone) -> str:
    if report_type == 'detailed_report':
        detailed_report = DetailedReport(
            query=query,
            report_type="research_report",
            report_source="web_search",
        )
        report = await detailed_report.run()
    else:
        researcher = GPTResearcher(query, report_type, tone)
        research_result = await researcher.conduct_research()
        report = await researcher.write_report()
    return report
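In case it helps, here's a minimal sketch of how I'd expect to invoke that helper, mirroring the asyncio.run pattern from my earlier snippet (the query string and tone=None are just placeholders):

import asyncio

if __name__ == "__main__":
    report = asyncio.run(get_report(
        query="What is the outlook on Tesla stock?",
        report_type="detailed_report",
        tone=None,  # placeholder; substitute a real tone value if needed
    ))
    print(report)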
I'm running the following:
Btw, the same issue occurs with 'multi_agents,' even though I see the beginning of its implementation in websocket_manager.py