assafelovic / gpt-researcher

GPT-based autonomous agent that performs comprehensive online research on any given topic
https://gptr.dev
MIT License

document the meaning/impact of configuration options on report quality #522

Open barsuna opened 1 month ago

barsuna commented 1 month ago

Having done some testing, I wonder how one can influence the quality of the report via configuration, and what the best practices are, if any.

I.e., what is the impact of the various knobs available in the config?
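For concreteness, here is roughly what I mean by knobs. The variable names below are my reading of the repo's default config; they may differ between versions, so treat this as an illustrative sketch rather than documentation:

```python
import os

# Knobs I have been experimenting with (names from the default config,
# as far as I can tell; set these before the researcher starts up):
os.environ["FAST_LLM_MODEL"] = "gpt-3.5-turbo"    # model for cheap steps (e.g. summarizing sources)
os.environ["SMART_LLM_MODEL"] = "gpt-4o"          # model for writing the final report
os.environ["TEMPERATURE"] = "0.4"                 # lower tends to give more focused prose
os.environ["MAX_SEARCH_RESULTS_PER_QUERY"] = "5"  # how many results each sub-query pulls in
os.environ["MAX_ITERATIONS"] = "3"                # how many sub-queries get generated
os.environ["TOTAL_WORDS"] = "1000"                # target length of the report
```

It is unclear to me how each of these trades off against report quality, which is what I would love to see documented.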

One highly desirable outcome I'm looking for is control over the level of discourse in the reports, i.e. writing for a beginner in a given field vs. for an expert in that field. Even though the system prompt always says the model is an expert, a lot of general fluff is scooped up by search. I wonder if there is a good way or best practice to influence that. Some options that come to mind:

- add qualifications to the topic itself, to steer search away from trivialities;
- maintain different sets of prompts for different target audiences;
- add an extra LLM call that classifies retrieved content as intro / intermediate / expert level;
- let the user curate subtopics, queries, and search results before accepting them into processing.

The first option seems the cheapest to try; a sketch follows below.
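As a rough illustration of the first option, here is a minimal sketch that wraps the topic in an audience frame before handing it to the researcher. The `GPTResearcher` calls follow the README's documented usage; the `AUDIENCE_FRAMES` dictionary is my own hypothetical addition, not a built-in feature:

```python
import asyncio

from gpt_researcher import GPTResearcher

# Hypothetical audience frames (my own sketch, not part of the library):
AUDIENCE_FRAMES = {
    "beginner": ("Write for a newcomer to the field; define jargon and "
                 "build intuition. Topic: "),
    "expert": ("Write for a domain expert; skip introductory material and "
               "focus on technical depth, trade-offs, and open problems. Topic: "),
}

async def run(topic: str, audience: str = "expert") -> str:
    # Qualify the topic before it reaches query generation and search.
    query = AUDIENCE_FRAMES[audience] + topic
    researcher = GPTResearcher(query=query, report_type="research_report")
    await researcher.conduct_research()      # plan sub-queries, search, scrape, summarize
    return await researcher.write_report()   # compose the final report

if __name__ == "__main__":
    print(asyncio.run(run("quantum error correction", audience="expert")))
```

Qualifying the query like this only nudges search and writing, of course; the classification and curation options above would need actual changes to the pipeline.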

Thank you for the awesome project!

ElishaKay commented 1 month ago

@barsuna great energy and direction!

Try LangSmith:

https://smith.langchain.com/

I know that, at least for the multi_agents feature, as long as you add a LangChain API key to your .env file, you'll get a rich log of the input and output of every step of the backend server process (pretty awesome).
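For reference, here are the standard LangChain/LangSmith tracing variables, shown as the Python equivalent of the .env entries (set them before the backend starts; the project name is just an example):

```python
import os

# Standard LangChain/LangSmith tracing variables:
os.environ["LANGCHAIN_TRACING_V2"] = "true"                   # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # from smith.langchain.com
os.environ["LANGCHAIN_PROJECT"] = "gpt-researcher"            # optional: groups the traces
```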

@hwchase17 are those LangSmith logs available out of the box for every gptResearcher report type, or just specific ones?