assafelovic / gpt-researcher

LLM based autonomous agent that conducts local and web research on any topic and generates a comprehensive report with citations.
https://gptr.dev
Apache License 2.0
14.85k stars 1.99k forks

unlogged error while using groq #794

Open elchananvol opened 2 months ago

elchananvol commented 2 months ago

The problem: I'm using Groq as an LLM provider. When running the code, the following message is printed to the console: "⚠️ Error in reading JSON, attempting to repair JSON." In debug mode, I discovered that the real issue is: "Unable to import langchain-groq. Please install with `pip install -U langchain-groq`."

The code:

from gpt_researcher import GPTResearcher

async def get_report(query: str, report_type: str, tone) -> str:
    researcher = GPTResearcher(query=query, report_type=report_type, tone=tone)
    research_result = await researcher.conduct_research()
    report = await researcher.write_report()
    return report

Suggestion: In the `choose_agent` function in `actions.py`, the exception message should be logged.
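A minimal sketch of the suggested fix, assuming a simplified `choose_agent` that parses the LLM's JSON response (the function names and fallback value here are illustrative, not GPTR's exact code): log the real exception before falling back, instead of hiding it behind the generic JSON-repair warning.

```python
import json
import logging

logger = logging.getLogger(__name__)

# Illustrative fallback; GPTR's actual default agent differs.
DEFAULT_AGENT = {"server": "Default Agent", "agent_role_prompt": ""}

def choose_agent(raw_response: str) -> dict:
    """Parse the LLM response into an agent dict, logging failures."""
    try:
        return json.loads(raw_response)
    except Exception as e:
        # Log the underlying cause (e.g. "Unable to import langchain-groq")
        # so it is not swallowed behind the JSON-repair message.
        logger.error("Error in choose_agent: %s", e)
        print("⚠️ Error in reading JSON, falling back to default agent.")
        return DEFAULT_AGENT
```

With this change, running in normal (non-debug) mode would still surface the missing-dependency error in the logs.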

danieldekay commented 2 months ago

Did you not get a "Failed to get response from {llm_provider} API" somewhere? If the model cannot be loaded, the error should surface elsewhere, e.g. in `create_chat_completion` in `llm.py`, if not already in `get_llm`.

elchananvol commented 2 months ago

> Did you not get a "Failed to get response from {llm_provider} API" somewhere? If the model cannot be loaded, the error should surface elsewhere, e.g. in `create_chat_completion` in `llm.py`, if not already in `get_llm`.

Nope, I got the exception directly.

ElishaKay commented 2 months ago

Thanks for the heads up @elchananvol

@danieldekay we need to think through how we implement this.

Some context: a week or two ago we sliced the main requirements.txt (which cut the GPTR Docker image size by 87%).

These were the dependencies before the official slicing:

https://github.com/assafelovic/gpt-researcher/blob/5dba221ddf93d2b1f1208e081c4e8aa3a7d2fe55/requirements.txt

Most of the default requirements that were sliced were related to custom LLMs (supported via provider-specific langchain libraries) and custom retrievers.

Maybe we should run some logic based on the .env file when the server starts up?

For example, if the .env states that the retriever or LLM is anything other than the default, print an error message with the dependencies that the user should install globally? Or even better, install the required dependency on server-start up based on the config in the .env file?
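The first option above (warn at start-up rather than auto-install) could look roughly like this. This is a hedged sketch: the `LLM_PROVIDER` env key, the provider-to-package mapping, and the package versions are assumptions for illustration, not GPTR's actual config schema.

```python
import importlib.util
import os

# Hypothetical mapping from the .env provider value to the pip package
# name and its importable module name. Extend per supported provider.
OPTIONAL_PROVIDER_DEPS = {
    "groq": ("langchain-groq", "langchain_groq"),
    "anthropic": ("langchain-anthropic", "langchain_anthropic"),
}

def check_provider_deps() -> list:
    """At server start-up, warn if the configured provider's optional
    dependency is missing; return the list of missing pip packages."""
    provider = os.getenv("LLM_PROVIDER", "openai").lower()
    missing = []
    pip_name, module_name = OPTIONAL_PROVIDER_DEPS.get(provider, (None, None))
    # find_spec checks installability without actually importing the package
    if module_name and importlib.util.find_spec(module_name) is None:
        missing.append(pip_name)
        print(f"⚠️ Provider '{provider}' requires: pip install -U {pip_name}")
    return missing
```

Calling this once in the server's start-up path would turn the silent import failure into an actionable message; auto-installing at start-up is also possible but riskier (it mutates the environment at runtime).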

Happy to hear your thoughts @danieldekay @elchananvol or even better, to see a pull request 🤠

danieldekay commented 2 months ago

Just an idea: Poetry could define separate dependency groups for the different custom models, and the package installation could parse the .env file for the activated model and then include the corresponding group.
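The Poetry-groups idea could be sketched in `pyproject.toml` like this (a hypothetical config fragment; the group names and version constraints are illustrative):

```toml
# One optional group per custom LLM provider, so users install
# only the extras their .env configuration actually needs.
[tool.poetry.group.groq]
optional = true

[tool.poetry.group.groq.dependencies]
langchain-groq = "*"

[tool.poetry.group.anthropic]
optional = true

[tool.poetry.group.anthropic.dependencies]
langchain-anthropic = "*"
```

A user on Groq would then run `poetry install --with groq`, and a small start-up script could map the .env provider value to the matching group name.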