assafelovic / gpt-researcher

LLM based autonomous agent that conducts local and web research on any topic and generates a comprehensive report with citations.
https://gptr.dev
Apache License 2.0
15k stars 2.01k forks

Error occurred when using Custom OpenAI API LLM #957

Closed ki-ri closed 2 days ago

ki-ri commented 3 weeks ago

**Describe the bug**
I am using a Custom OpenAI API LLM (a llama.cpp server) as the LLM, configured according to the manual below. When I searched for something, I got the error. (I tested the llama.cpp server independently, and it worked well.) https://docs.gptr.dev/docs/gpt-researcher/llms/llms#custom-openai-api-llm

```
# specify the custom OpenAI API llm model
FAST_LLM="openai:gpt-4o-mini"

# specify the custom OpenAI API llm model
SMART_LLM="openai:gpt-4o"

DOC_PATH=./my-docs
```
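When pointing GPT-Researcher at a local OpenAI-compatible server, the model names alone are usually not enough; the client also needs the base URL and an API key. A hedged sketch of a fuller `.env`, assuming the llama.cpp server listens on `localhost:8080` (the host, port, and key value below are placeholders, not taken from the original report):

```shell
# point the OpenAI client at the local llama.cpp server (placeholder URL)
OPENAI_BASE_URL="http://localhost:8080/v1"
# llama.cpp does not validate the key, but the client requires one to be set
OPENAI_API_KEY="sk-dummy"

# specify the custom OpenAI API llm models
FAST_LLM="openai:gpt-4o-mini"
SMART_LLM="openai:gpt-4o"

# local documents to research
DOC_PATH=./my-docs
```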



**To Reproduce**
- please see the information above

**Expected behavior**
- searched result displayed

**Screenshots**
- The run terminated unsuccessfully, and no report was generated.
<img width="1186" alt="image" src="https://github.com/user-attachments/assets/03132e61-8a4a-4c2a-bc69-a7e34fa91fd8">

**Desktop (please complete the following information):**
 - OS: macOS 13.5
 - Browser: chrome

ElishaKay commented 2 weeks ago

Looks like the LLM is configured correctly for chat_completion (that is a requirement for the auto-agent to be generated, as seen in your screenshot), but perhaps the embedding stage is not configured correctly.

We'll need to create a separate function for testing just the embedding stage based on config variables - something similar to this, for an isolated embedding function.

In the meantime, check whether your llama.cpp server has an embedding model added to it - the current GPTR version will try to use the "text-embedding-3-large" embedding model by default.
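One quick way to check the embedding stage in isolation is to call the server's OpenAI-compatible `/v1/embeddings` endpoint directly, bypassing GPTR entirely. A minimal sketch using only the Python standard library; the URL, API key, and model name are assumptions, and llama.cpp must be started with embeddings enabled for the request to return vectors:

```python
import json
import urllib.request

# Placeholder endpoint: adjust host/port to wherever your llama.cpp server listens
EMBEDDINGS_URL = "http://localhost:8080/v1/embeddings"


def build_request(text: str, model: str = "text-embedding-3-large") -> bytes:
    """Build an OpenAI-compatible /v1/embeddings request body."""
    return json.dumps({"input": text, "model": model}).encode()


def fetch_embedding(text: str) -> list[float]:
    """POST the text to the embeddings endpoint and return the vector."""
    req = urllib.request.Request(
        EMBEDDINGS_URL,
        data=build_request(text),
        headers={
            "Content-Type": "application/json",
            # llama.cpp typically ignores the key, but the header must be present
            "Authorization": "Bearer sk-dummy",
        },
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload["data"][0]["embedding"]


if __name__ == "__main__":
    vec = fetch_embedding("hello world")
    print(f"embedding dimension: {len(vec)}")
```

If this call fails while chat completions succeed, the problem is confined to the embedding model configuration rather than the LLM itself.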