ki-ri closed this issue 2 days ago
Looks like the LLM is configured correctly for chat_completion (since that is a requirement for the auto-agent to be generated as seen in your screenshot), but perhaps the embedding stage is not configured correctly.
We'll need to create a separate function for testing just the embedding stage based on config variables - something similar to this, as an isolated embedding function.
In the meantime, check if your llama.cpp server has an embedding model added to it - the current GPTR version will try to use the "text-embedding-3-large" embedding model by default.
**Describe the bug**
I am using a custom OpenAI API LLM (a llama.cpp server) according to the manual below. When I searched for something, I got the error below. (I tested the llama.cpp server independently, and it worked well.)
https://docs.gptr.dev/docs/gpt-researcher/llms/llms#custom-openai-api-llm
error:
LLM config:

```
# specify the custom OpenAI API llm model
FAST_LLM="openai:gpt-4o-mini"
# specify the custom OpenAI API llm model
SMART_LLM="openai:gpt-4o"
DOC_PATH=./my-docs
```
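For comparison, a config along these lines would point GPT-Researcher at a local llama.cpp server instead of the OpenAI defaults. The base URL, model names, and the `EMBEDDING` variable are illustrative - check the linked custom-LLM docs for the exact variables your GPTR version supports:

```
# Hypothetical example - adjust URL, port, and model names to your setup
OPENAI_BASE_URL="http://localhost:8080/v1"
OPENAI_API_KEY="sk-no-key-needed"
FAST_LLM="openai:your-local-model"
SMART_LLM="openai:your-local-model"
# Point the embedding stage at a model your llama.cpp server actually serves
EMBEDDING="openai:your-embedding-model"
```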