Open · elchananvol opened this issue 3 months ago
Did you not get a "Failed to get response from {llm_provider} API" somewhere? If the model cannot be loaded, the error should surface elsewhere, e.g. in `create_chat_completion` in `llm.py`, if not already in `get_llm`.
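For illustration, a minimal sketch of how `get_llm` could surface the import failure eagerly instead of letting it show up later as a generic API error. `get_llm` is the name mentioned in this thread; `load_provider_class` is a hypothetical stand-in for whatever provider lookup GPTR actually does:

```python
# A sketch only -- get_llm is the function named in this thread;
# load_provider_class is a hypothetical stand-in for GPTR's real
# provider lookup.
def get_llm(llm_provider: str, **kwargs):
    try:
        provider_cls = load_provider_class(llm_provider)
    except ImportError as e:
        # Fail loudly here, so the problem isn't reported later as
        # "Failed to get response from {llm_provider} API".
        raise ImportError(
            f"Could not load LLM provider '{llm_provider}': {e}"
        ) from e
    return provider_cls(**kwargs)
```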
Nope, I got the exception directly.
Thanks for the heads up, @elchananvol
@danieldekay we need to think through how we implement this.
Some context: a week or two ago we sliced the main requirements.txt (which cut the GPTR Docker Image by 87%)
These were the dependencies before the official slicing:
Most of the default requirements that were sliced related to custom LLMs (supported via provider-specific langchain libraries) and custom retrievers.
Maybe we should run some logic based on the .env file when the server starts up?
For example, if the .env states that the retriever or LLM is anything other than the default, print an error message listing the dependencies the user should install globally. Or, even better, install the required dependency on server start-up based on the config in the .env file (rough sketch below)?
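A rough sketch of what that start-up check could look like, not a tested implementation: `LLM_PROVIDER` follows GPTR's .env conventions, but the provider-to-package mapping below is illustrative and incomplete.

```python
# A rough sketch, not a tested implementation. "LLM_PROVIDER" follows
# GPTR's .env convention; the provider -> package mapping below is
# illustrative and incomplete.
import importlib.util
import os

OPTIONAL_LLM_PACKAGES = {
    "groq": "langchain-groq",
    "ollama": "langchain-ollama",
    "google": "langchain-google-genai",
}

def check_env_dependencies() -> None:
    provider = os.getenv("LLM_PROVIDER", "openai")
    package = OPTIONAL_LLM_PACKAGES.get(provider)
    if package is None:
        return  # default provider, covered by the sliced requirements.txt
    module = package.replace("-", "_")  # pip name -> import name
    if importlib.util.find_spec(module) is None:
        raise ImportError(
            f"LLM_PROVIDER={provider} requires '{package}'. "
            f"Install it with: pip install -U {package}"
        )
```

Retrievers could be covered by a second mapping in the same check. Auto-installing the package at start-up would also be possible, but installing at runtime is a more controversial design choice.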
Happy to hear your thoughts @danieldekay @elchananvol or even better, to see a pull request 🤠
Just an idea: Poetry could define separate dependency groups for the different custom models, and the package installation could parse the .env file for the activated model and include the corresponding group.
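Poetry does support optional dependency groups, so this could look something like the following in pyproject.toml. The group names and version pins here are made up for illustration:

```toml
[tool.poetry.group.groq]
optional = true

[tool.poetry.group.groq.dependencies]
langchain-groq = "*"

[tool.poetry.group.ollama]
optional = true

[tool.poetry.group.ollama.dependencies]
langchain-ollama = "*"
```

An install wrapper could then read `LLM_PROVIDER` from the .env file and run, e.g., `poetry install --with groq`.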
The problem: I'm using Groq as an LLM provider. When running the code, the following message is printed to the console: "⚠️ Error in reading JSON, attempting to repair JSON." In debug mode, I discovered that the real issue is: "Unable to import langchain-groq. Please install with `pip install -U langchain-groq`."
Suggestion: in the `choose_agent` function in `action.py`, the exception message should be logged.
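For concreteness, a sketch of the suggested change. `choose_agent` and `create_chat_completion` are the names mentioned in this thread; `parse_json_response` and `handle_json_error` are hypothetical stand-ins for whatever GPTR actually does in that path:

```python
# Sketch of the suggestion, not GPTR's actual code. create_chat_completion,
# parse_json_response, and handle_json_error are assumed helpers.
import logging

logger = logging.getLogger(__name__)

async def choose_agent(query: str, cfg):
    response = None
    try:
        response = await create_chat_completion(query=query, cfg=cfg)
        return parse_json_response(response)
    except Exception as e:
        # Log the underlying cause (here, the langchain-groq ImportError)
        # instead of hiding it behind the generic repair warning.
        logger.error("choose_agent failed: %s", e)
        print("⚠️ Error in reading JSON, attempting to repair JSON")
        return await handle_json_error(response)
```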