Closed lxy009 closed 2 months ago
For those working on this, we should consider the recent change that keeps the conversation history "in memory", because every LLM provider has a different limit on how many tokens can be pushed. It might make sense to make this token limit a required configuration parameter whenever someone builds an LLM integration.
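To make the idea concrete, here is a minimal sketch of what a required token limit could look like, assuming a Python codebase. The names (`ProviderConfig`, `ConversationHistory`, `estimate_tokens`) are hypothetical, not from the actual codebase, and the token estimate is a crude placeholder a real integration would replace with the provider's tokenizer.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderConfig:
    name: str
    max_context_tokens: int  # required per-provider limit (hypothetical field)

def estimate_tokens(text: str) -> int:
    # Crude placeholder: ~4 characters per token. A real integration
    # would use the provider's own tokenizer here.
    return max(1, len(text) // 4)

@dataclass
class ConversationHistory:
    config: ProviderConfig
    messages: list = field(default_factory=list)

    def append(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Drop the oldest messages until the history fits the provider's limit.
        while (self._total_tokens() > self.config.max_context_tokens
               and len(self.messages) > 1):
            self.messages.pop(0)

    def _total_tokens(self) -> int:
        return sum(estimate_tokens(m["content"]) for m in self.messages)
```

The point is that trimming behavior depends on a limit the integration itself must declare, rather than a value hardcoded for one provider.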
The idea here is to create enough abstraction so that a dev could add a connection to Anthropic or Cohere if they so choose.
In the README, we have stated that devs should use Git submodules, so the actual integration layer can reside in other repositories. That means this codebase needs to provide a consistent system of usage and the abstraction layer for the LLM.
Right now we have this in the AIConfigurator class, but the integrations with OpenAI and Gemini are hardcoded. This task is scoped to a basic chatbot implementation.
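A rough sketch of the abstraction layer described above, assuming Python: a base class that each provider integration (OpenAI, Gemini, Anthropic, Cohere, ...) would implement in its own submodule repository. The class and method names (`LLMProvider`, `complete`, `EchoProvider`) are illustrative, not the actual AIConfigurator API.

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Hypothetical base class every provider integration would subclass."""

    def __init__(self, api_key: str, max_context_tokens: int):
        # The token limit is a required constructor argument, so every
        # integration declares how much conversation history it can accept.
        self.api_key = api_key
        self.max_context_tokens = max_context_tokens

    @abstractmethod
    def complete(self, messages: list[dict]) -> str:
        """Send the conversation messages and return the model's reply."""

class EchoProvider(LLMProvider):
    # Trivial stand-in implementation, useful for testing the interface
    # without any network calls.
    def complete(self, messages: list[dict]) -> str:
        return messages[-1]["content"]
```

With this shape, AIConfigurator would depend only on `LLMProvider`, and swapping OpenAI for Anthropic would mean registering a different subclass rather than editing hardcoded branches.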