Discord bot built with Pycord that offers AI chat and image generation inside Discord. Generated images are saved to Cloudinary and hosted online for anyone to download.
Is your feature request related to a problem? Please describe.
I'm frustrated when internet connectivity issues arise, as they can limit the functionality of Cognibot since it relies on external APIs like OpenAI and Anthropic. In the event of server disruptions or internet outages, I cannot access these models, which diminishes the bot's utility for users.
Describe the solution you'd like
I would like to add the capability for Cognibot to communicate with a local large language model. This feature would ensure that even if Discord or internet services experience downtime, users can still interact with Cognibot. The local model would serve as a fallback option, allowing for continuous availability and interaction without external dependencies.
Describe alternatives you've considered
I have considered implementing a cached response mechanism where responses from the online models are stored and served during outages. However, this would not allow for real-time interactions and could lead to outdated or irrelevant responses. Additionally, I thought about integrating with other available APIs, but that wouldn't address the issue of network failure, as it would still rely on external services.
Additional context
This feature could include commands like /localchat or /offline, indicating that users are now interacting with the local model. I envision a smooth transition where the bot can automatically detect when it cannot reach the external APIs and switch to the local model. There are existing lightweight models that could be hosted locally, ensuring efficiency and responsiveness.
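The automatic detect-and-switch behavior described above could be sketched as a thin wrapper that tries the remote API first and falls back to the local model on a network-level failure. This is only a sketch of the idea, not Cognibot's actual code: `ask_remote` and `ask_local` are hypothetical placeholders for whatever client calls the bot already makes (e.g. the OpenAI/Anthropic SDKs for remote, and a locally hosted runtime for local).

```python
def ask_with_fallback(prompt, ask_remote, ask_local):
    """Try the remote API first; on a network-level failure,
    fall back to the locally hosted model so the bot stays
    responsive during an outage. Returns (reply, source)."""
    try:
        return ask_remote(prompt), "remote"
    except (ConnectionError, TimeoutError, OSError):
        # Remote unreachable -- switch to the local model.
        return ask_local(prompt), "local"


# Placeholder implementations to illustrate the wiring; a real bot
# would call the external API client and a local model here.
def ask_remote(prompt):
    raise ConnectionError("API unreachable")  # simulate an outage


def ask_local(prompt):
    return f"[local model] {prompt}"
```

A /localchat or /offline command could then simply force the `ask_local` path, while ordinary chat commands go through `ask_with_fallback` and degrade gracefully when the external APIs cannot be reached.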