GhostBP112 opened this issue 1 month ago
Is it planned, or would it be possible, to use a local LLM for processing? With suitable hardware, this could significantly increase generation speed, and it would also allow the model to be used offline.
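For example, something along these lines (a rough sketch, assuming Ollama is installed and a Mistral model has been pulled locally; the model tag and prompt are only illustrative, not SlideDeck AI's actual pipeline):

```python
# Sketch: generating slide content with a locally served model via Ollama.
# Assumes `pip install ollama` and `ollama pull mistral` have been run.
import ollama

response = ollama.chat(
    model="mistral",  # illustrative tag; any locally pulled model would do
    messages=[
        {"role": "user", "content": "Outline a 5-slide deck on renewable energy."}
    ],
)
print(response["message"]["content"])
```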
Hi,
Thanks for your interest in SlideDeck AI.
There is no "plan" as such for this. However, the use of local LLMs has been on my mind lately.
Regarding speed: yes, token generation with Mistral Nemo appears to take longer. I have been contemplating switching back to Mistral, or at least providing it as an alternative.
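For what it's worth, here is one rough way such a comparison could be timed locally (a sketch only, assuming Ollama serves both models; the model tags are illustrative and this is not how SlideDeck AI currently invokes its models):

```python
# Quick-and-dirty timing comparison of token generation speed between two
# locally served models. Assumes both models have been pulled via Ollama.
import time
import ollama

PROMPT = "Write a one-paragraph summary of photosynthesis."

for model in ("mistral", "mistral-nemo"):  # illustrative model tags
    start = time.perf_counter()
    response = ollama.generate(model=model, prompt=PROMPT)
    elapsed = time.perf_counter() - start
    # eval_count is the number of output tokens reported by Ollama
    tokens = response.get("eval_count", 0)
    print(f"{model}: {tokens} tokens in {elapsed:.1f}s "
          f"({tokens / elapsed:.1f} tokens/s)")
```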
Let me create some tasks in this general direction.