Closed joseiriarte1982 closed 4 days ago
Good points. The problem with working with local LLMs is that the context window is pretty small, and if we keep passing the full conversation history as context, response quality might take a hit. I can add something that stores only the last N exchanges, for example a small number like N==3. The other suggestion, writing the model info, is doable; we already have that information available from the settings page.
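A minimal sketch of what a "last N exchanges" window could look like; the function names and message shape are illustrative, not the plugin's actual API:

```javascript
const MAX_TURNS = 3; // keep only the last N exchanges

// Each entry is { role: "user" | "assistant", content: string }.
// One turn = a user message plus the assistant reply, so 2 entries.
function trimHistory(history, maxTurns = MAX_TURNS) {
  return history.slice(-maxTurns * 2);
}

// Build the prompt from the trimmed window plus the new message,
// so old turns stop eating into the small context window.
function buildPrompt(history, newMessage) {
  const recent = trimHistory(history);
  const lines = recent.map((m) => `${m.role}: ${m.content}`);
  lines.push(`user: ${newMessage}`);
  return lines.join("\n");
}
```

With N==3 this caps the context at six history entries regardless of how long the conversation runs.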
v1.1.3 should address this.
I have a local LLM, or rather, I am working with llama.cpp. The thing is, I open the file where I will be writing my prompts, but since I am working with the server, each prompt is treated as if it were the first one. Wouldn't it be nice if you could store the current file's conversation in localStorage or a similar DB? On top of that, you could prefix the new line with the name of the model, or whatever the LLM is configured to be called. I don't know much about how Obsidian works, I am a web developer, but I was checking main.js and I think the code is easy to understand. So maybe I could help with your guidance. Regards
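To make the suggestion concrete, here is a rough sketch of per-file conversation persistence. `store` is any localStorage-like object (`getItem`/`setItem`); the key prefix, entry shape, and function names are all hypothetical, and an Obsidian plugin might prefer its own data API over localStorage:

```javascript
// Load the saved conversation for a given note, keyed by file path.
function loadConversation(store, filePath) {
  const raw = store.getItem(`llm-chat:${filePath}`);
  return raw ? JSON.parse(raw) : [];
}

// Append one prompt/reply pair and return the line to write into the
// note, prefixed with the model's name as suggested above.
function saveTurn(store, filePath, prompt, reply, modelName) {
  const history = loadConversation(store, filePath);
  history.push({ role: "user", content: prompt });
  history.push({ role: "assistant", content: reply, model: modelName });
  store.setItem(`llm-chat:${filePath}`, JSON.stringify(history));
  return `${modelName}: ${reply}`;
}
```

Since each note has its own key, switching files naturally switches conversations, and the saved history could be fed back to the llama.cpp server so follow-up prompts are no longer treated as the first one.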