As a user, I like to switch between LLMs often (including between Ollama models and external OpenAI and Anthropic models), and I currently find it confusing, even impossible, to predict which of my LLMs will be chosen for a thread.
The LLM can be set in several places:
- in the instance settings
- in the workspace settings
- in the workspace settings, specifically for agents
As far as I can tell, if I change the chat agent in my workspace, existing threads remain unaffected, which feels like a bug to me. I would expect the context (where possible) to be carried over to the newly chosen LLM.
Ideally, as in apps like ChatGPT or TypingMind, the LLM could be chosen inside the chat context/thread itself. For example, here are a few UX experiences I find to be very beneficial:
Just my 2 cents. Keep up the great work!