Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
https://useanything.com
MIT License

[BUG]: Embedded chat uses model from embedded workspace Chat Settings, but provider from main LLM Preferences. #1439

Closed atljoseph closed 2 weeks ago

atljoseph commented 2 weeks ago

How are you running AnythingLLM?

Docker (local)

What happened?

The embedded chat widget uses the model configured in the embedded workspace's Chat Settings, but the provider from the main app's LLM Preferences. Embedded chat should respect the settings of the workspace it belongs to.

(Screenshots attached: IMG_1786, IMG_1787, IMG_1788, IMG_1789)

Are there known steps to reproduce?

Described above. The embedded chat should either fall back entirely to the base app settings (LLM provider and model together) OR respect the workspace's provider and model together. Mixing the two (the workspace's model with the base app's provider) does not work out well, as seen in the screenshots above.
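To illustrate the expected behavior, here is a minimal sketch of how provider and model could be resolved as a pair, never mixed across sources. All names here (`resolveLLMConfig`, the `workspace` and `systemSettings` fields) are hypothetical and not AnythingLLM's actual API:

```javascript
// Hypothetical sketch: pick provider and model from the SAME source.
// Either both come from the workspace's Chat Settings, or both fall
// back to the system-wide LLM Preferences.
function resolveLLMConfig(workspace, systemSettings) {
  // Prefer the workspace's own settings when both are configured.
  if (workspace.chatProvider && workspace.chatModel) {
    return { provider: workspace.chatProvider, model: workspace.chatModel };
  }
  // Otherwise fall back to the base app settings as a pair; never
  // combine a workspace model with the system provider (the bug above).
  return { provider: systemSettings.llmProvider, model: systemSettings.llmModel };
}
```

With this shape, a workspace configured for one provider can never end up sending its model name to a different provider's API.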