How are you running AnythingLLM?
Docker (local)
What happened?
The embedded chat does not respect the workspace's configured LLM provider and model the way it should.
Are there known steps to reproduce?
Described above. The embed should either fall back to the base app's LLM provider and model settings OR respect the workspace's provider and model. Doing half of each does not work out well, as seen above.