Currently we decide which LLM we use (local, remote) via the Python class `LlmConfigOptions` in `configs/configurator.py`. IMO this should be configurable in `config.yaml`.
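A possible shape for that entry (a minimal sketch — the key names `llm`, `mode`, `model`, and `remote.endpoint` are placeholders, not the current schema):

```yaml
llm:
  mode: local          # "local" or "remote"
  model: llama-3-8b    # which model should be used
  remote:
    endpoint: https://api.example.com/v1   # only relevant when mode is "remote"
```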
- [ ] Add an entry in `config.yaml` that sets which model should be used (see the sketch above)
- [ ] Upon initialization of the app, read this setting from `config.yaml`; if it is set to local, launch the necessary containers (see the sketch below)
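Startup could then look roughly like this. This is a sketch under assumptions: `init_llm`, the config keys, and the `docker-compose.llm.yaml` file are all hypothetical, and "launching containers" is assumed to mean Docker Compose here.

```python
import subprocess

import yaml  # PyYAML


def init_llm(config_path: str = "config.yaml") -> dict:
    """Read the LLM settings and, in local mode, start the containers."""
    with open(config_path) as f:
        config = yaml.safe_load(f)

    llm_cfg = config["llm"]
    if llm_cfg["mode"] == "local":
        # Bring up whatever containers serve the local model; the compose
        # file name is a placeholder for the project's actual setup.
        subprocess.run(
            ["docker", "compose", "-f", "docker-compose.llm.yaml", "up", "-d"],
            check=True,
        )
    return llm_cfg
```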