**Open** · underlines opened this issue 1 month ago
You can choose different models at runtime; these settings are simply what the server defaults to in the absence of user input.
We've made this clearer in the documentation here: https://r2r-docs.sciphi.ai/cookbooks/basic-configuration#llm-provider-configuration
You should be able to confirm that changing the selected model in the playground does in fact result in that model generating the completion.
**Is your feature request related to a problem? Please describe.**
It isn't a problem as such, but the R2R-Dashboard provides a drop-down for model selection, while it seems I can't define multiple `completions.generation_config` entries.
**Describe the solution you'd like**
I tried providing an array of `generation_config`:
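(The original snippet didn't survive in this copy of the issue. As a hypothetical sketch of what such an array might look like in R2R's TOML config — the table name follows the `completions.generation_config` key mentioned above, but the field names and model identifiers here are illustrative, not confirmed R2R options:)

```toml
# Hypothetical sketch only: R2R does not currently accept an array here.
# An array-of-tables would let the server advertise several default
# generation configs, one per selectable model.
[completions]
provider = "litellm"

[[completions.generation_config]]
model = "openai/gpt-4o"
temperature = 0.1

[[completions.generation_config]]
model = "ollama/llama3"
temperature = 0.1
```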
r2r doesn't currently support this, but it seems it could be refactored into the current codebase fairly easily, with the choice then exposed in the R2R-Dashboard Playground drop-down.
**Describe alternatives you've considered**
Switching models by booting the server with different configurations, or by using the CLI, both of which are cumbersome.
**Additional context**
None