In ChatGPT it's possible to switch between 2 models (GPT-3.5 and GPT-4).
For our purposes, "chat" and "code" may be more appropriate, as there seems to be an emerging trend of code-specialised models with Lemur, Llama, etc.
Our configuration would also need to be able to select the provider - OpenAI, OpenRouter, LocalLLM...
`agents/base.py`:

```python
def llm(self) -> ChatModelInfo:
    """The LLM that the agent uses to think."""
    llm_name = self.config.smart_llm if self.big_brain else self.config.fast_llm
    return OPEN_AI_CHAT_MODELS[llm_name]
```
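If we went with "chat"/"code" aliases plus provider selection, the lookup could be generalised roughly as follows. This is a minimal sketch only; `ModelAliasConfig`, its field names and the example model strings are illustrative and don't exist in the codebase:

```python
from dataclasses import dataclass, field

@dataclass
class ModelAliasConfig:
    """Hypothetical mapping of logical aliases ("chat", "code") to provider-qualified models."""

    aliases: dict[str, str] = field(default_factory=lambda: {
        # The provider is encoded as a prefix: "openai/", "openrouter/", "local/", ...
        "chat": "openai/gpt-4",
        "code": "openrouter/phind/phind-codellama-34b",  # placeholder model name
    })

    def resolve(self, alias: str) -> tuple[str, str]:
        """Split an alias into (provider, model_name)."""
        provider, _, model_name = self.aliases[alias].partition("/")
        return provider, model_name


config = ModelAliasConfig()
print(config.resolve("code"))  # -> ("openrouter", "phind/phind-codellama-34b")
```

The `aliases` dict is exactly the kind of thing a user could override from a `.env` or `.yaml` file, as suggested below.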
Also, in each `create_chat_completion()` call it's possible to specify the model, or it can be provided by the `prompt: ChatSequence` parameter:
```python
def create_chat_completion(
    prompt: ChatSequence,
    model: Optional[str] = None,
    ...
) -> ChatModelResponse:
    if model is None:
        model = prompt.model.name
```
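In other words, a caller can either pin a model explicitly or fall back to the prompt's model (sketch only; the remaining keyword arguments are elided):

```python
# Explicit override: the model argument takes precedence
response = create_chat_completion(prompt, model="gpt-4")

# No override: falls back to prompt.model.name
response = create_chat_completion(prompt)
```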
LiteLLM's Model Aliases would be handy - the application/agents could refer to models by alias, which could then be customised by the user through a `.env` or `.yaml` config file: https://docs.litellm.ai/docs/completion/model_alias (see also PR #40)
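Per the linked docs, the aliasing itself is just a dict assigned to `litellm.model_alias_map`; the concrete model names below are placeholders (a provider API key would be needed for the call to succeed):

```python
import litellm
from litellm import completion

# Aliases the application/agents would use; the right-hand side is what the user
# customises via config (model names here are placeholders).
litellm.model_alias_map = {
    "chat": "gpt-3.5-turbo",
    "code": "openrouter/phind/phind-codellama-34b",
}

response = completion(
    model="code",  # resolved through model_alias_map before the request is sent
    messages=[{"role": "user", "content": "Write hello-world in Python."}],
)
print(response.choices[0].message.content)
```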
OpenAI just announced many changes which make things much easier, so there is currently no need to split tasks across different models. For now, we are removing that from our roadmap.
@pryh4ck