Pythagora-io / gpt-pilot

The first real AI developer

Dynamic model selection/config #120

Closed nalbion closed 11 months ago

nalbion commented 1 year ago

@pryh4ck

nalbion commented 12 months ago

In ChatGPT it's possible to switch between two models (GPT-3.5 and GPT-4).

For our purposes, "chat" and "code" may be more appropriate, as there seems to be an emerging trend with Lemur, Llama, etc.

Our configuration would also need to be able to select the provider - OpenAI, OpenRouter, LocalLLM...

agents/base.py

```python
def llm(self) -> ChatModelInfo:
    """The LLM that the agent uses to think."""
    llm_name = self.config.smart_llm if self.big_brain else self.config.fast_llm
    return OPEN_AI_CHAT_MODELS[llm_name]
```
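A minimal sketch of what a provider-aware, per-role configuration could look like. All names here (`LLM_PROVIDER`, `CHAT_MODEL`, `CODE_MODEL`, `LLMConfig`, `model_for`) are hypothetical, not part of the gpt-pilot codebase:

```python
import os
from dataclasses import dataclass

# Hypothetical defaults per (provider, role); real values would come
# from a .env file or settings module.
DEFAULT_MODELS = {
    ("openai", "chat"): "gpt-4",
    ("openai", "code"): "gpt-3.5-turbo",
}

@dataclass
class LLMConfig:
    provider: str
    chat_model: str
    code_model: str

def load_llm_config() -> LLMConfig:
    """Read provider and per-role model names from the environment."""
    provider = os.getenv("LLM_PROVIDER", "openai")
    return LLMConfig(
        provider=provider,
        chat_model=os.getenv("CHAT_MODEL", DEFAULT_MODELS.get((provider, "chat"), "gpt-4")),
        code_model=os.getenv("CODE_MODEL", DEFAULT_MODELS.get((provider, "code"), "gpt-4")),
    )

def model_for(config: LLMConfig, role: str) -> str:
    """Pick the model for a given role: 'chat' or 'code'."""
    return config.chat_model if role == "chat" else config.code_model
```

An agent would then ask for `model_for(config, "code")` instead of hard-coding a model name.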

Also, in each `create_chat_completion()` call it's possible to specify the model, or it can be taken from the `prompt: ChatSequence` parameter:

```python
def create_chat_completion(
    prompt: ChatSequence,
    model: Optional[str] = None,
    ...
) -> ChatModelResponse:

    if model is None:
        model = prompt.model.name
```

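The "explicit argument wins, otherwise fall back to the prompt's model" pattern can be isolated into a small self-contained sketch (the `ChatSequence`/`ModelInfo` shapes below are simplified stand-ins for the real types):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelInfo:
    name: str

@dataclass
class ChatSequence:
    model: ModelInfo
    messages: list = field(default_factory=list)

def resolve_model(prompt: ChatSequence, model: Optional[str] = None) -> str:
    """An explicitly passed model overrides the one attached to the prompt."""
    return model if model is not None else prompt.model.name
```

So a caller can override per call (`resolve_model(seq, "gpt-4")`) while most call sites just inherit the model that the prompt was built with.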
nalbion commented 11 months ago

LiteLLM's Model Aliases would be handy - the application/agents could refer to models by alias, which could then be customised by the user through a .env or .yaml config file.

https://docs.litellm.ai/docs/completion/model_alias (See also PR #40)
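A stdlib-only sketch of the alias idea, assuming a user-editable alias table (the `MODEL_ALIASES` dict and `resolve_alias` helper are illustrative; with LiteLLM itself the table would be handed to its model-alias mechanism as per the docs linked above):

```python
# Hypothetical alias table; in practice this would be loaded from a
# .env or .yaml config file the user edits.
MODEL_ALIASES = {
    "chat": "gpt-4",
    "code": "ollama/codellama",
}

def resolve_alias(name: str) -> str:
    """Map an agent-facing alias to a concrete provider model name.

    Unknown names pass through unchanged, so literal model ids still work.
    """
    return MODEL_ALIASES.get(name, name)
```

Agents then only ever say `"chat"` or `"code"`, and swapping providers becomes a one-line config change rather than a code change.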

LeonOstrez commented 11 months ago

OpenAI just announced many changes which make things much easier, so there is currently no need to split tasks between different models. For now, we are removing this from our roadmap.