Open rishi8011 opened 3 months ago
Junior dev agent (small code model), senior dev agent (large code model), browser researcher agent (document model), communication agent (psych model). Would like to see the dynamics of at least 4 agents. Does this feel like how LiteLLM and Ollama together can run multiple models?
We need some sort of refactoring and redesign to support different LLMs for different agents.
I am in general in favor of this idea.
@li-boxuan should we try this? It would greatly improve OpenDevin's efficiency and make tasks easier to manage.
Supporting different LLMs for different agents is also important for #2363.
Can it be done through the frontend, using a drop-down on a settings page like AnythingLLM does?
There should be an option that lets users either use the same LLM for all agents or a different LLM per agent.
At the moment there is only one dropdown for a single LLM, and even then not every LLM available on Ollama is listed. Would the UX be slot-based or set-based?
I just unassigned myself. I finished the backend part in #2756, and it would be great if someone would like to take on the frontend challenge.
Just to clarify: with #2756, and even without any frontend change, you should be able to use different LLMs for different agents. You just need to define it in config.yaml.
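For anyone looking for a starting point, a per-agent setup might look roughly like this. This is a hypothetical sketch only: the section and key names here are assumptions, not the actual schema, so check #2756 for the real config format.

```yaml
# Hypothetical sketch -- section/key names are assumptions, see #2756 for the real schema.
llm:
  model: gpt-4o              # default LLM for all agents
agents:
  BrowsingAgent:
    llm:
      model: gpt-3.5-turbo   # cheaper model for research/browsing
  CodeActAgent:
    llm:
      model: gpt-4o          # stronger model for code generation
```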
@mroch @li-boxuan @jeremi @penberg @JensRoland
Integrate a feature that allows users to use multiple LLM models in the project, each with its own specialty.
For example, when a user adds 3 LLM models to OpenDevin with specific usages:
- the first LLM is used only for research and browsing (e.g. GPT-3.5, Mixtral),
- the second LLM is used for code generation (e.g. GPT-4o, DeepSeek Coder, Code Llama),
- the third LLM is used for reasoning, thinking, or any other task or role assigned by the user (e.g. GPT-4o, Llama 3 70B).

The user can change a model or role at any time, at the start of a project or in the middle of one, for better control of the OpenDevin workspace. This would greatly reduce API cost and increase productivity and efficiency.
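The per-role routing proposed above could be sketched roughly like this. The role names, model strings, and helper functions are purely illustrative assumptions, not OpenDevin's actual API; it only shows the mapping-plus-fallback idea:

```python
# Hypothetical sketch of per-role model routing.
# Role names and model identifiers are illustrative, not OpenDevin's API.
ROLE_MODELS = {
    "research": "gpt-3.5-turbo",   # browsing and document lookup
    "codegen": "gpt-4o",           # code generation
    "reasoning": "llama3-70b",     # planning and general reasoning
}

DEFAULT_MODEL = "gpt-4o"

def model_for(role: str) -> str:
    """Return the model configured for a role, falling back to a default."""
    return ROLE_MODELS.get(role, DEFAULT_MODEL)

def set_role_model(role: str, model: str) -> None:
    """Let the user reassign a role's model at any point in the project."""
    ROLE_MODELS[role] = model
```

A frontend dropdown per role would then just call something like `set_role_model("codegen", chosen_model)` when the user changes a selection.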