assafelovic / gpt-researcher

GPT-based autonomous agent that does comprehensive online research on any given topic
https://gptr.dev
MIT License

Different provider for FAST and SMART LLMs #598

Open · Speedway1 opened 2 weeks ago

Speedway1 commented 2 weeks ago

Is it possible to specify a separate LLM provider for each of the models used?

This is especially helpful with locally hosted models, where the "smart" LLM can be hosted on a different server (with more RAM and GPU capacity) than the "fast" LLM.
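
To make the idea concrete, here is a minimal sketch of what per-role settings could look like. The `FAST_LLM_PROVIDER` / `SMART_LLM_PROVIDER` style variable names are hypothetical and not part of the current config; they just illustrate that each role would carry its own provider, endpoint, and model:

```python
import os

# Hypothetical per-role settings; the variable names are illustrative,
# not the existing gpt-researcher configuration.
FAST_LLM = {
    "provider": os.getenv("FAST_LLM_PROVIDER", "ollama"),
    "base_url": os.getenv("FAST_LLM_BASE_URL", "http://fast-host:11434"),
    "model": os.getenv("FAST_LLM_MODEL", "llama3"),
}

SMART_LLM = {
    "provider": os.getenv("SMART_LLM_PROVIDER", "openai"),
    "base_url": os.getenv("SMART_LLM_BASE_URL", None),
    "model": os.getenv("SMART_LLM_MODEL", "gpt-4o"),
}
```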

assafelovic commented 2 weeks ago

This is definitely a great and frequently requested feature, but it will require some heavy changes. At the moment the agent assumes a single main LLM provider, with the option of sub-models. Happy to see suggested PRs for this or to discuss the design around it.
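
For discussion, a minimal sketch of one possible design: resolve provider, model, and endpoint per role ("fast" / "smart") instead of assuming one global provider. All names here are illustrative, not the existing gpt-researcher API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMSettings:
    provider: str                 # e.g. "openai", "ollama", "groq"
    model: str
    base_url: Optional[str] = None

# Hypothetical per-role registry; in practice this would be loaded from config/env.
ROLE_SETTINGS = {
    "fast": LLMSettings(provider="ollama", model="llama3",
                        base_url="http://fast-host:11434"),
    "smart": LLMSettings(provider="openai", model="gpt-4o"),
}

def get_llm_settings(role: str) -> LLMSettings:
    """Look up the provider/model for a role rather than using one global provider."""
    return ROLE_SETTINGS[role]
```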

Speedway1 commented 2 weeks ago

OK brilliant. I will give it some thought and come back to you on this. Thank you for creating such a brilliant tool. It's absolutely fantastic.