Closed: Bakeerov closed this issue 12 months ago.
Are you gonna make a PR?
I made a small change so that `chat_with_ai` accepts the model as an argument; I hope that was the subject of the issue. However, it is still hardcoded in a sense, because in this part:
with Spinner("Thinking... "):
assistant_reply = chat_with_ai(
self,
self.system_prompt,
self.triggering_prompt,
self.full_message_history,
self.memory,
cfg.fast_token_limit,
cfg.fast_llm_model
)
...
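For reference, the change amounts to something like the sketch below. This is my paraphrase, not the exact diff; the parameter name `model` and the other parameter names are assumptions based on the call site above.

```python
def chat_with_ai(
    agent,
    system_prompt: str,
    triggering_prompt: str,
    full_message_history: list,
    permanent_memory,
    token_limit: int,
    model: str,
):
    # Body unchanged; the point is that `model` is now supplied by the
    # caller instead of being read from cfg.fast_llm_model inside here.
    ...
```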
We hardcode `cfg.fast_llm_model` as the model argument, while `cfg.fast_llm_model` itself is defined in the config and can be switched to `gpt-4` with the appropriate runtime argument, which happens in `configurator.py`:
```python
if gpt3only:
    logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED")
    CFG.set_smart_llm_model(CFG.fast_llm_model)
if gpt4only:
    logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED")
    CFG.set_fast_llm_model(CFG.smart_llm_model)
```
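In other words, after one of these flags both config slots can point at the same model name. A rough mental model (simplified; the real `Config` class has more going on than this):

```python
class Config:
    def __init__(self):
        # Defaults as shipped.
        self.fast_llm_model = "gpt-3.5-turbo"
        self.smart_llm_model = "gpt-4"

    def set_fast_llm_model(self, value: str) -> None:
        self.fast_llm_model = value

    def set_smart_llm_model(self, value: str) -> None:
        self.smart_llm_model = value


cfg = Config()
cfg.set_fast_llm_model(cfg.smart_llm_model)  # what --gpt4only does
print(cfg.fast_llm_model)  # "gpt-4": "fast" no longer means gpt-3.5
```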
I feel this is a bit counterintuitive. It would probably make more sense to keep the fast and smart models fixed to gpt-3.5 and gpt-4 respectively, but introduce new config options such as "main_model" and "agent_model", use those in the code, and change them based on the runtime arguments (see the sketch below). I can try to work on that if necessary.
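To make the proposal concrete, here is a minimal sketch. The option names `main_model` and `agent_model` come from the suggestion above; everything else (the helper name, the exact defaults) is hypothetical:

```python
class Config:
    def __init__(self):
        # The fast/smart names stay fixed to what they actually mean.
        self.fast_llm_model = "gpt-3.5-turbo"
        self.smart_llm_model = "gpt-4"
        # Role-based options that call sites would reference instead.
        self.main_model = self.smart_llm_model
        self.agent_model = self.fast_llm_model


def apply_model_flags(cfg: Config, gpt3only: bool, gpt4only: bool) -> None:
    # The runtime flags remap the roles; fast/smart keep their meaning.
    if gpt3only:
        cfg.main_model = cfg.fast_llm_model
        cfg.agent_model = cfg.fast_llm_model
    if gpt4only:
        cfg.main_model = cfg.smart_llm_model
        cfg.agent_model = cfg.smart_llm_model
```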
Is there already a PR for this? I've just read through the codebase and would like to contribute. What is your current progress on this feature, @r1p71d3? If you're already on it, I'd rather not do the work twice :-) Maybe I can help, though.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
https://github.com/Significant-Gravitas/Auto-GPT/blob/b84de4f7f89b95f176ebd0b390c60198acfa8bf9/autogpt/chat.py#L78