Most local LLMs have a context window/token limit of 2048, including the model downloaded in the script. Additionally, I have seen it attempt to use GPT-4 for reasoning, which appears to be the same as this issue: https://github.com/Significant-Gravitas/Auto-GPT/issues/187.
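One way to avoid silently overrunning that 2048-token window is to check the prompt size before sending it. This is only a rough sketch: the 4-characters-per-token ratio and the `CONTEXT_LIMIT`/`reserved_for_reply` values are assumptions for illustration, not Auto-GPT's actual logic; a real check would use the model's own tokenizer.

```python
# Sketch of a context-window guard for a local LLM with a 2048-token limit.
# The chars/4 ratio is a crude heuristic; real counts depend on the tokenizer.

CONTEXT_LIMIT = 2048  # assumed limit, typical for many local models

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, reserved_for_reply: int = 256) -> bool:
    """Return True if the prompt likely fits, leaving room for the reply."""
    return estimate_tokens(prompt) + reserved_for_reply <= CONTEXT_LIMIT

print(fits_context("short prompt"))  # True
print(fits_context("x" * 20000))     # False, well past the limit
```

If the check fails, the prompt (e.g. accumulated memory/history) would need to be truncated or summarized before the request is made.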