More of a query than an issue, really a suggestion for the env file: should the fast token limit of 4000 be reduced to 2048 when using the GPT4All weights, and should Auto-GPT then be started with the gpt-3.5 flag?
Thanks
If you find an optimal token limit for the default model that's pulled by the script, please submit a PR to replace the current .env.template. (And I think the flag is not needed; gpt-3.5 is invoked by default.)
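For reference, the proposed change to .env.template might look something like this (the variable name FAST_TOKEN_LIMIT is taken from the question's "fast token limit" and should be verified against the current template before opening a PR):

```ini
# Reduced from the default 4000 to fit the GPT4All 2048-token context window
FAST_TOKEN_LIMIT=2048
```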