aorumbayev / autogpt4all

🛠️ User-friendly bash script for setting up and configuring your LocalAI server with the GPT4All for free! 💸
https://aorumbayev.github.io/autogpt4all/
MIT License
452 stars 67 forks

Shouldn’t the token limit be reduced #4

Closed · watcher60 closed this issue 1 year ago

watcher60 commented 1 year ago

A query rather than an issue, more a suggestion for the .env file: should the fast token limit of 4000 be reduced to 2048 when using the GPT4All weights, and should Auto-GPT then be started with the gpt-3.5 flag? Thanks
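For context, the change being suggested would look roughly like the sketch below. This assumes Auto-GPT's stock `.env.template` variable names (`FAST_TOKEN_LIMIT`) and its `--gpt3only` flag; the exact names may differ depending on the Auto-GPT version the script pulls.

```shell
# .env — suggested change (sketch, not the repo's actual template):
# lower the fast LLM token limit from 4000 to match GPT4All's smaller context window
FAST_TOKEN_LIMIT=2048

# then start Auto-GPT restricted to the gpt-3.5 model
# (flag name assumed from upstream Auto-GPT; verify against your version)
python -m autogpt --gpt3only
```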

aorumbayev commented 1 year ago

If you find an optimal token limit for the default model that's pulled by the script, please submit a PR to replace the current .env.template. (And I think the flag isn't needed; gpt-3.5 is invoked by default.)