Equim-chan opened this issue 1 year ago
My plan is to make the context length a setting the user can customize. With GPT-4 in particular, maxing out the 8K (or 32K) context for every message is probably not needed most of the time.
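A minimal sketch of what "not maxing out the context" could look like in practice: trim older messages so the prompt fits a user-configured token budget. The names and the 4-characters-per-token heuristic below are illustrative assumptions, not the app's actual implementation.

```ts
interface ChatMessage {
    role: 'system' | 'user' | 'assistant';
    content: string;
}

// Rough token estimate (assumption: ~4 chars per token); a real
// implementation would use a proper tokenizer such as tiktoken.
function estimateTokens(text: string): number {
    return Math.ceil(text.length / 4);
}

// Keep the most recent messages that fit within maxPromptTokens,
// always preserving a leading system message if present.
function trimToBudget(messages: ChatMessage[], maxPromptTokens: number): ChatMessage[] {
    const system = messages[0]?.role === 'system' ? [messages[0]] : [];
    const rest = messages.slice(system.length);
    const kept: ChatMessage[] = [];
    let used = system.reduce((n, m) => n + estimateTokens(m.content), 0);
    for (let i = rest.length - 1; i >= 0; i--) {
        const cost = estimateTokens(rest[i].content);
        if (used + cost > maxPromptTokens) break;
        kept.unshift(rest[i]);
        used += cost;
    }
    return [...system, ...kept];
}
```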
Yes, having it as a setting would be great.
That would make it possible to prompt close to the maximum number of tokens, e.g. when you don't expect a long conversation or long responses, such as for summaries.
Max prompt tokens is now configurable under Settings -> Chat.
https://github.com/cogentapps/chat-with-gpt/blob/1998a661f805669e6c81ea6c31eadfe9b7cbf75b/app/src/openai.ts#L90
I think it should be 4096 for gpt-3.5-turbo and 8192 for gpt-4?
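A hedged sketch of what a per-model limit could look like, clamping the configured setting to the selected model's context window. The map and function names are hypothetical; the limits are the ones discussed in this thread, and newer model variants may differ.

```ts
// Context window sizes for the models mentioned in this thread.
const MODEL_CONTEXT_LENGTH: Record<string, number> = {
    'gpt-3.5-turbo': 4096,
    'gpt-4': 8192,
    'gpt-4-32k': 32768,
};

// Clamp the user's max-prompt-tokens setting to the model's limit,
// falling back to a conservative default for unknown models.
function effectiveMaxPromptTokens(model: string, userSetting: number): number {
    const limit = MODEL_CONTEXT_LENGTH[model] ?? 4096;
    return Math.min(userSetting, limit);
}
```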