Open jfmainville opened 3 months ago
I agree. I was frequently very annoyed to see my chat completions abruptly stop until I figured out that I just needed to increase `max_tokens`.
My very first interaction was cut off, and it took me a while to understand why.
Currently, the `max_tokens` value is set to `300` in the default configuration file (`config.lua`), which creates a high risk of answers being cut off when interacting with a ChatGPT model. In that regard, I was wondering if we could increase the `max_tokens` value to `4096` to reduce this risk. Also, as the default model is currently `gpt-3.5-turbo`, which supports up to `4096` tokens by default (reference), this would make the process more convenient for new users. The same change could also be applied to the other available actions, such as `code_readability_analysis` and `code_completion`. We could standardize the definition of the `max_tokens` attribute across all available actions and models.
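To illustrate the idea, here is a rough sketch of what standardized defaults in `config.lua` could look like. The field names below (`openai_params`, `actions`) are illustrative assumptions, not necessarily the plugin's actual schema:

```lua
-- Hypothetical defaults; the actual option names depend on the plugin's
-- real config schema in config.lua.
local defaults = {
  openai_params = {
    model = "gpt-3.5-turbo",
    max_tokens = 4096, -- raised from 300 to reduce truncated answers
  },
  -- Per-action overrides, standardized to the same limit.
  actions = {
    code_readability_analysis = { max_tokens = 4096 },
    code_completion = { max_tokens = 4096 },
  },
}
return defaults
```

With a single shared default, every action would inherit the same `max_tokens` unless a user deliberately overrides it.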