archvalmiki opened this issue 1 year ago
You could try editing the plugin's source to remove the places where the max_tokens
parameter is set. The parameter is no longer mandatory, so if you don't need to cap the model's output,
omitting max_tokens should resolve this problem.
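Since the plugin's internals aren't shown here, a minimal sketch of the idea, assuming an OpenAI-style chat completions call (the plugin's actual function and model names may differ): only include max_tokens in the request when it is explicitly set, so the API falls back to its default of no hard output cap.

```python
from openai import OpenAI

client = OpenAI()

def chat(prompt: str, max_tokens: int | None = None) -> str:
    # Build the request kwargs so max_tokens is only sent when requested;
    # omitting it means the API applies no explicit output limit.
    kwargs = {
        "model": "gpt-4o",  # placeholder; substitute the plugin's configured model
        "messages": [{"role": "user", "content": prompt}],
    }
    if max_tokens is not None:
        kwargs["max_tokens"] = max_tokens
    response = client.chat.completions.create(**kwargs)
    return response.choices[0].message.content
```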
Currently, specifying that in the model configuration doesn't work. Please allow this option to be used in conjunction with max_tokens set to 32,768.
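With a conditional parameter like the sketch above, the 32,768 cap would just be passed explicitly (a hypothetical call, since the plugin's setting name isn't shown here):

```python
# Explicitly cap output at 32,768 tokens (the chosen model must support that limit).
answer = chat("Summarize the document.", max_tokens=32768)
```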