Closed: technicianted closed this pull request 10 months ago.
Slightly change the prompt to have the model do a full generation when a larger `max_tokens` is requested.

@michaeltremeer
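For illustration, here is a minimal sketch of the kind of prompt tweak described above. The function name, the hint wording, and the threshold are assumptions for this sketch, not the actual change in this PR:

```python
# Hypothetical sketch: append a full-generation instruction to the prompt
# when a large max_tokens is requested, so the model is more likely to use
# the full token budget instead of stopping early.
FULL_GENERATION_HINT = (
    "Write a complete, detailed response; do not stop early or summarize."
)

def build_prompt(base_prompt: str, max_tokens: int, threshold: int = 500) -> str:
    """Return the prompt, adding a full-generation hint for large max_tokens.

    `threshold` is an assumed cutoff, not a value taken from this PR.
    """
    if max_tokens >= threshold:
        return f"{base_prompt}\n\n{FULL_GENERATION_HINT}"
    return base_prompt
```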
Nice, I've tested this across most models and the results look good enough in testing (10 requests were made for each combination of model/token config).
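A rough reconstruction of that test setup might look like the following. The model names, token configs, and `send_request` helper are all assumptions, not the actual harness used:

```python
# Hypothetical test loop: 10 requests per model/max_tokens combination.
from itertools import product

MODELS = ["gpt-35-turbo", "gpt-4"]      # assumed model list
MAX_TOKENS_CONFIGS = [100, 500, 1000]   # assumed token configs
REQUESTS_PER_COMBO = 10

def send_request(model: str, max_tokens: int) -> None:
    # Stand-in for the real API call; assumed, not from this PR.
    print(f"request: model={model}, max_tokens={max_tokens}")

for model, max_tokens in product(MODELS, MAX_TOKENS_CONFIGS):
    for _ in range(REQUESTS_PER_COMBO):
        send_request(model=model, max_tokens=max_tokens)
```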