Every LLM completion is passed a set of parameters in the CompletionOptions object.
We currently support common settings like max_tokens, temperature, top_p, top_k, frequency_penalty, and presence_penalty, but are missing others, such as tail-free sampling and certain mirostat parameters.
Some model providers, like llama.cpp, will accept these, so supporting them is only a matter of allowing the parameter to be passed in.
First, update CompletionOptions to have the parameter. Many model providers (all are in the core/llm/llms folder) have a function called _convertArgs that turns the CompletionOptions object into the request body expected by their API. For the providers that support this parameter, make sure that it gets passed in the request. For other providers, check that this extraneous parameter doesn't get sent in the request.
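As a rough sketch of what this could look like (the field and function names below are assumptions for illustration, not the actual code in core/llm/llms), a llama.cpp-style provider would forward the new parameters when they are set, while a provider whose API does not accept them would leave them out of the request body:

```typescript
// Hypothetical sketch — CompletionOptions fields and converter names are
// assumptions; check the real interfaces in core/llm/llms before copying.
interface CompletionOptions {
  maxTokens?: number;
  temperature?: number;
  // New sampling parameters to add (assumed names):
  mirostat?: number; // 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
  tfsZ?: number;     // tail-free sampling parameter
}

// llama.cpp-style provider: forward the new parameters only when present.
function convertArgsLlamaCpp(options: CompletionOptions): Record<string, unknown> {
  const args: Record<string, unknown> = {
    n_predict: options.maxTokens,
    temperature: options.temperature,
  };
  if (options.mirostat !== undefined) args.mirostat = options.mirostat;
  if (options.tfsZ !== undefined) args.tfs_z = options.tfsZ;
  return args;
}

// OpenAI-style provider: the unsupported parameters are intentionally
// never copied into the request body.
function convertArgsOpenAI(options: CompletionOptions): Record<string, unknown> {
  return {
    max_tokens: options.maxTokens,
    temperature: options.temperature,
  };
}
```

The key pattern is that each provider's converter decides per parameter whether to include it, so adding a new CompletionOptions field is safe by default: providers that don't explicitly forward it simply won't send it.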