simonw / llm-gpt4all

Plugin for LLM adding support for the GPT4All collection of models

Add a bunch of options #3

Closed: simonw closed this issue 10 months ago

simonw commented 1 year ago

Without options, we are stuck with the defaults documented at https://docs.gpt4all.io/gpt4all_python.html#generation-parameters:

def generate(
    prompt,
    max_tokens=200,
    temp=0.7,
    top_k=40,
    top_p=0.1,
    repeat_penalty=1.18,
    repeat_last_n=64,
    n_batch=8,
    n_predict=None,
    streaming=False,
):

That max_tokens=200 is particularly limiting.

Note that n_predict is a duplicate of max_tokens (kept for backwards compatibility), so I can ignore that one.
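
For reference, here is a minimal sketch of how those parameters could be surfaced as plugin options using LLM's pydantic-backed Options class. The class name, descriptions, and the choice to default everything to None (so gpt4all's own defaults still apply unless overridden) are my assumptions, not necessarily how the plugin ends up implementing it:

from typing import Optional

import llm
from pydantic import Field


class Gpt4AllOptions(llm.Options):
    # Field names mirror gpt4all.GPT4All.generate(); defaults are None so
    # that gpt4all's built-in defaults apply unless the user overrides them.
    max_tokens: Optional[int] = Field(
        description="Maximum number of tokens to generate", default=None
    )
    temp: Optional[float] = Field(
        description="Sampling temperature", default=None
    )
    top_k: Optional[int] = Field(
        description="Only sample from the top K most likely tokens", default=None
    )
    top_p: Optional[float] = Field(
        description="Nucleus sampling probability threshold", default=None
    )
    repeat_penalty: Optional[float] = Field(
        description="Penalty applied to repeated tokens", default=None
    )
    repeat_last_n: Optional[int] = Field(
        description="How far back to look when applying the repeat penalty", default=None
    )
    n_batch: Optional[int] = Field(
        description="Number of prompt tokens processed in parallel", default=None
    )

Options defined this way could then be set from the CLI with something like llm -m orca-mini-3b -o max_tokens 512 'my prompt' (the model ID here is just illustrative).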