ollama / ollama-python

Ollama Python library
https://ollama.com
MIT License

How to set temperature and output token size in chat mode? #76

Closed. jojogh closed this issue 4 months ago.

jojogh commented 4 months ago

Are there any examples showing how to set the temperature and output token size in chat mode?

eliranwong commented 4 months ago

You may read this example:

examples/fill-in-middle/main.py

jojogh commented 4 months ago

Thanks, but I have already seen this example; it uses generate mode, not chat mode.

eliranwong commented 4 months ago

For example, I use the following for one of my projects:

import ollama
from ollama import Options

completion = ollama.chat(
    # keep_alive=0,
    model=config.ollamaDefaultModel,
    messages=[
        *ongoingMessages,
        {
            "role": "user",
            "content": prompt,
        },
    ],
    format="json",
    stream=True,
    options=Options(
        temperature=0.0,   # sampling temperature (0.0 = most deterministic)
        num_ctx=8192,      # context window size in tokens
        num_predict=-1,    # max tokens to generate (-1 = no limit)
    ),
)
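
Since stream=True makes ollama.chat return an iterator of partial responses, here is a minimal, self-contained sketch of how the stream can be consumed. This is an illustrative example rather than code from the thread: "llama3" is only a placeholder model name, and the options are given as a plain dict, which the library also accepts.

import ollama

# Illustrative sketch: "llama3" is a placeholder; use any model you have pulled locally.
stream = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
    options={
        "temperature": 0.0,   # lower temperature = more deterministic output
        "num_predict": 128,   # cap the number of generated tokens
    },
)

for chunk in stream:
    # each chunk carries an incremental piece of the assistant message
    print(chunk["message"]["content"], end="", flush=True)
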
deepcoder commented 4 months ago

and "keep_alive" as well, thanks!

jojogh commented 4 months ago

Thanks, Wong, I will try it. Many thanks.