ollama / ollama-python

Ollama Python library
https://ollama.com
MIT License

System Message causing no answer from Assistant #72

Open pedrognsmartins opened 7 months ago

pedrognsmartins commented 7 months ago

Hello all,

I'm trying to use the system message as described below. Every time I use it, I don't get any answer from the LLM.

    messages = [
        {'role': 'system', 'content': f'"{self.role}"'},
        {'role': 'user', 'content': f'"{message}"'},
    ]
    return await client.chat(model=model, messages=messages,)

I tried to find whether a similar issue had already been reported, but I didn't find one. Can someone help me with this?

Thanks

connor-makowski commented 7 months ago

For general use as shown in most of the examples, you should have a local Ollama server running before you can continue.

To do this, install Ollama and start the server locally (for example, with ollama serve).
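
As a quick sanity check (assuming the server is running on its default port, 11434), the call below asks the local server for the models it has pulled; a connection error here means the server is not reachable:

import ollama

# ollama.list() queries the local server for the models it has pulled.
# If the server is not running, this raises a connection error instead.
print(ollama.list())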

connor-makowski commented 7 months ago

This wording is part of the PR: https://github.com/ollama/ollama-python/pull/64

connor-makowski commented 7 months ago

It is also worth noting that you are using await. Are you using an async client?

For a non-async client, you do not need await:

import ollama
response = ollama.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])

For an async client, you should use await:

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())

pedrognsmartins commented 7 months ago

@connor-makowski Thanks for your feedback. I tried both solutions (sync and async clients). The problem is that when I include a message with role 'system', the LLM does not give an answer.
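
A minimal sketch of the pattern described above, assuming the llama2 model and the synchronous client (the system and user contents here are placeholders):

import ollama

# The system message sets the assistant's behaviour; the user message carries the question.
response = ollama.chat(model='llama2', messages=[
  {'role': 'system', 'content': 'You are a helpful assistant.'},
  {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])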

mxyng commented 7 months ago

What model are you using?

Your snippet doesn't stream. Is it possible the LLM is responding but hasn't finished yet? In non-streaming mode, Ollama waits until it has the full response before returning to the caller. This can look like a non-response if it is also generating tokens at a slow rate (due to hardware limitations).
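
A minimal sketch of a streaming call, assuming llama2; with stream=True the client yields chunks as they are generated, so partial output appears immediately instead of after the whole response is done:

import ollama

# With stream=True the chat call returns an iterator of chunks rather than
# a single response, so tokens can be printed as soon as they arrive.
stream = ollama.chat(
  model='llama2',
  messages=[
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Why is the sky blue?'},
  ],
  stream=True,
)

for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)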