To shed some light: without specifying in the prompt that the reply should be in JSON, the model will sometimes output whitespace indefinitely.
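A minimal sketch of the pattern that avoids this, assuming the /api/generate endpoint (model and prompt are placeholders): when setting `"format": "json"`, also ask for JSON in the prompt itself.

```python
import requests

# Illustrative only: when "format": "json" is set, also instruct the model
# in the prompt to reply in JSON; otherwise it may emit whitespace forever.
payload = {
    "model": "llama2:7b",  # placeholder model
    "prompt": "Why is the sky blue? Reply in JSON.",
    "format": "json",
    "stream": False,
}
r = requests.post("http://localhost:11434/api/generate", json=payload)
print(r.json()["response"])
```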
I'm hitting this bug too. Repro below; it hangs after about 20 requests (ollama version 0.1.20 on Linux with a GPU, as well as on a Mac M2):
```python
import requests

def query(session):
    url = "http://localhost:11434/api/generate"
    data = {
        "model": "llama2:7b",
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"temperature": 0.8},
    }
    with session.post(url, json=data) as response:  # Hangs about every 20 requests
        if response.ok:
            return response.text
        else:
            print(response)
            return None

def main():
    total = 0
    errors = 0
    with requests.Session() as session:
        for _ in range(100):
            total += 1
            r = query(session)
            if r is None:
                errors += 1
    success_rate = 100 * ((total - errors) / total)
    print(f"{total=} {errors=} {success_rate=:.2f}")

if __name__ == "__main__":
    main()
```
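If a request does hang, a client-side timeout at least keeps the loop going; a sketch using requests' standard `timeout` parameter (the 60-second value is an arbitrary assumption):

```python
import requests

def query_with_timeout(session):
    url = "http://localhost:11434/api/generate"
    data = {
        "model": "llama2:7b",
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"temperature": 0.8},
    }
    try:
        # Give up instead of hanging forever; 60 s is an arbitrary choice.
        with session.post(url, json=data, timeout=60) as response:
            return response.text if response.ok else None
    except requests.exceptions.Timeout:
        return None
```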
I don't see the `"format": "json"` parameter in your example. Without it, this has been running smoothly for me for about 20 hours and around 10k requests, and everything's working fine.

ollama version 0.1.17, Ubuntu 22.04.
I deserialize the response with `json.loads` afterwards, and I request JSON output in the prompt itself instead of via the `format` parameter.
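A minimal sketch of that workaround, assuming the /api/generate endpoint: leave the `format` parameter out, ask for JSON in the prompt, and parse the reply with `json.loads`.

```python
import json
import requests

payload = {
    "model": "mistral",  # placeholder model
    "prompt": "Return the capital of France as JSON with keys city, lat and lon.",
    "stream": False,     # note: no "format": "json" here
}
r = requests.post("http://localhost:11434/api/generate", json=payload)
data = json.loads(r.json()["response"])  # raises ValueError if the model strays from JSON
print(data)
```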
I'm also having this issue with mistral, Ollama, JSON mode, and my M1 32 GB Ventura 13.6 MacBook. I've been working on a summarization script for a few days; I had the code working and was only exiting/rerunning to tweak the prompt to try to improve mistral's output. After one of the exits, I can no longer get mistral to reliably output JSON at all; it hangs 99% of the time.

Test script from a tutorial I followed when I was trying to wrap my head around the JSON support:
```python
import requests
import json

country = "france"

schema = {
    "city": {
        "type": "string",
        "description": "Name of the city"
    },
    "lat": {
        "type": "float",
        "description": "Decimal Latitude of the city"
    },
    "lon": {
        "type": "float",
        "description": "Decimal Longitude of the city"
    }
}

payload = {
    "model": "mistral",
    "messages": [
        {"role": "system", "content": f"You are a helpful AI assistant. The user will enter a country name and the assistant will return the decimal latitude and decimal longitude of the capital of the country. Output in JSON using the schema defined here: {schema}."},
        {"role": "user", "content": "japan"},
        {"role": "assistant", "content": "{\"city\": \"Tokyo\", \"lat\": 35.6748, \"lon\": 139.7624}"},
        {"role": "user", "content": country},
    ],
    "format": "json",
    "stream": False
}

response = requests.post("http://localhost:11434/api/chat", json=payload)
```
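For completeness, the non-streaming /api/chat reply carries the model's text under `message.content`, so the result can be inspected with something like:

```python
reply = response.json()["message"]["content"]
print(json.loads(reply))  # expected shape: {"city": "Paris", "lat": ..., "lon": ...}
```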
Changing the model to llama2, dolphin-mixtral, etc. works. Removing the `"format": "json"` line works with mistral. And mistral worked with this test code up until yesterday; I'd been testing various prompts with it for a few hours.
Now that it doesn't work, I can no longer get it back to working. It's like it never worked. I have tried:
- quitting Ollama from the task bar
- restarting the computer
- pip uninstalling/reinstalling the Python API
- trying this script in a different conda env from the one I was working in
- deleting all modelfiles that use mistral and redownloading it
- deleting Ollama and reinstalling it
Really weird.

Edit: after deleting and reinstalling everything at once (previously I had only deleted mistral OR Ollama), I think I am good to go again.
Any insights into what the workaround may be? This seems like a critical issue, with `"format": "json"` causing Ollama to hang entirely. Initially I thought it was the switching of models in my code, but the `format` option is the culprit: running without it responds quickly as expected (even when switching models). This bug is a serious blocker. Why does it happen, and what's a potential workaround until it's fixed? It's been bugging me for weeks now.
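To isolate it, a rough sketch of that A/B check (hypothetical helper; the same payload with only the `format` field toggled, and a 120-second timeout so a hang shows up as an exception instead of blocking forever):

```python
import time
import requests

def timed_request(use_format_json):
    payload = {
        "model": "mistral",  # placeholder model
        "prompt": "Say hello in JSON.",
        "stream": False,
    }
    if use_format_json:
        payload["format"] = "json"
    start = time.monotonic()
    requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
    return time.monotonic() - start

print("without format:", timed_request(False))  # responds quickly
print("with format:   ", timed_request(True))   # may raise requests.exceptions.Timeout
```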
It's been 2 months now, and everything else seems to get fixed except this bug. It currently renders the tutorial below unusable, because it strictly requires `format: json` in the scoring of retrieved documents:

https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_crag_mistral.ipynb

The tutorial linked above is a realistic repro of the issue.
cc @jmorganca
Is `format: json` on by default? Because using LangChain with ChatOllama also hangs even without the `format: json` option.
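For reference, JSON mode appears to be opt-in rather than on by default in LangChain too; a sketch assuming the `langchain_community` ChatOllama wrapper:

```python
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="mistral")                      # no JSON mode by default
llm_json = ChatOllama(model="mistral", format="json")  # JSON mode is explicit opt-in
print(llm_json.invoke("List three colors as JSON.").content)
```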
Interestingly, as per my comment in the related issue #2905, it works the first time but hangs on the second attempt. That seems odd?
@marklysze Yes, once in a while it works for me too, but it fails far too often, and it's random.
When explicitly adding `"format": "json"` to an API request, the request never seems to run to completion. In the logs I can see that the model is loaded, but apart from CPU usage pegged at the configured maximum, nothing happens until I abort the request.

This hangs:
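A minimal sketch of the failing shape, assuming the /api/generate endpoint with a placeholder model and prompt:

```python
import requests

payload = {
    "model": "mistral",  # placeholder
    "prompt": "Why is the sky blue? Reply in JSON.",
    "format": "json",    # with this set, the request never completes
    "stream": False,
}
requests.post("http://localhost:11434/api/generate", json=payload)  # hangs
```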
This works just fine:
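The same sketch without the `format` field (again under the same assumptions):

```python
import requests

payload = {
    "model": "mistral",  # placeholder
    "prompt": "Why is the sky blue?",
    "stream": False,
}
r = requests.post("http://localhost:11434/api/generate", json=payload)
print(r.json()["response"])
```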
The weird thing is, I did get some responses occasionally with `"format": "json"` present, but this example consistently fails.

I use the official Docker container (using rootless Podman), CPU only. Tested with 0.1.17, 0.1.18, and 0.1.19, on two different machines, one Intel, one AMD, both Kubuntu 23.10, with the same results.