mckaywrigley / chatbot-ui

Local model output is terrible #1271

Closed: PrakrutR closed this issue 7 months ago

PrakrutR commented 8 months ago

I am using Ollama to run local models. The output through chatbot-ui is just horrible, but the output through Ollama directly is great. Example below:

Both have the same prompt "Why is the sky blue?"

ChatbotUI (screenshot of garbled output)

Vs Ollama API

The sky appears blue because of a phenomenon called Rayleigh scattering. When sunlight enters Earth's atmosphere, it encounters tiny molecules of gases
such as nitrogen and oxygen. These molecules scatter the light in all directions, but they scatter shorter (blue) wavelengths more than longer (red)
wavelengths. This is known as Rayleigh scattering.

As a result of this scattering, the blue light is dispersed throughout the atmosphere, giving the sky its blue color. The red light, on the other hand,
is able to travel longer distances without being scattered, which is why the sun appears red at sunset.

Other factors can also affect the appearance of the sky, such as the presence of aerosols (small particles in the air) or water vapor, which can
scatter light in different ways and change the apparent color of the sky. However, Rayleigh scattering is the main cause of the blue color of the sky
that we see.
Bortus-AI commented 8 months ago

Are you running the latest ollama?

Bortus-AI commented 8 months ago

Running the latest chatbot-ui and Ollama, I get: (screenshot)

PrakrutR commented 8 months ago

"Are you running the latest ollama?"

Yes I am, AFAIK.

@Bortus-AI I think there's something wrong with my settings; I'm just trying to figure out what has gone wrong.

Bortus-AI commented 8 months ago

I would start by updating the Ollama server and then testing again. If it's still an issue, also update the model and share the settings you have set.
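
For anyone unsure which Ollama build chatbot-ui is actually talking to, the server's /api/version endpoint (default port 11434) reports it. This is just a convenience sketch, not part of chatbot-ui:

```typescript
// Query the local Ollama server for its version (GET /api/version).
// Assumes the default host/port; adjust if your setup differs.
const res = await fetch("http://localhost:11434/api/version");
console.log(await res.json()); // e.g. { "version": "0.1.x" }
```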

mckaywrigley commented 8 months ago

I've seen this happen to some people running older versions of Ollama.

I haven't been able to replicate the issue with its latest release.

If anyone can confirm they are still experiencing this issue on Ollama's latest release, please chime in.

This is obviously a core issue we want to resolve ASAP.

PrakrutR commented 8 months ago

Updating did not fix the issue, but using a local build did. The instructions for a local build are here: https://github.com/ollama/ollama?tab=readme-ov-file#building

zacharytamas commented 7 months ago

I am seeing this as well with the latest versions of chatbot-ui and Ollama. I will do some more investigation, though.

Update: I was seeing this with yesterday's version of Ollama but a new patch version was released earlier today that improves the situation.

I am seeing pretty consistently, though, that the first few characters of responses from Ollama end up missing in chatbot-ui. So I'll get things like "d be glad to help you with that" or "re welcome!". Considering both of those split at the apostrophe in a contraction, maybe it has something to do with single quote marks. It's odd.

zacharytamas commented 7 months ago

Here's another example, with a question my 5-year-old asked:

[screenshot of the garbled response]

I suspect it has something to do with how chatbot-ui is assembling the streamed results from Ollama, because the issue is that chunks seem to be missing. Note how "human nostril" ended up as "humril".

zacharytamas commented 7 months ago

I have opened #1378, which I believe resolves this problem. Ollama's streaming response can sometimes send a chunk containing more than one JSON object; when it does, the objects are newline-delimited. The prior chatbot-ui code attempted to parse the whole chunk as JSON, and for a chunk with multiple JSON objects that string is not actually valid JSON. A try/catch swallowed the error, but all of the tokens in that chunk were lost, leaving the gaps in response messages visible in both my and @PrakrutR's screenshots.
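
For illustration, here is a minimal TypeScript sketch of that kind of newline-delimited parsing, assuming chunks arrive as decoded strings; the function name parseOllamaChunk and the field fallbacks are my own, not the actual code in #1378:

```typescript
// Illustrative sketch only: split each streamed chunk on newlines and parse
// every JSON object individually, instead of treating the whole chunk as a
// single JSON document.
function parseOllamaChunk(chunk: string): string {
  let text = "";
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed) continue; // skip blank lines between objects
    try {
      const obj = JSON.parse(trimmed);
      // Ollama's /api/chat stream carries the token under message.content,
      // while /api/generate uses `response`; handle both defensively.
      text += obj.message?.content ?? obj.response ?? "";
    } catch {
      // A JSON object split across two chunks would land here. A robust
      // implementation should buffer the partial line and retry with the
      // next chunk rather than drop it, since dropping tokens is exactly
      // what produced the gaps described above.
    }
  }
  return text;
}
```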

My theory for why this was intermittent and seemed to improve with Ollama upgrades is that the upgrades were partly a red herring: depending on the compute power of the machine and the model being used for inference, it is unpredictable whether multiple tokens get generated within Ollama's buffer window and therefore end up in the same chunk.

mckaywrigley commented 7 months ago

I tried chatting for about 15 minutes on different models with no issues, so this looks great.

If it pops back up we can revisit - great work!