mckaywrigley / chatbot-ui

MIT License

Ollama not working #1072

Closed · Bortus-AI closed this issue 9 months ago

Bortus-AI commented 10 months ago

I tried the latest Ollama commit but still can't get Ollama models to show up in chatbot-ui.

I tried two different setups

  1. A Debian server with Ollama and chatbot-ui 2.0 running locally.
  2. A Vercel deployment, with the env variable set to point to my Ollama server, which accepts remote connections.

The Ollama server is confirmed working when used with unsaged, ollama-webui, and big-AGI.

I am unable to see any requests from chatbot-ui 2.0 reaching Ollama, and nothing shows up in the local server logs or in Vercel.

[screenshot]
wrapss commented 10 months ago

see #1062

Bortus-AI commented 10 months ago

> see #1062

That fixed it! Thanks so much

[screenshot]

Bortus-AI commented 10 months ago

@wrapss Models show up now, but when trying to chat with them I get:

[screenshot]

wrapss commented 10 months ago

> @wrapss Models show up now, but when trying to chat with them I get:

Did you launch Ollama with OLLAMA_ORIGINS=* set?

Bortus-AI commented 10 months ago

> > @wrapss Models show up now, but when trying to chat with them I get:
>
> Did you launch Ollama with OLLAMA_ORIGINS=* set?

Yes, I did. Interestingly enough, it's working with Vercel but not with my local instance. Let me double-check the file changes to make sure I didn't make a typo.

Bortus-AI commented 10 months ago

Well it works with Vercel so at least your fix is working for that. I will redo my local setup after lunch and report back.

Thanks for getting the models to work!

Bortus-AI commented 10 months ago

It seems to be no longer working on Vercel either. Models aren't showing up now.

Bortus-AI commented 10 months ago

I created a whole new Supabase project and Vercel deployment and reinstalled Ollama, but models are still not showing up. I can curl the model list and chat with it via curl from two different networks, so I know it's accepting outside connections. I'm not seeing anything in the Vercel or Ollama logs showing that chatbot-ui has attempted to communicate with it.

wrapss commented 10 months ago

Try opening http://ip:3000/api/localhost/ollama. What's the result? (Ctrl+F5 to clear the cache.)

Bortus-AI commented 10 months ago

> /api/localhost/ollama

When going there I get `{"localModels":[]}`.

If I curl from a remote computer, it is able to list the models, so the Ollama server is reachable, just not via chatbot-ui for some reason. With the latest commit https://github.com/mckaywrigley/chatbot-ui/commit/8da68cece287b99afe1b7da560c5d277bd6738dd, the menu option to select local models is gone now.

```
curl http://xxxxxxxx:11434/api/tags
{"models":[{"name":"codellama:13b-code","modified_at":"2024-01-11T13:52:40.378260827-05:00","size":7365960713,"digest":"bcb66db03ddd31b8ce315a0b504764340e6c2cea914d88350b566859738e1953","details":{"format":"gguf","family":"llama","families":null,"parameter_size":"13B","quantization_level":"Q4_0"}},{"name":"codeup:13b","modified_at":"2023-12-12T16:58:21.598996366-05:00","size":7365835291,"digest":"54289661f7a9c9568ec1dac8901ba56d3fb92a43dfa22fd711385ee56006e4e8","details":{"format":"gguf","family":"llama","families":null,"parameter_size":"13B","quantization_level":"Q4_0"}},{"name":"llama2:latest","modified_at":"2024-01-11T13:48:43.432395863-05:00","size":3826793677,"digest":"78e26419b4469263f75331927a00a0284ef6544c1975b826b15abdaef17bb962","details":{"format":"gguf","family":"llama","families":["llama"],"parameter_size":"7B","quantization_level":"Q4_0"}},{"name":"llava:latest","modified_at":"2023-12-13T16:32:37.432027275-05:00","size":4450242073,"digest":"e4c3eb471fd8247a4afb889408cd559aba91bfbdea87c94ffefb2af9787e6bae","details":{"format":"gguf","family":"llama","families":["llama","clip"],"parameter_size":"7B","quantization_level":"Q4_0"}},{"name":"orca-mini:latest","modified_at":"2024-01-11T13:52:29.446174636-05:00","size":1979947443,"digest":"2dbd9f439647093cf773c325b0b3081a11f1b1426d61dee8b946f8f6555a1755","details":{"format":"gguf","family":"llama","families":null,"parameter_size":"3B","quantization_level":"Q4_0"}},{"name":"phi:latest","modified_at":"2023-12-20T10:55:49.671096285-05:00","size":1602472472,"digest":"e22226989b6c4ea90b51e3368c8cbeb6edaddbf0f922c5182aa6384c7670afe7","details":{"format":"gguf","family":"phi2","families":["phi2"],"parameter_size":"3B","quantization_level":"Q4_0"}}]}
```

wrapss commented 10 months ago

Oh, I just realized you're running in production mode. See https://github.com/mckaywrigley/chatbot-ui/blob/main/app/api/localhost/ollama/route.ts, line 4.

wrapss commented 10 months ago

Try removing the condition `if (process.env.NODE_ENV !== "production") {}`.
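
For context, here is a minimal sketch of the kind of guard being discussed, assuming a Next.js App Router route handler. It is illustrative only and not the actual contents of route.ts; the /api/tags endpoint and the `{"localModels":[]}` response shape are taken from earlier in this thread.

```ts
// Hypothetical sketch of app/api/localhost/ollama/route.ts (not the real file).
// The NODE_ENV guard means the Ollama lookup only runs outside production,
// so a hosted (production) build always answers with an empty list.
export async function GET() {
  if (process.env.NODE_ENV !== "production") {
    // Development only: ask the local Ollama server for its installed models.
    const res = await fetch("http://localhost:11434/api/tags")
    const data = await res.json()
    return Response.json({ localModels: data.models })
  }

  // Production: the guard short-circuits, which matches the
  // {"localModels":[]} response observed above.
  return Response.json({ localModels: [] })
}
```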

wrapss commented 10 months ago

@mckaywrigley is this condition really necessary?

Bortus-AI commented 10 months ago

> Try removing the condition `if (process.env.NODE_ENV !== "production") {}`.

I changed "production" to "test" in that file instead of deleting the condition, and now I can see the models, but I'm not getting a response.

[screenshot]

wrapss commented 10 months ago

Do you see the chat request arriving in the Ollama logs, or do you only see the tags request?

BernieCr commented 10 months ago

The way it is currently implemented, it can't work on a hosted version, even if you remove the (process.env.NODE_ENV !== "production") condition. The problem is that the request to the local Ollama server is proxied through a Next.js route. When this route is called on a hosted instance, it runs on the server and then of course it can't access the Ollama server, which is running locally on the client machine.

I think it is a valid use case to be able to use local Llama models even on a hosted instance. In order to do this, the code from app/api/localhost/ollama/route.ts would simply have to be moved to the client side. (Proxying the request seems unnecessary to me anyway. It adds no security, and for the actual chat the calls to the Ollama server are already made on the client side.)

Edit: Made a PR for this. 😊
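
For illustration, a minimal client-side sketch of that approach, assuming the NEXT_PUBLIC_OLLAMA_URL variable discussed below and the /api/tags endpoint shown in the curl output above; the function name is hypothetical and this is not the code from the PR.

```ts
// Hypothetical client-side helper: because it runs in the browser, it can reach
// an Ollama server that the user's machine can see, which a server-side
// Next.js route on a hosted deployment cannot.
export async function fetchLocalOllamaModels(): Promise<string[]> {
  const baseUrl = process.env.NEXT_PUBLIC_OLLAMA_URL // e.g. http://your-server:11434
  if (!baseUrl) return []

  const res = await fetch(`${baseUrl}/api/tags`)
  if (!res.ok) return []

  // /api/tags responds with { models: [{ name, size, digest, ... }] }
  const data = await res.json()
  return data.models.map((m: { name: string }) => m.name)
}
```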

Bortus-AI commented 10 months ago

@BernieCr That works if I have Ollama running on my local PC, but not on my hosted server. I have a remote dedicated server that runs chatbot-ui and Ollama on the same machine, and I access chatbot-ui from my laptop via its IP or domain. I have tried using the localhost address, the domain, etc. for Ollama, but with this new pull request I can't get the local models option to show up in chatbot-ui anymore. Ollama is accessible from all networks, but chatbot-ui isn't showing it as an option.

[screenshot]

Bortus-AI commented 10 months ago

It would be great to be able to use a hosted version of Ollama and access it. I have a dual-GPU server that I use since it's much more powerful than running Ollama locally on my laptop.

BernieCr commented 10 months ago

@Bortus-AI Is the Ollama instance on your server accessible from your local machine? E.g. if you are running Ollama on the standard port, http://your-server-ip-or-domain:11434/api/tags should return a JSON with your Ollama models. If that works, you should make sure that you set the NEXT_PUBLIC_OLLAMA_URL env variable of your hosted chatbot-ui instance to something like http://your-server-ip-or-domain:11434.

If the models still don't show up in chatbot-ui, you can open the devtools network tab in the browser. When the chatbot-ui app starts, it should make one request to the URL you set in the env variable. If the request doesn't go through, it will likely be some CORS issue, but the devtools error message should provide a clue.
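
If you want to do that check by hand, here is a quick snippet you could paste into the devtools console on the chatbot-ui page (the URL is a placeholder for whatever you set in NEXT_PUBLIC_OLLAMA_URL); a CORS rejection surfaces as a failed fetch, with the detailed reason in the console and network tab.

```ts
// Replace the placeholder URL with your own Ollama server address.
fetch("http://your-server-ip-or-domain:11434/api/tags")
  .then(res => res.json())
  .then(data => console.log("Ollama reachable, models:", data.models.map(m => m.name)))
  // A CORS block typically shows up here as "TypeError: Failed to fetch".
  .catch(err => console.error("Request failed (check CORS / OLLAMA_ORIGINS):", err))
```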

david3xu commented 9 months ago

> @Bortus-AI Is the Ollama instance on your server accessible from your local machine? E.g. if you are running Ollama on the standard port, http://your-server-ip-or-domain:11434/api/tags should return a JSON with your Ollama models. If that works, you should make sure that you set the NEXT_PUBLIC_OLLAMA_URL env variable of your hosted chatbot-ui instance to something like http://your-server-ip-or-domain:11434.
>
> If the models still don't show up in chatbot-ui, you can open the devtools network tab in the browser. When the chatbot-ui app starts, it should make one request to the URL you set in the env variable. If the request doesn't go through, it will likely be some CORS issue, but the devtools error message should provide a clue.

Yes, this is my case. I can get the model information from the remote URL ('NEXT_PUBLIC_OLLAMA_URL=https://xxxxx-11434.auc1.devtunnels.ms/api/tags'), but when I run the app, it gives me the error "Access to fetch at 'https://xxxxx-11434.auc1.devtunnels.ms/api/tags' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled." Do you have any idea how to fix this? Thank you.

david3xu commented 9 months ago

@BernieCr

BernieCr commented 9 months ago

There is an env variable for Ollama to allow CORS; try setting OLLAMA_ORIGINS=* (on the server running Ollama).

Also, make sure not to use Firefox; it always blocked the request regardless of the CORS settings. It works fine in Chrome and Safari though, so I didn't investigate any further.

david3xu commented 9 months ago

@BernieCr Thank you, I fixed it. Actually, it was not a CORS problem: I replaced all occurrences of 'localhost' with my private IPv4 address, and it works. I connect to it remotely through GitHub (VS Code) from a laptop.