Renset / macai

Swift-powered native macOS client for Ollama, ChatGPT, and compatible API backends
https://renset.gumroad.com/l/macai
MIT License

[QUESTION] API Connection Errors #32

Closed exdysa closed 1 hour ago

exdysa commented 1 week ago

16-inch MacBook Pro (2024), macOS Sequoia 15.1.1

I'm running an instance of qwen2.5 that I use in other UIs, but in macai I only get errors:

API Connection test failed
The operation couldn’t be completed. (macai.APIError error 1.)

And if I try a request to the model:

Error getting message from server. Try again?

Loading localhost:11434 in a browser returns:

Ollama is running

I'm unsure what's happening. The instructions recommend a specific model; does this mean your implementation only supports llama3?

Renset commented 1 week ago

Hey @exdysa, what macai version are you using?

I haven't tested models other than llama, but nearly all models supported by Ollama should work.

exdysa commented 1 week ago

Version 2.0.0 (2.0.0-alpha.2), zip sha256 45b2100ddf95cb5a54013d136ccf74ff1f202e09a0bbd6d72e0b52f0a2b5be7c. The specific model is https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/tree/main, though I have a local file.

Renset commented 1 week ago

I've just tested this model with the latest build (this will be alpha3) and it worked 🤔

Anyway, in the upcoming build I've improved error handling: when testing the connection from Settings, you should see error details instead of the unhelpful "API Error" text. I'll release this build within 2-3 hours, so you can give it a try and we'll see what happens.

Renset commented 1 week ago

@exdysa I've released alpha-3, please try with that build. At the very least it should show a clear error message when testing the API connection. Please let me know whether it works for you.

exdysa commented 1 week ago

attempt 1: Model not found: {"error":"model \"Qwen2.5-Coder\" not found, try pulling it first"}
attempt 2: Model not found: {"error":"model \"qwen2.5-coder\" not found, try pulling it first"}
attempt 3: Model not found: {"error":"model \"Qwen2.5-Coder-32-Instruct\" not found, try pulling it first"}
attempt 4: Model not found: {"error":"model \"qwen2.5-coder-32-ins-q5_M\" not found, try pulling it first"}
attempt 5: Model not found: {"error":"model \"qwen2.5-coder-32-ins-q5_M:latest\" not found, try pulling it first"}
attempt 6: Model not found: {"error":"model \"qwen2.5\" not found, try pulling it first"}

I tried more times than this, but I'll spare you.

It expects a repo clone somewhere? ¯\\_(ツ)_/¯

Renset commented 1 week ago

@exdysa ah, I think I know what the problem is. First, you have to download the model in Ollama using the macOS terminal:

ollama pull qwen2.5

Once the model is downloaded, it will be accessible through the Ollama API and thus will work inside macai.
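For reference, the exact names Ollama accepts are whatever its `/api/tags` endpoint reports (the same list `ollama list` prints); macai's model field has to match one of those verbatim. A minimal sketch of reading that list — the sample payload below is illustrative, fetch the real one with `curl http://localhost:11434/api/tags`:

```python
import json

def installed_models(tags_payload: str) -> list[str]:
    """Extract the usable model names from an Ollama /api/tags response."""
    return [m["name"] for m in json.loads(tags_payload)["models"]]

# Trimmed example of the response shape; real entries also carry size, digest, etc.
sample = '{"models": [{"name": "qwen2.5:latest"}, {"name": "llama3:latest"}]}'
print(installed_models(sample))  # ['qwen2.5:latest', 'llama3:latest']
```

If the name you type into macai isn't in this list, Ollama returns exactly the "model not found, try pulling it first" error shown above.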

exdysa commented 1 hour ago

I'm familiar with `ollama create` and the `-f Modelfile` flag from other contexts. Since I already have a copy of this model in my local llama folder, the instructions didn't make clear what I was supposed to do, given my expectations.
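For anyone landing here with the same setup: registering an existing local GGUF file with Ollama goes through `ollama create`, as hinted above. A hedged sketch — the file path and model name are hypothetical placeholders for your own:

```
# Modelfile — FROM may point at a local GGUF file (path is hypothetical)
FROM ./Qwen2.5-Coder-32B-Instruct-Q5_K_M.gguf
```

Then create and run it under a name of your choosing:

```
ollama create qwen2.5-coder-local -f Modelfile
ollama run qwen2.5-coder-local
```

After that, `qwen2.5-coder-local` should appear in `ollama list` and be usable from macai like any pulled model.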