guinmoon / LLMFarm

Run llama and other large language models offline on iOS and macOS using the GGML library.
https://llmfarm.site
MIT License
1.06k stars · 64 forks

Always crash at 2nd user input #5

Closed williamchai closed 9 months ago

williamchai commented 10 months ago

iPhone 13 Pro, iOS 16.3.1, Orca mini 3B downloaded from the readme link.

Tried different params, with Metal on and off; it always crashes. Creating a new chat for each input works, but the 2nd input in the same chat will crash.

guinmoon commented 10 months ago

iPhone 13 Pro, iOS 16.3.1, Orca mini 3B downloaded from the readme link.

Tried different params, with Metal on and off; it always crashes. Creating a new chat for each input works, but the 2nd input in the same chat will crash.

Thanks for the report. Could you clarify: do you have the same issue with other models? Does the error occur immediately when you send the message, or after a delay?

williamchai commented 10 months ago

iPhone 13 Pro, iOS 16.3.1, Orca mini 3B downloaded from the readme link. Tried different params, with Metal on and off; it always crashes. Creating a new chat for each input works, but the 2nd input in the same chat will crash.

Thanks for the report. Could you clarify: do you have the same issue with other models? Does the error occur immediately when you send the message, or after a delay?

I also tried vicuna-7b-q4, but it's super slow, not usable (for the 1st reply). The error occurs immediately after I enter my input and click the send icon.

guinmoon commented 10 months ago

iPhone 13 Pro, iOS 16.3.1, Orca mini 3B downloaded from the readme link. Tried different params, with Metal on and off; it always crashes. Creating a new chat for each input works, but the 2nd input in the same chat will crash.

Thanks for the report. Could you clarify: do you have the same issue with other models? Does the error occur immediately when you send the message, or after a delay?

I also tried vicuna-7b-q4, but it's super slow, not usable (for the 1st reply). The error occurs immediately after I enter my input and click the send icon.

I think I found what the problem is. If you want, you can write to me at guinmoon@gmail.com, I will invite you to early testing in TestFlight.

guinmoon commented 10 months ago

I also tried vicuna-7b-q4, but it's super slow, not usable (for the 1st reply). The error occurs immediately after I enter my input and click the send icon.

q4 is too heavy for iPhone. I test llama-2-chat-q3_K_M with Metal; on my iPhone 12 Pro Max it works fine.

williamchai commented 10 months ago

I think I found what the problem is. If you want, you can write to me at guinmoon@gmail.com, I will invite you to early testing in TestFlight.

Just sent you an email, thanks!

tstanek390 commented 10 months ago

Hi there, I'm experiencing the same kind of issue with the LLM Farm app. I tried several LLama2 models, including the original one. I'm on a MacBook Pro M2 16 GB, macOS Ventura, using llama.cpp q4_0 quantization. I'm able to load a model and tweak the settings, but the moment I send the first prompt in the chat, the app crashes with no additional info. I reported the crash using the Apple interface. Thanks for any help :)

guinmoon commented 10 months ago

Hi there, I'm experiencing the same kind of issue with the LLM Farm app. I tried several LLama2 models, including the original one. I'm on a MacBook Pro M2 16 GB, macOS Ventura, using llama.cpp q4_0 quantization. I'm able to load a model and tweak the settings, but the moment I send the first prompt in the chat, the app crashes with no additional info. I reported the crash using the Apple interface. Thanks for any help :)

Could you send a link to the model?

tstanek390 commented 10 months ago

Ofc, https://huggingface.co/llSourcell/medllama2_7b.

guinmoon commented 10 months ago

Ofc, https://huggingface.co/llSourcell/medllama2_7b.

Did you quantize the model yourself? What version of llama.cpp did you use for quantization?

tstanek390 commented 10 months ago

Yes, I quantized it myself a couple of hours ago, using the latest version of llama.cpp from the master branch of its GitHub repository, with the "make" command after that.

guinmoon commented 10 months ago

Yes, I quantized it myself a couple of hours ago, using the latest version of llama.cpp from the master branch of its GitHub repository, with the "make" command after that.

The latest version of llama.cpp has changed the format to gguf. I will add support for it soon, but in the meantime, could you requantize the model with this version? https://github.com/ggerganov/llama.cpp/tree/dadbed99e65252d79f81101a392d0d6497b86caa
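For reference, the requantization with that pinned commit would look roughly like the sketch below. The model directory name is a hypothetical placeholder, and the flags shown are the ones the pre-GGUF tooling of that era generally accepted; adjust to the actual checkpoint location.

```shell
# Sketch: requantize with the pre-GGUF llama.cpp commit linked above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout dadbed99e65252d79f81101a392d0d6497b86caa
make    # builds the quantize tool, among others

# Convert the HF checkpoint (hypothetical path) to the old ggml f16 format ...
python3 convert.py models/medllama2_7b --outtype f16

# ... then quantize it down to q4_0.
./quantize models/medllama2_7b/ggml-model-f16.bin \
           models/medllama2_7b/ggml-model-q4_0.bin q4_0
```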

tstanek390 commented 10 months ago

I've tried requantizing with the version you mentioned, but same issue :/ I guess the problem is something different.

guinmoon commented 10 months ago

I've tried requantizing with the version you mentioned, but same issue :/ I guess the problem is something different.

There is definitely another problem, I'm trying to understand what it is. But for the current version of llmfarm, you need to use the old ggjtv3 quantization. Have you tried running models from this list? https://github.com/guinmoon/LLMFarm/blob/main/models.md
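Since the app at this point expects the old container rather than GGUF, one quick sanity check on a downloaded or converted file is its leading magic bytes: old-format models start with the ASCII magic `ggjt` followed by a little-endian version number (3 for ggjtv3), while new-format ones start with `GGUF`. A minimal check, using a throwaway stand-in file for illustration (point `head`/`od` at a real model file instead):

```shell
# Build a stand-in file with a ggjt v3 header: magic "ggjt" plus the
# little-endian uint32 version 3. Purely for illustration.
printf 'ggjt\003\000\000\000' > /tmp/fake-model.bin

# First 4 bytes identify the container format.
head -c 4 /tmp/fake-model.bin; echo        # prints: ggjt

# The next 4 bytes hold the format version (3 here).
od -An -tu4 -j4 -N4 /tmp/fake-model.bin
```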

guinmoon commented 10 months ago

Yes, I quantized it myself a couple of hours ago, using the latest version of llama.cpp from the master branch of its GitHub repository, with the "make" command after that.

If you have an Intel Mac, then there is a problem with Metal. Try turning it off.

tstanek390 commented 10 months ago

I have an M2 silicon Mac, but haven't used your models yet. I will give it a try! P.S. I wanted to avoid TheBloke's models because of their large size, and I primarily aim for an iPhone app, not macOS.

EDIT: Still the same issue with the provided model. Any other suggestions?

guinmoon commented 10 months ago

I have an M2 silicon Mac, but haven't used your models yet. I will give it a try! P.S. I wanted to avoid TheBloke's models because of their large size, and I primarily aim for an iPhone app, not macOS.

EDIT: Still the same issue with the provided model. Any other suggestions?

Unfortunately, I could not reproduce this error on my device, so I can only guess.

guinmoon commented 9 months ago

iPhone 13 Pro, iOS 16.3.1, Orca mini 3B downloaded from the readme link.

Tried different params, with Metal on and off; it always crashes. Creating a new chat for each input works, but the 2nd input in the same chat will crash.

I think I found what the problem was. Tell me, did the update solve the problem?

williamchai commented 9 months ago

did the update solve the problem?

Yes! Tested in 0.5.2, it doesn't crash anymore! Thank you!