OpenBMB / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com
MIT License

Error: llama runner process has terminated: signal: abort trap #2

Closed ChristianWeyer closed 1 month ago

ChristianWeyer commented 1 month ago

What is the issue?

I followed the instructions (https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) and ran ./ollama run minicpm-v2.5

I got this error: Error: llama runner process has terminated: signal: abort trap

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

Latest git commit (367ec3f)

ChristianWeyer commented 1 month ago

The originally installed Ollama instance was still running. Once I stopped it and ran ./ollama serve, it worked.
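For anyone hitting the same abort trap, the workaround above boils down to the following shell sketch (the process name and the working directory of the self-compiled build are assumptions; adjust for your setup):

```shell
# Stop the officially installed Ollama app so its server doesn't
# occupy the default port (11434) — on macOS you can also just quit
# the menu-bar app.
pkill -f Ollama || true

# From the checkout of this repo, start the self-compiled server...
./ollama serve &

# ...and in another terminal, run the model against it.
./ollama run minicpm-v2.5
```

The key point is that both `serve` and `run` must use the binary built from this fork, not the stock install, since the stock build lacks the MiniCPM-V changes.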

hhao commented 1 month ago

follow this: https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5

ChristianWeyer commented 1 month ago

Erm, yeah... see above ;-). And especially see https://github.com/OpenBMB/ollama/issues/3

Thanks!

ChristianWeyer commented 1 month ago

BTW @hhao - when running ollama run hhao/openbmb-minicpm-llama3-v-2_5:q8_0, I get:

ollama run hhao/openbmb-minicpm-llama3-v-2_5:q8_0
Error: llama runner process has terminated: signal: abort trap

With both the officially installed Ollama and the Ollama build from this repo.

songlairui commented 1 month ago

I think you are using the official ollama app with the model pulled. (I use it this way too.)

According to the README, we should use the self-compiled ollama binary and then load the model. (Doing it that way, I got it running.)

However, LM Studio can use the model hhao/openbmb-minicpm-llama3-v-2_5 (with https://github.com/sammcj/llamalink linking it into LM Studio), but the LM Studio chat doesn't support images.