Closed: ChristianWeyer closed this issue 1 month ago.
The originally installed Ollama instance was still running. Once I stopped it and ran `./ollama serve`, it worked.
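For anyone hitting the same thing, a minimal sketch of that fix on macOS, assuming the official menu-bar app is named "Ollama" and you are in the self-compiled OpenBMB/ollama checkout:

```sh
# Stop the installed Ollama app so it releases the default port (11434).
osascript -e 'quit app "Ollama"'   # assumption: the menu-bar app is named "Ollama"
pkill -x ollama 2>/dev/null        # fallback: kill any stray server process

# Then start the self-compiled server from the build directory.
./ollama serve
```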
Follow this: https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5
Erm, yeah... see above ;-). And especially see https://github.com/OpenBMB/ollama/issues/3
Thanks!
BTW @hhao - when running `ollama run hhao/openbmb-minicpm-llama3-v-2_5:q8_0`, I get:

```
ollama run hhao/openbmb-minicpm-llama3-v-2_5:q8_0
Error: llama runner process has terminated: signal: abort trap
```
This happens with both the installed official Ollama and the Ollama build from this repo.
I think you are using the official Ollama app with the model pulled (I used it that way too). According to the README, we should use the self-compiled Ollama and then load the model; that is how I got it running.
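A build-and-run sketch of what the README describes, assuming the fork follows the standard Ollama build steps (`go generate`, then `go build`); treat the fork's README as authoritative:

```sh
git clone -b minicpm-v2.5 https://github.com/OpenBMB/ollama.git
cd ollama
go generate ./...            # fetches and builds the llama.cpp backend
go build .                   # produces the self-compiled ./ollama binary

./ollama serve               # terminal 1: serve with the self-compiled build
./ollama run minicpm-v2.5    # terminal 2: load the model against it
```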
However, LM Studio can use the model hhao/openbmb-minicpm-llama3-v-2_5 (with https://github.com/sammcj/llamalink linking it to LM Studio), but LM Studio's chat doesn't support images.
What is the issue?
I followed the instructions (https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) and ran:

```sh
./ollama run minicpm-v2.5
```

I got this error:

```
Error: llama runner process has terminated: signal: abort trap
```
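A quick way to check which server is actually answering on the default port (a sketch; `GET /api/version` is part of Ollama's HTTP API):

```sh
# If this reports the official release's version rather than the
# self-compiled build, the installed Ollama app is still running.
curl http://localhost:11434/api/version
```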
OS: macOS
GPU: Apple
CPU: Apple
Ollama version: latest git commit (367ec3f)