OpenBMB / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com

I downloaded the model and imported it into ollama, but it cannot recognize images #5

Open · chaoqunxie opened this issue 1 month ago

hhao commented 1 month ago

See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5
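
For reference, once that model is pulled, a quick way to test image input from the CLI looks roughly like this (a sketch only; the prompt and image path are placeholders):

```shell
# Pull and run the published model, passing an image path inside the prompt
ollama run hhao/openbmb-minicpm-llama3-v-2_5 "What is in this picture? ./example.png"
```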

chaoqunxie commented 1 month ago

I am using the official ollama Windows installer, and I get an error when loading the model (screenshot attached).

chaoqunxie commented 1 month ago

The images were not recognized because I had only imported ggml-model-F16.gguf through the Open WebUI page. I then followed the GitHub example and wrote a Modelfile, but ran into the error above.
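
For anyone reproducing this, a Modelfile along the lines of the GitHub example looks roughly like the sketch below (file names are illustrative; the template follows the standard Llama 3 chat format):

```
# Both GGUF files need to be referenced: the language model weights alone
# give the model no way to process images, the vision projector (mmproj)
# is what encodes them.
FROM ./ggml-model-F16.gguf
FROM ./mmproj-model-f16.gguf

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|eot_id|>"
PARAMETER num_ctx 2048
```

The model is then registered with something like `ollama create minicpm-llama3-v-2_5 -f Modelfile` (the name is arbitrary).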

chaoqunxie commented 1 month ago

> See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5

Could you provide an F16 tag?

chaoqunxie commented 1 month ago

Here is the server.log:

INFO [wmain] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="39764" timestamp=1716910647 total_threads=12
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="53106" tid="39764" timestamp=1716910647
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes
GGML_ASSERT: C:\a\ollama\ollama\llm\llama.cpp\examples\llava\clip.cpp:1026: new_clip->has_llava_projector
time=2024-05-28T23:37:28.074+08:00 level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 "

chaoqunxie commented 1 month ago

> See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5

The official ollama also errors out with your model (screenshot attached).

rhchenxm commented 1 month ago

The current version of ollama does not support it yet; you need to build from OpenBMB's modified source for it to work. Build instructions: https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5
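
In outline, the build follows ollama's usual from-source procedure (a sketch; the branch name is taken from the link above, and the linked example has the authoritative steps):

```shell
# Requires Go, cmake, and a C/C++ toolchain (plus the CUDA toolkit for NVIDIA GPU support)
git clone -b minicpm-v2.5 https://github.com/OpenBMB/ollama.git
cd ollama
go generate ./...   # builds the bundled llama.cpp, including the patched clip/llava code
go build .
./ollama serve      # run the model against this binary instead of the official install
```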

hhao commented 1 month ago

> See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5
>
> Could you provide an F16 tag?

Uploaded.

svcvit commented 1 month ago

I built ollama on a Mac with an M-series chip, and it works fine now.

[screenshot]

https://www.alipan.com/s/8gxrfmmw75X

chaoqunxie commented 1 month ago

> See https://ollama.com/hhao/openbmb-minicpm-llama3-v-2_5
>
> Could you provide an F16 tag?
>
> Uploaded.

Could you share the compiled ollama binary? I finished the build following your steps, but it still cannot recognize images. Thanks.

Could you also build it as a Windows 10 64-bit installer? Either way, I hope the official ollama can be pushed to support MiniCPM (面壁小钢炮) soon.

chaoqunxie commented 1 month ago

> I built ollama on a Mac with an M-series chip, and it works fine now.
>
> [screenshot]
>
> https://www.alipan.com/s/8gxrfmmw75X

How well does it work? ^_^