OpenBMB / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.
https://ollama.com
MIT License

Packaging with a Modelfile: error when running the model #15

Open Dandi-China opened 2 weeks ago

Dandi-China commented 2 weeks ago

What is the issue?

FROM ./MiniCPM-V-2_5/model/ggml-model-Q4_K_M.gguf
FROM ./MiniCPM-V-2_5/mmproj-model-f16.gguf

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER num_keep 4
PARAMETER num_ctx 2048

The model file I'm using is the CPU gguf. At first the error was that files on an external filesystem could not be accessed; after moving the files, `ollama run` now fails with: Error: llama runner process has terminated: exit status 0xc0000409
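One possible cause of a runner crash like this (an assumption, not confirmed from the issue) is a gguf file that is truncated or was corrupted during transfer between filesystems. A quick sanity check is to read the GGUF header before pointing the Modelfile at the file; a minimal sketch (the example path is taken from the Modelfile above):

```python
import struct

def gguf_header(path):
    """Read the GGUF magic and version from a model file.

    Returns (is_gguf, version); version is None when the first
    four bytes do not match the b"GGUF" magic.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return False, None
        # Header layout: 4-byte magic, then a little-endian uint32 version.
        (version,) = struct.unpack("<I", f.read(4))
        return True, version

# Example (path from the Modelfile above):
# ok, ver = gguf_header("./MiniCPM-V-2_5/model/ggml-model-Q4_K_M.gguf")
```

If the magic bytes are wrong or the read fails partway through, the file is not a valid gguf and the runner will crash before serving any tokens.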

OS

No response

GPU

No response

CPU

Intel

Ollama version

No response

kele527 commented 1 week ago

I'm hitting this too: Error: llama runner process has terminated: signal: aborted (core dumped)