cocktailpeanut / dalai

The simplest way to run LLaMA on your local machine
https://cocktailpeanut.github.io/dalai

llama_model_load: loading model issue in Docker #454

Open nationallokmatparty opened 1 year ago

nationallokmatparty commented 1 year ago

llama_model_load: loading model from 'models/7B/ggml-model-q4_0.bin' - please wait ...

llama_model_load: invalid model file 'models/7B/ggml-model-q4_0.bin' (bad magic)

main: failed to load model from 'models/7B/ggml-model-q4_0.bin'

Can anyone please suggest how to solve this issue?
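For what it's worth, a "bad magic" error usually means the first bytes of the model file don't match the format the bundled `main` binary expects: either the download is truncated/corrupt, or the file was converted by a different llama.cpp version than the one dalai ships. As a quick diagnostic, here is a hedged Python sketch that reads the 4-byte magic at the start of the file (`read_magic` is just an illustrative name; the magic constants are taken from llama.cpp source as of this thread and may not cover newer formats):

```python
import struct

# File magics written by llama.cpp-era converters (snapshot from the
# llama.cpp source around mid-2023; newer formats may exist).
KNOWN_MAGICS = {
    0x67676D6C: "ggml (unversioned, very old)",
    0x67676D66: "ggmf (v1)",
    0x67676A74: "ggjt (v1-v3)",
    0x46554747: "gguf (later llama.cpp format)",
}

def read_magic(path):
    """Return (magic, description) for the first 4 bytes of a model file."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    desc = KNOWN_MAGICS.get(magic, "unknown -- corrupt or incomplete download?")
    return magic, desc
```

If the magic is unknown, re-download or re-convert the weights. If it is a known format but still rejected, the converter and the binary loading the file are likely from mismatched llama.cpp versions, so re-running the conversion with the same checkout that built the binary should fix it.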

haohsiangc commented 1 year ago

I'm using macOS 13.4 and I'm encountering this issue too.

main: seed = 1686041591
llama_model_load: loading model from 'models/13B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: invalid model file 'models/13B/ggml-model-q4_0.bin' (bad magic)
main: failed to load model from 'models/13B/ggml-model-q4_0.bin'
bash-3.2$ exit
akapulka commented 1 year ago

Same thing on Windows

pedrooct commented 1 year ago

Same on Ubuntu

hayatbehlim commented 1 year ago

Facing the same issue in a Docker container running on a Mac M1

mirek190 commented 1 year ago

That project is dead .... use llama.cpp or koboldcpp instead
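For anyone migrating, a rough sketch of the standalone llama.cpp workflow as it looked around the time of this thread (script and binary names have changed across llama.cpp versions, so check the current README before copying; the model paths are placeholders):

```shell
# Build llama.cpp from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Re-convert the original LLaMA weights with THIS checkout's converter,
# so the file magic matches the binary that will load it.
python3 convert.py /path/to/LLaMA/7B/

# Quantize and run (binary names as of mid-2023 llama.cpp)
./quantize /path/to/LLaMA/7B/ggml-model-f16.bin \
           /path/to/LLaMA/7B/ggml-model-q4_0.bin q4_0
./main -m /path/to/LLaMA/7B/ggml-model-q4_0.bin -p "Hello"
```

The key point is that the conversion and the inference binary must come from the same llama.cpp checkout, which is exactly what the dalai setup fails to guarantee.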