antimatter15 / alpaca.cpp

Locally run an Instruction-Tuned Chat-Style LLM
MIT License

Segmentation fault (core dumped) #152

Open · archiwed opened this issue 1 year ago

archiwed commented 1 year ago

Whenever I try to run the program on my machine I get the error: "Segmentation fault (core dumped)". I know that this failure usually means the program tried to access memory that was not allocated or is not accessible.
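For illustration, here is a minimal sketch (my own example, not code from this repository) of one way this kind of crash can happen: if a very large allocation fails and the returned pointer is used without a NULL check, the program dies with exactly this signal.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main() {
    // Request far more memory than any machine has; malloc returns NULL.
    size_t huge = SIZE_MAX / 2;
    char *buf = (char *)malloc(huge);
    // Missing "if (buf == NULL)" guard: writing through the NULL pointer
    // raises SIGSEGV, i.e. "Segmentation fault (core dumped)".
    buf[0] = 1;
    printf("never reached\n");
    free(buf);
    return 0;
}
```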

I have tried debugging the code on my machine, but I am still stuck on this problem.

I am running this on Arch Linux.

```
(base) [andre@archlinux alpaca.cpp]$ ls
build  chat.cpp  convert-pth-to-ggml.py  ggml.c  ggml.o  Makefile  quantize.sh  screencast.gif  utils.h
chat  CMakeLists.txt  ggml-alpaca-7b-q4.bin  ggml.h  LICENSE  quantize.cpp  README.md  utils.cpp  utils.o
```

```
(base) [andre@archlinux alpaca.cpp]$ ./chat
main: seed = 1679698347
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
Segmentation fault (core dumped)
```

ahmetkca commented 1 year ago

Please take a look at the PR I opened yesterday: #142. I ran into the same problem and dug into it. First: how much RAM do you have? And how much RAM was actually available right before the full set of weights was loaded?

The most likely root cause is one of two things: either you simply do not have enough RAM, or you do have at least the minimum required amount but the RAM actually available just before the weights were loaded was insufficient, for example because of background tasks.
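As a rough sketch of the kind of pre-flight check I mean (my own illustration, not the actual change in the PR; the ~6065 MB figure is taken from your log, and note that on Linux, `MemAvailable` in /proc/meminfo is a better estimate than `freeram`):

```cpp
#include <cstdio>
#include <sys/sysinfo.h>

int main() {
    // Figure taken from the log above: llama_model_load reports ~6065 MB.
    const unsigned long long needed = 6065ULL * 1024 * 1024;

    struct sysinfo info;
    if (sysinfo(&info) != 0) {
        perror("sysinfo");
        return 1;
    }
    // Note: freeram undercounts what is really usable, since it ignores
    // reclaimable page cache; /proc/meminfo's MemAvailable is more accurate.
    unsigned long long avail = (unsigned long long)info.freeram * info.mem_unit;

    printf("needed: %llu MB, free: %llu MB\n",
           needed / (1024 * 1024), avail / (1024 * 1024));
    if (avail < needed) {
        fprintf(stderr, "error: not enough free RAM to load the model\n");
        return 1;
    }
    printf("enough RAM appears to be available\n");
    return 0;
}
```

Failing early with a clear message like this is much friendlier than letting a later NULL dereference turn into a segfault.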

archiwed commented 1 year ago

I have 16 GB of RAM in total, and about 12 GB were free when I ran the model.

From what I have investigated, the problem is very likely related to memory, although I have not tested that yet. I am thinking of switching to a lighter language model to see if that solves it.
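Some back-of-envelope arithmetic (my own estimate, assuming a q4-style format that stores 4-bit weights plus per-block scale factors, roughly 5 bits per weight) for why the 7B model is already heavy, and why a smaller model scales down linearly:

```cpp
#include <cstdio>

int main() {
    const double n_params = 7e9;        // LLaMA-7B parameter count
    const double bits_per_weight = 5.0; // assumed: 4-bit quants + scales
    double weights_gb = n_params * bits_per_weight / 8.0 / 1e9;
    // Prints ~4.4 GB for the weights alone, before context/scratch
    // buffers, which is consistent with the ~6 GB ctx size in the log.
    printf("weights alone: ~%.1f GB\n", weights_gb);
    return 0;
}
```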

I don't have much skill in C/C++, so I can't investigate the cause in depth, for example by exploring the ggml internals.

I read your PR. Thanks for your feedback!