karpathy / llama2.c

Inference Llama 2 in one file of pure C
MIT License

unable to convert llama2 7b model #288

Open edisondeng opened 1 year ago

edisondeng commented 1 year ago

Hi, I am trying to convert the llama2 7b model with the command below: `python export_meta_llama_bin.py ~/projects/75_NLP/llama-main/llama-2-7b llama2_7b.bin`. It always dies with a "Killed" message.

My hardware is an i7-12700H with 16 GB RAM and an NVIDIA GeForce RTX 3060 with 6 GB VRAM, running Ubuntu 22.04.

Any idea why this happens?

Thanks.

RahulSChand commented 1 year ago

Most likely because you don't have enough RAM. llama-2-7b is around 13 GB, and the export script loads the whole model into RAM at once. Usually 4-5 GB of RAM is already taken up by existing processes, so 16 GB isn't enough to open llama-2-7b. To confirm this, run `htop` and watch RAM usage while the export script runs.
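For a rough sense of the footprint, here is a minimal sketch (not the actual export script; the checkpoint path is hypothetical, and I'm assuming the checkpoint is the usual dict of fp16 tensors) that loads the Meta checkpoint the same way and measures how much tensor data ends up resident in RAM:

```python
# Minimal sketch of the memory cost: torch.load materializes every tensor
# of the ~13 GB checkpoint in RAM at once. The path below is hypothetical.
import torch

state_dict = torch.load("llama-2-7b/consolidated.00.pth", map_location="cpu")
total = sum(t.numel() * t.element_size() for t in state_dict.values())
print(f"checkpoint tensors: {total / 1e9:.1f} GB resident in RAM")
```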

madroidmaq commented 1 year ago

@edisondeng I have encountered the same problem; switching to a machine with more memory fixed it. My suggestion is more than 20 GB.
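If it helps anyone, here is a hypothetical pre-flight check (assuming `psutil` is installed) to see whether enough RAM is free before starting the export:

```python
# Hypothetical pre-flight check: the 7B export wants well over 13 GB free.
import psutil

free_gb = psutil.virtual_memory().available / 1e9
print(f"{free_gb:.1f} GB available")
if free_gb < 20:
    print("warning: the export may be OOM-killed on this machine")
```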

edisondeng commented 1 year ago

Thanks to all. Yes, it was due to insufficient memory.


treeform commented 12 months ago

I used the old script and it worked: https://github.com/karpathy/llama2.c/issues/341#issuecomment-1694503671

mcognetta commented 7 months ago

I also was not able to convert llama2-7B despite having 32 GB of RAM (I closed everything out, `htop` reported only 2 GB being used by other processes, and swap was completely clear).

The old script worked for me as well.

adi-lb-phoenix commented 2 months ago

I have been getting an error when trying to convert Meta-Llama-3-8B-Instruct.Q4_0.gguf to .bin format:

```
python3 export.py llama2_7b.bin --meta-llama /home/####/llm_inferences/llama.cpp/models/meta
```

```
Traceback (most recent call last):
  File "/home/####t/llm_inferences/llama2.c/export.py", line 559, in <module>
    model = load_meta_model(args.meta_llama)
  File "/home/####/llm_inferences/llama2.c/export.py", line 373, in load_meta_model
    with open(params_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/####t/llm_inferences/llama.cpp/models/meta/params.json'
```
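As the traceback shows, `load_meta_model` expects the original Meta checkpoint directory (containing `params.json` and the `consolidated.*.pth` weight files), which a llama.cpp GGUF file does not provide. A minimal sketch of the step that fails (the directory path here is hypothetical):

```python
# What export.py's --meta-llama argument must point at: the original Meta
# checkpoint directory, not a GGUF file. The directory below is hypothetical.
import json
import os

model_dir = "/path/to/llama-2-7b"  # must contain params.json + consolidated.*.pth
params_path = os.path.join(model_dir, "params.json")
with open(params_path) as f:  # the open() that raises FileNotFoundError above
    params = json.load(f)
print(params)  # model hyperparameters: dim, n_layers, n_heads, ...
```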