Stability-AI / StableLM

StableLM: Stability AI Language Models
Apache License 2.0

process killed #76

Closed: gmankab closed this issue 1 year ago

gmankab commented 1 year ago

Reproducing:

1. run code from the readme
2. wait for the models to download
3. process killed ._.

I got this on my main computer: i7-920, 8 GB RAM, RX 470.

I also got it on Oracle Cloud: 4-core ARM CPU, 24 GB RAM.

(screenshot attached)

mcmonkey4eva commented 1 year ago

To run a 7B LLM you need significantly more RAM than that, plus a decent amount of VRAM.

A sudden `Killed` with no further explanation usually means the process ran out of RAM.
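To check whether the machine plausibly has enough memory before loading anything, a minimal Linux-oriented sketch (the `SC_AVPHYS_PAGES` sysconf key is not available on every platform):

```python
import os

def available_ram_gib() -> float:
    """Estimate currently available physical RAM in GiB.

    Linux-oriented: SC_AVPHYS_PAGES is not exposed on all platforms.
    """
    page_size = os.sysconf("SC_PAGE_SIZE")       # bytes per page
    avail_pages = os.sysconf("SC_AVPHYS_PAGES")  # physical pages currently free
    return page_size * avail_pages / 1024**3

if __name__ == "__main__":
    print(f"available RAM: {available_ram_gib():.1f} GiB")
```

On an 8 GB machine this will report well under the ~32 GiB the base checkpoint needs, which is consistent with the kernel's OOM killer terminating the process.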

gmankab commented 1 year ago

> To run a 7B LLM you need significantly more RAM than that, plus a decent amount of VRAM.
>
> A sudden `Killed` with no further explanation usually means the process ran out of RAM.

@mcmonkey4eva, how much RAM do I need?

mcmonkey4eva commented 1 year ago

Look at the size of the file you're loading: that's the bare minimum, and you need more than the file size available as RAM. The StableLM-7B-Base model release is around 32 GiB.
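That rule of thumb can be turned into a quick pre-flight check. A sketch, assuming Linux sysconf keys and a `headroom` factor (the multiplier is a rough guess to cover activations and the Python runtime, not a measured value):

```python
import os

def can_fit_in_ram(model_path: str, headroom: float = 1.2) -> bool:
    """Rough pre-flight check: the checkpoint file must fit in currently
    available RAM with some headroom for activations and the runtime.

    `model_path` is a placeholder; point it at your downloaded weights.
    """
    model_bytes = os.path.getsize(model_path)
    page_size = os.sysconf("SC_PAGE_SIZE")
    avail_bytes = page_size * os.sysconf("SC_AVPHYS_PAGES")
    return avail_bytes > model_bytes * headroom
```

If this returns `False`, loading will almost certainly end in the same silent `Killed`.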

You can save yourself some pain by using pre-converted weights. For example, this llama.cpp (ggml) conversion of StableLM-7B-Tuned only takes up 5 GiB: https://huggingface.co/cakewalk/ggml-q4_0-stablelm-tuned-alpha-7b, or this GPTQ conversion: https://huggingface.co/ldilov/stablelm-tuned-alpha-7b-4bit-128g-descact-sym-true-sequential (note: these are third-party uploads by random Hugging Face users, so apply reasonable caution when testing the files). You'll need a tool capable of loading those formats, e.g. https://github.com/oobabooga/text-generation-webui.
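The size difference comes straight from bits per parameter. A back-of-envelope sketch (the 4.5-bit figure for ggml q4_0 is an approximation that folds in its per-block scale factors; real files add vocab and metadata overhead, and the model has somewhat more than 7e9 parameters, so treat these as lower bounds):

```python
def checkpoint_size_gib(n_params: float, bits_per_param: float) -> float:
    """Back-of-envelope checkpoint size: parameters times bits, in GiB.

    Real checkpoints add overhead (embeddings, norms, per-block
    quantization scales), so this underestimates slightly.
    """
    return n_params * bits_per_param / 8 / 1024**3

n = 7e9                                  # roughly 7B parameters
fp32 = checkpoint_size_gib(n, 32)        # ~26 GiB, in line with the ~32 GiB release
q4_0 = checkpoint_size_gib(n, 4.5)       # ~3.7 GiB, in line with the 5 GiB ggml file
```

Quantizing from 32-bit floats down to ~4.5 bits per weight is what shrinks the download by roughly 7x, and it shrinks the RAM needed to load it by about the same factor.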

gmankab commented 1 year ago

thank you