c0sogi / llama-api

An OpenAI-like LLaMA inference API
MIT License

warning: failed to mlock 245760-byte buffer (after previously locking 0 bytes): Cannot allocate memory llm_load_tensors: mem required = 46494.72 MB (+ 1280.00 MB per state) #13

Closed Dougie777 closed 1 year ago

Dougie777 commented 1 year ago

I am getting this memory error when trying to run the model in llama-api. The exact same model works perfectly in oobabooga:

```
warning: failed to mlock 245760-byte buffer (after previously locking 0 bytes): Cannot allocate memory
llm_load_tensors: mem required = 46494.72 MB (+ 1280.00 MB per state)
```

This is my model_definition:

```python
llama2_70b_Q5_gguf = LlamaCppModel(
    model_path="llama-2-70b-chat.Q5_K_M.gguf",  # manual download
    max_total_tokens=4096,
)
```

llama-api log for llama2_70b_Q5_gguf:

```
llama_model_loader: - kv 18: general.quantization_version u32
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q5_K: 481 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_ctx = 4096
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: freq_base = 10000.0
llm_load_print_meta: freq_scale = 1
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = mostly Q5_K - Medium
llm_load_print_meta: model size = 68.98 B
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.23 MB
warning: failed to mlock 245760-byte buffer (after previously locking 0 bytes): Cannot allocate memory
llm_load_tensors: mem required = 46494.72 MB (+ 1280.00 MB per state)
```

Working oobabooga log for the same model (llama2_70b_Q5_gguf):

```
llama_model_loader: - kv 18: general.quantization_version u32
llama_model_loader: - type f32: 161 tensors
llama_model_loader: - type q5_K: 481 tensors
llama_model_loader: - type q6_K: 81 tensors
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_ctx = 16384
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: freq_base = 10000.0
llm_load_print_meta: freq_scale = 1
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = mostly Q5_K - Medium
llm_load_print_meta: model size = 68.98 B
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.23 MB
llm_load_tensors: mem required = 46494.72 MB (+ 5120.00 MB per state)
....................................................................................................
llama_new_context_with_model: kv self size = 5120.00 MB
llama_new_context_with_model: compute buffer total size = 2097.47 MB
```

c0sogi commented 1 year ago

The mlock parameter defaults to True in this application but to False in oobabooga, which is why I think you're seeing different results.

mlock probably fails because the amount of memory your user is allowed to lock is limited on your system. If you are on Linux, consider running `ulimit -l unlimited`, or pass `use_mlock=False` to your LlamaCppModel. You shouldn't have any problems running without mlock; it only pins the model's pages in RAM to prevent swapping.
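
For reference, a minimal sketch of both suggestions: the diagnostic uses Python's standard `resource` module to read the locked-memory limit, and the model definition simply adds the `use_mlock` keyword named above. The import path is an assumption based on the project's `model_definitions.py` convention, not something stated in this thread.

```python
import resource

from llama_api.schemas.models import LlamaCppModel  # import path assumed

# Diagnostic (Linux): mlock fails when the locked-memory soft limit is too
# small for the buffers llama.cpp tries to pin. -1 means unlimited.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
print(f"RLIMIT_MEMLOCK: soft={soft}, hard={hard}")

# The model definition from this issue, with mlock disabled:
llama2_70b_Q5_gguf = LlamaCppModel(
    model_path="llama-2-70b-chat.Q5_K_M.gguf",  # manual download
    max_total_tokens=4096,
    use_mlock=False,  # skip page-locking; avoids the "failed to mlock" error
)
```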

Dougie777 commented 1 year ago

I already set ulimit to unlimited. I will try use_mlock=False. Thanks!

Dougie777 commented 1 year ago

That worked. I no longer get the error. THANKS!

delta-whiplash commented 9 months ago

How can I set use_mlock=False in a Docker setup, please?
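
A hedged sketch, not confirmed in this thread: since the fix is a change to the model definition, the same `use_mlock=False` edit in the `model_definitions.py` that your container loads (baked into the image or mounted as a volume, depending on your setup) should work under Docker too. Alternatively, `docker run --ulimit memlock=-1:-1 ...` raises the container's lock limit so mlock itself can succeed.

```python
# model_definitions.py -- hypothetical sketch for a Docker setup; make sure
# this file is the one the container actually loads (the exact mount path
# depends on your compose/run configuration and is an assumption here).
from llama_api.schemas.models import LlamaCppModel  # import path assumed

llama2_70b_Q5_gguf = LlamaCppModel(
    model_path="llama-2-70b-chat.Q5_K_M.gguf",
    max_total_tokens=4096,
    use_mlock=False,  # same fix as above, applied inside the container
)
```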