Closed: artyomboyko closed this issue 1 month ago.
I have the same issue
Same here
I thought I had caused the problem myself, but apparently I'm not the only one.
The model runs on the CPU: GPU load and memory usage stay at 0.
Reporting my code and logs below.
llama-cpp-python version: 0.3.1, installed from the prebuilt wheel index at https://abetlen.github.io/llama-cpp-python/whl/cu124
Code to load and run the Llama model:
from llama_cpp import Llama

def run_model(text: str) -> str:
    model_name = 'bartowski/Mistral-Nemo-Instruct-2407-GGUF'
    model = Llama.from_pretrained(
        model_name,
        cache_dir=models_root,
        filename='Mistral-Nemo-Instruct-2407-Q6_K_L.gguf',
        # verbose=False,
        n_gpu_layers=-1,  # offload all layers to the GPU
        n_ctx=10 * 1024,
        main_gpu=1,
    )
    output = model(prompt=f'[INST]{text}[/INST]', echo=True, max_tokens=None)
    summary = output['choices'][0]['text']
    # echo=True returns the prompt too, so keep only the text after '[/INST]'
    e_index = summary.find('[/INST]')
    summary = summary[e_index + len('[/INST]'):]
    return summary
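Before looking at driver settings, a quick sanity check is to ask the installed library whether it can offload to the GPU at all. This is a minimal sketch; it assumes the bindings expose llama.cpp's llama_supports_gpu_offload, which recent releases do:

import llama_cpp

# False here means the wheel was built without CUDA support, so
# n_gpu_layers=-1 is silently ignored and everything runs on the CPU.
print('GPU offload supported:', llama_cpp.llama_supports_gpu_offload())

With a CUDA-enabled build and verbose output, the load log should also report an 'offloaded ... layers to GPU' line and CUDA buffer sizes rather than only the 'CPU buffer size' entry visible in the log below.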
nvidia-smi output:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 2080 Off | 00000000:04:00.0 Off | N/A |
| 0% 39C P8 1W / 260W | 4MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 4080 ... Off | 00000000:07:00.0 On | N/A |
| 30% 39C P8 11W / 320W | 780MiB / 16376MiB | 3% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
Model initialization log:
llama_model_loader: loaded meta data with 39 key-value pairs and 363 tensors from /home/gfurlan/src/summarizer-ai/models/models--bartowski--Mistral-Nemo-Instruct-2407-GGUF/snapshots/e9cdc9d71317c0911875031d1c22f6d9231b6715/./Mistral-Nemo-Instruct-2407-Q6_K_L.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Mistral Nemo Instruct 2407
llama_model_loader: - kv 3: general.version str = 2407
llama_model_loader: - kv 4: general.finetune str = Instruct
llama_model_loader: - kv 5: general.basename str = Mistral-Nemo
llama_model_loader: - kv 6: general.size_label str = 12B
llama_model_loader: - kv 7: general.license str = apache-2.0
llama_model_loader: - kv 8: general.languages arr[str,9] = ["en", "fr", "de", "es", "it", "pt", ...
llama_model_loader: - kv 9: llama.block_count u32 = 40
llama_model_loader: - kv 10: llama.context_length u32 = 1024000
llama_model_loader: - kv 11: llama.embedding_length u32 = 5120
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: llama.attention.key_length u32 = 128
llama_model_loader: - kv 18: llama.attention.value_length u32 = 128
llama_model_loader: - kv 19: general.file_type u32 = 18
llama_model_loader: - kv 20: llama.vocab_size u32 = 131072
llama_model_loader: - kv 21: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 22: tokenizer.ggml.add_space_prefix bool = false
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = tekken
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,131072] = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,131072] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
Exception ignored on calling ctypes callback function: <function llama_log_callback at 0x7f786867f920>
Traceback (most recent call last):
File "/home/gfurlan/.local/share/virtualenvs/summarizer-ai-8ZbftlU3/lib64/python3.12/site-packages/llama_cpp/_logger.py", line 39, in llama_log_callback
print(text.decode("utf-8"), end="", flush=True, file=sys.stderr)
^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc4 in position 128: invalid continuation byte
llama_model_loader: - kv 28: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 30: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 32: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if messages[0]['role'] == 'system...
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - kv 35: quantize.imatrix.file str = /models_out/Mistral-Nemo-Instruct-240...
llama_model_loader: - kv 36: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 37: quantize.imatrix.entries_count i32 = 280
llama_model_loader: - kv 38: quantize.imatrix.chunks_count i32 = 128
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type q8_0: 2 tensors
llama_model_loader: - type q6_K: 280 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 1000
llm_load_vocab: token to piece cache size = 0.8498 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 131072
llm_load_print_meta: n_merges = 269443
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 1024000
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 1024000
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = Q6_K
llm_load_print_meta: model params = 12.25 B
llm_load_print_meta: model size = 9.66 GiB (6.78 BPW)
llm_load_print_meta: general.name = Mistral Nemo Instruct 2407
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 1196 'Ä'
llm_load_print_meta: EOG token = 2 '</s>'
llm_load_print_meta: max token length = 150
llm_load_tensors: ggml ctx size = 0.17 MiB
llm_load_tensors: CPU buffer size = 9892.83 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 10240
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 1600.00 MiB
llama_new_context_with_model: KV self size = 1600.00 MiB, K (f16): 800.00 MiB, V (f16): 800.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.50 MiB
llama_new_context_with_model: CPU compute buffer size = 696.01 MiB
llama_new_context_with_model: graph nodes = 1286
llama_new_context_with_model: graph splits = 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Model metadata: {'quantize.imatrix.dataset': '/training_dir/calibration_datav3.txt', 'general.quantization_version': '2', 'tokenizer.chat_template': "{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content'] %}\n {%- set loop_messages = messages[1:] %}\n{%- else %}\n {%- set loop_messages = messages %}\n{%- endif %}\n\n{{- bos_token }}\n{%- for message in loop_messages %}\n {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}\n {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}\n {%- endif %}\n {%- if message['role'] == 'user' %}\n {%- if loop.last and system_message is defined %}\n {{- '[INST] ' + system_message + '\\n\\n' + message['content'] + '[/INST]' }}\n {%- else %}\n {{- '[INST] ' + message['content'] + '[/INST]' }}\n {%- endif %}\n {%- elif message['role'] == 'assistant' %}\n {{- ' ' + message['content'] + eos_token}}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n{%- endfor %}\n", 'llama.embedding_length': '5120', 'llama.feed_forward_length': '14336', 'general.license': 'apache-2.0', 'llama.attention.value_length': '128', 'tokenizer.ggml.add_bos_token': 'true', 'general.size_label': '12B', 'general.type': 'model', 'general.version': '2407', 'quantize.imatrix.chunks_count': '128', 'llama.context_length': '1024000', 'general.name': 'Mistral Nemo Instruct 2407', 'tokenizer.ggml.bos_token_id': '1', 'general.basename': 'Mistral-Nemo', 'quantize.imatrix.entries_count': '280', 'llama.attention.head_count_kv': '8', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.attention.head_count': '32', 'llama.block_count': '40', 'llama.attention.key_length': '128', 'general.finetune': 'Instruct', 'general.file_type': '18', 'tokenizer.ggml.pre': 'tekken', 'llama.vocab_size': '131072', 'quantize.imatrix.file': '/models_out/Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407.imatrix', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.add_space_prefix': 'false', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.model': 'gpt2', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'tokenizer.ggml.eos_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0'}
Available chat formats from metadata: chat_template.default
Using gguf chat template: {%- if messages[0]['role'] == 'system' %}
{%- set system_message = messages[0]['content'] %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set loop_messages = messages %}
{%- endif %}
{{- bos_token }}
{%- for message in loop_messages %}
{%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
{{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}
{%- endif %}
{%- if message['role'] == 'user' %}
{%- if loop.last and system_message is defined %}
{{- '[INST] ' + system_message + '\n\n' + message['content'] + '[/INST]' }}
{%- else %}
{{- '[INST] ' + message['content'] + '[/INST]' }}
{%- endif %}
{%- elif message['role'] == 'assistant' %}
{{- ' ' + message['content'] + eos_token}}
{%- else %}
{{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}
{%- endif %}
{%- endfor %}
Using chat eos_token: </s>
Using chat bos_token: <s>
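Unrelated to the GPU problem: the UnicodeDecodeError that interrupts the log above comes from llama_cpp/_logger.py strictly decoding a log chunk that was cut in the middle of a UTF-8 sequence. A possible local workaround, sketched here under the assumption that the bindings expose the llama_log_callback ctypes type and llama_log_set (as the traceback suggests), is to register a callback that decodes leniently:

import sys
import ctypes
import llama_cpp

# Replace the default logging callback with one that tolerates partial
# UTF-8 sequences instead of raising UnicodeDecodeError.
@llama_cpp.llama_log_callback
def _lenient_log_callback(level, text, user_data):
    print(text.decode('utf-8', errors='replace'), end='', flush=True, file=sys.stderr)

llama_cpp.llama_log_set(_lenient_log_callback, ctypes.c_void_p(0))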
@alpopesc @vador31 @elegos
Looks like I've solved the problem. Is everyone who's affected using Windows + WSL2?
This is what helped me solve the problem.
For Windows 11 + WSL2 + Ubuntu 24.04 LTS (and yes, it no longer causes Windows to freeze):
Create a clean WSL2 instance and update the package repositories:
sudo apt-get -y update && sudo apt-get -y upgrade
Install the latest NVIDIA driver on Windows.
Install the latest CUDA Toolkit in WSL2. On the download page I selected: Linux -> x86_64 -> WSL-Ubuntu -> 2.0 -> deb (network), then followed the installation instructions from the official website:
$ wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
$ sudo apt-get update
$ sudo apt-get -y install cuda-toolkit-12-6
Install the additional packages:
$ sudo apt-get -y install cmake python3-pip
Add paths to CUDA libraries:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Build llama.cpp:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
Install llama-cpp-python:
CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]' --break-system-packages
Note: --break-system-packages is required if pip warns about replacing system Python packages; without it, nothing will be installed.
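After reinstalling, a short smoke test shows whether layers are actually being offloaded. This is a sketch only; the model path is a placeholder, any local GGUF will do:

from llama_cpp import Llama

# A CUDA-enabled build should print an 'offloaded .../.. layers to GPU'
# line and CUDA buffer sizes in the verbose output; a CPU-only build
# reports only CPU buffers, as in the log earlier in this thread.
llm = Llama(
    model_path='/path/to/model.gguf',  # placeholder path
    n_gpu_layers=-1,
    verbose=True,
)
print(llm('[INST]Say hi[/INST]', max_tokens=16)['choices'][0]['text'])

While it runs, nvidia-smi should also show the Python process holding several GiB of GPU memory.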
Edit: The build from source described above was taking quite a while, so I did a little digging and found the solution below in another thread.
set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && pip install --no-cache-dir llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
Note: the command above downloads and installs a prebuilt wheel, so nothing is compiled locally. The prebuilt wheels only support CUDA 12.1 - 12.5.
@AleefBilal Do you have Ubuntu 22.04 as your primary operating system?
@blademoon No, it's Ubuntu 20
@AleefBilal I used the latest available CUDA version. Have you tried the suggested solution yet?
@artyomboyko I'm using the solution that I've suggested:
set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && pip install --no-cache-dir llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
quite extensively on different projects, and it is running fine for now.
On Ubuntu 22 as well.
@AleefBilal OK
@artyomboyko I'm using the solution that I've suggested:
set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && pip install --no-cache-dir llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
quite extensively on different projects, and it is running fine for now. On Ubuntu 22 as well.
This works for me, thanks so much!
@artyomboyko I'm using the solution that I've suggested:
set CMAKE_ARGS="-DLLAMA_CUBLAS=on" && set FORCE_CMAKE=1 && pip install --no-cache-dir llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
quite extensively on different projects, and it is running fine for now. On Ubuntu 22 as well.
Bro, I wasted two whole days trying everything and this worked, god bless you!
@lukaLLM You are not the only one who wasted days. Don't thank me, thank the open source community. Hope you become a part of it as well. :)
@lukaLLM You are not the only one who wasted days. Don't thank me, thank the open source community. Hope you become a part of it as well. :)
Yeah, I plan to actually do it when I get some experience!
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Run Gemma-27b-it on the GPU (see attached test2.py.txt).
Current Behavior
The model runs on the CPU instead:
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
$ lscpu
Nvidia driver and tools:
$ uname -a
$ python3 --version
$ make --version
$ g++ --version
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
pip install -U transformers bitsandbytes gradio accelerate
pip install llama-cpp-python
python3 test2.py
Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.
Try the following:
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/  # delete any old builds
python -m pip install .
cd ./vendor/llama.cpp
Follow llama.cpp's instructions to cmake llama.cpp
Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp
Failure Logs
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.
Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use Github's markdown to cleanly format your logs for easy readability.
Example environment info: