abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

(llama-cpp-python v0.2.57) RuntimeError: Failed to get embeddings from sequence pooling type is not set #1288

Open · Fuehnix opened this issue 5 months ago

Fuehnix commented 5 months ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

I tried running a simple "Hello world" embedding query to confirm llama-cpp-python was working after a clean install of CentOS 9 with CUDA, Python 3.11.8, VSCode, and the rest of my toolchain. Code:

import argparse

from llama_cpp import Llama

llm = Llama(model_path="/home/jfuehne/Desktop/AI/Code/models/llama-2-13b-chat.Q5_K_M.gguf", embedding=True)

print(llm.create_embedding("Hello world!"))

Expected output (this is the output given for this code when I downgrade the llama-cpp-python package to 0.2.55):

ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 19 key-value pairs and 363 tensors from ../llama-2-13b-chat.Q5_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 5120
llama_model_loader: - kv   4:                          llama.block_count u32              = 40
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 13824
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 40
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 40
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 17
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
...
llama_print_timings:      sample time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings: prompt eval time =     386.32 ms /     4 tokens (   96.58 ms per token,    10.35 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time =     386.32 ms /     5 tokens
{'object': 'list', 'data': [{'object': 'embedding', 'embedding': [0.01151881321297587, ..., -0.003823969292499485], 'index': 0}], 'model': '../llama-2-13b-chat.Q5_K_M.gguf', 'usage': {'prompt_tokens': 4, 'total_tokens': 4}}

Current Behavior

Please provide a detailed written description of what llama-cpp-python did, instead.

When using llama-cpp-python 0.2.57 (the version pip installs by default):

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[6], line 8
      3 from llama_cpp import Llama
      6 llm = Llama(model_path="/home/jfuehne/Desktop/AI/Code/models/llama-2-13b-chat.Q5_K_M.gguf", embedding=True)
----> 8 print(llm.create_embedding("Hello world!"))

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/llama_cpp/llama.py:752, in Llama.create_embedding(self, input, model)
    750 embeds: List[List[float]]
    751 total_tokens: int
--> 752 embeds, total_tokens = self.embed(input, return_count=True)  # type: ignore
    754 # convert to CreateEmbeddingResponse
    755 data: List[Embedding] = [
    756     {
    757         "object": "embedding",
   (...)
    761     for idx, emb in enumerate(embeds)
    762 ]

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/llama_cpp/llama.py:863, in Llama.embed(self, input, normalize, truncate, return_count)
    860     p_batch += 1
    862 # hanlde last batch
--> 863 decode_batch(p_batch)
    865 if self.verbose:
    866     llama_cpp.llama_print_timings(self._ctx.ctx)
...
--> 824     raise RuntimeError("Failed to get embeddings from sequence pooling type is not set")
    825 embedding: List[float] = ptr[:n_embd]
    826 if normalize:

RuntimeError: Failed to get embeddings from sequence pooling type is not set

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
    CPU family:          6
    Model:               158
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            13
    CPU(s) scaling MHz:  16%
    CPU max MHz:         5000.0000
    CPU min MHz:         800.0000
    BogoMIPS:            7200.00
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss
                          ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonsto
                         p_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid
                          sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpu
                         id_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust b
                         mi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm
                          ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization features: 
  Virtualization:        VT-x
Caches (sum of all):     
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    2 MiB (8 instances)
  L3:                    16 MiB (1 instance)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:         
  Gather data sampling:  Mitigation; Microcode
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Mitigation; Clear CPU buffers; SMT vulnerable
  Retbleed:              Mitigation; Enhanced IBRS
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                 Mitigation; Microcode
  Tsx async abort:       Mitigation; TSX disabled
$ uname -a
Linux localhost.localdomain 5.14.0-430.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 14 17:54:49 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
$ python3.11 --version
Python 3.11.8
$ make --version
GNU Make 4.3
Built for x86_64-redhat-linux-gnu
Copyright (C) 1988-2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ g++ --version
g++ (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

1. Install Python 3.11.8, CUDA 12.4, and the NVIDIA drivers, then sudo install all the required backend modules for Python, such as:

sudo dnf install gcc make libffi-devel openssl-devel bzip2-devel sqlite-devel readline-devel zlib-devel xz-devel git wget curl python3-venv

2. Set up a venv in VSCode and install the required packages. (In my case, I believe I initially let llama-cpp-python install without specifying CMAKE args; I am not sure whether that is the root cause, but I believe that should only have resulted in running on the CPU-based default until I later reinstalled, right?)

3. Run the provided "Hello world" embedding query with llama_cpp. Code:

import argparse

from llama_cpp import Llama

llm = Llama(model_path="/home/jfuehne/Desktop/AI/Code/models/llama-2-13b-chat.Q5_K_M.gguf", embedding=True)

print(llm.create_embedding("Hello world!"))

di-rse commented 5 months ago

I'm getting the same issue, so would be good to know if you found a solution.

silvioimbo commented 5 months ago

What worked for me was just to downgrade to llama-cpp-python==0.2.47

iamlemec commented 5 months ago

Yeah, right now we don't support getting token level embeddings. So generative models like llama-2 that lack pooling layers won't work.

Are you looking for token level embeddings or sequence level embeddings? If the latter, I would use an embedding model like BAAI/bge-*. This is a more typical approach.

It might actually be a decent idea to just return token level embeddings when sequence level aren't available.
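For the sequence-level case, here is a minimal sketch of that suggestion using the same llama-cpp-python API shown earlier in the issue; the GGUF file name is a hypothetical local conversion of a BAAI/bge-* model:

# Minimal sketch: sequence-level embeddings from a dedicated embedding model.
# The model path is hypothetical; any bge-style embedding GGUF should behave similarly.
from llama_cpp import Llama

emb_model = Llama(
    model_path="./models/bge-small-en-v1.5-f16.gguf",  # hypothetical local file
    embedding=True,
)

result = emb_model.create_embedding("Hello world!")
vector = result["data"][0]["embedding"]  # one pooled vector per input sequence
print(len(vector))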

lone17 commented 5 months ago

This should be related to #1269; 0.2.55 also still works for me.

Fuehnix commented 5 months ago

> I'm getting the same issue, so would be good to know if you found a solution.

I ended up using 0.2.55, and it seems others reached the same conclusion. Later I switched away from llama.cpp for the embedding part, but before that I had it working with 0.2.55.

> Yeah, right now we don't support getting token level embeddings. So generative models like llama-2 that lack pooling layers won't work.
>
> Are you looking for token level embeddings or sequence level embeddings? If the latter, I would use an embedding model like BAAI/bge-*. This is a more typical approach.
>
> It might actually be a decent idea to just return token level embeddings when sequence level aren't available.

I guess I was looking for sequence-level embeddings? I was naively using Llama 2 for embeddings just to see if things worked, but I wasn't aware of any low-level problems with doing that. I've since switched to mpnet from HuggingFaceEmbeddings in LangChain for much better quality results (while still using llama-cpp-python for inference).
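For reference, a rough sketch of that mpnet-via-LangChain setup (not the exact code used here; it assumes the langchain-community and sentence-transformers packages, and the import path depends on the LangChain version):

# Rough sketch of the mpnet embedding setup described above.
from langchain_community.embeddings import HuggingFaceEmbeddings

embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

query_vec = embedder.embed_query("Hello world!")              # one sequence-level vector
doc_vecs = embedder.embed_documents(["doc one", "doc two"])   # one vector per document
print(len(query_vec), len(doc_vecs))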

abetlen commented 5 months ago

@Fuehnix sorry about the trouble, I'm working on a fix to just enable the older behaviour by default in #1272.

vultj commented 5 months ago

Also running into this issue. Have tried all the way up to v0.2.61, seems like only v0.2.55 is working.

penguindark commented 4 months ago

Same error with version 0.2.60.

grhone commented 4 months ago

Still an issue on 0.2.63 and 0.2.64.

r3v1 commented 3 months ago

Still an issue on 0.2.75

iamlemec commented 3 months ago

@r3v1 Is it still raising an error, or is it just that it's returning token level embeddings as a list of lists? Generative models like these don't do pooling intrinsically in llama.cpp, and in fact it's not really recommended to use them for embedding purposes. But if you do need pooled embeddings, you'll have to do it manually from the token level embeddings.
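A small sketch of that manual pooling, assuming llm was created with embedding=True and that Llama.embed returns a list of per-token vectors here, as described above:

# Manual mean pooling over token-level embeddings; a sketch, not a library-provided API.
import numpy as np

token_embeddings = llm.embed("Hello world!")        # list of per-token vectors (list of lists)
pooled = np.mean(np.asarray(token_embeddings, dtype=np.float32), axis=0)
pooled /= np.linalg.norm(pooled)                    # optional L2 normalization
print(pooled.shape)                                 # single sequence-level vector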

r3v1 commented 3 months ago

What if I want to store embeddings in a vector store through LangChain? That should return a single one-dimensional vector per input.

In recent llama-cpp-python versions, setting pooling_type=LLAMA_POOLING_TYPE_MEAN throws:

...
Guessed chat format: llama-3
GGML_ASSERT: /home/david/git/llama-cpp-python/vendor/llama.cpp/llama.cpp:11171: lctx.inp_mean
ptrace: Operation not permitted.
No stack.
The program is not being run.
[1]    29907 IOT instruction (core dumped)

The MWE:

import llama_cpp
from llama_cpp import LLAMA_POOLING_TYPE_MEAN

llm = llama_cpp.Llama(
    model_path="meta-llama-3-8b-instruct.Q4_K_M.gguf",
    embedding=True,
    pooling_type=LLAMA_POOLING_TYPE_MEAN,  # Crashes
)
llm.create_embedding(["Hello world"])

Otherwise, without specifying pooling_type, it returns token-level embeddings.

However, version 0.2.55 works as expected, returning a single sentence-level embedding.

iamlemec commented 3 months ago

Yeah, the LangChain interop code is unfortunately broken right now for getting embeddings from generative models. For it to work in this case, we'd need to implement manual pooling somewhere. But if you're doing anything like retrieval or classification, you can get much better results with smaller embedding models like bge-*/jina/nomic that work as expected here. Check out the MTEB leaderboard on Hugging Face.

I think 0.2.55 should work fine in this case, though I suspect it may fail or crash if you try to do it with more than one sequence per call to create_embedding.

r3v1 commented 3 months ago

Sure, I will take a look. I was trying to do all the steps of the RAG pipeline with a single model for some experimentation.

vultj commented 3 months ago

If you set

pooling_type=llama_cpp.LLAMA_POOLING_TYPE_NONE,

it should work fine. I haven't tested builds later than 0.2.68, but that one seems to work.

I believe LLAMA_POOLING_TYPE_MEAN crashes on older models that lack certain data, so you likely cannot use it at all with them.
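Applying that suggestion to the earlier MWE would look roughly like this (a sketch; behavior may still vary between versions):

# Sketch: request no pooling to avoid the inp_mean assert, then pool manually if needed.
import llama_cpp

llm = llama_cpp.Llama(
    model_path="meta-llama-3-8b-instruct.Q4_K_M.gguf",
    embedding=True,
    pooling_type=llama_cpp.LLAMA_POOLING_TYPE_NONE,
)

# Returns per-token vectors for the input sequence; mean-pool them yourself
# (as sketched earlier in the thread) if a single sentence-level vector is needed.
token_embeddings = llm.embed("Hello world")
print(len(token_embeddings), len(token_embeddings[0]))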