Future-House / paper-qa

High accuracy RAG for answering questions from scientific documents with citations
Apache License 2.0

Endless loop of llama_print_timings #99

Closed ErfolgreichCharismatisch closed 1 month ago

ErfolgreichCharismatisch commented 1 year ago

I have the following code for QA with llama.cpp, and this is what I get: it keeps outputting llama_print_timings. What should I make of that?

My code is

import os

from paperqa import Docs
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.embeddings import LlamaCppEmbeddings

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# "pfad" (German for path) points at the local GGML model file
pfad = r'gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin'
llm = LlamaCpp(
    model_path=pfad, callback_manager=callback_manager, verbose=False, n_ctx=2048
)

embeddings = LlamaCppEmbeddings(model_path=pfad)

docs = Docs(llm=llm, embeddings=embeddings)

keyword_search = 'definition problem solving'
folder_path = r"pdf\path"

# fill docs with the paths to the PDF files in folder_path
for filename in os.listdir(folder_path):
    if not filename.lower().endswith(".pdf"):
        continue
    path = os.path.join(folder_path, filename)
    try:
        docs.add(path, chunk_chars=500)
    except ValueError as e:
        print('Lesefehler:', path, e)  # "Lesefehler" = read error

answer = docs.query("What is complex problem solving?")
print(answer)

This is the output

python pdfbefragen.py

llama.cpp: loading model from C:\Python\Modelle\gpt4-x-alpaca-13b-native-ggml-model-q4_0\gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin
llama_model_load_internal: format     = ggjt v1 (latest)
llama_model_load_internal: n_vocab    = 32001
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =  73.73 KB
llama_model_load_internal: mem required  = 9807.47 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size  = 1600.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
llama.cpp: loading model from C:\Python\Modelle\gpt4-x-alpaca-13b-native-ggml-model-q4_0\gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin
llama_model_load_internal: format     = ggjt v1 (latest)
llama_model_load_internal: n_vocab    = 32001
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =  73.73 KB
llama_model_load_internal: mem required  = 9807.47 MB (+ 3216.00 MB per state)
llama_init_from_file: kv self size  =  800.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
 Sell, Robert, and Ralf Schimweg. Probleme lösen. In komplexen Zusammenhängen denken. 5., neubearbeitete und erweiterte Auflage. Mit 86 Abbildungen und 19 Tabellen. Springer, Dr.-Ing. Robert Sell, and Dr.-Ing. Ralf Schimweg, MA&T Sell & Partner GmbH, Krantzstraße 7, 52070 Aachen, 2023, p. [book page numbers].
llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 17120.07 ms /   206 tokens (   83.11 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 17121.84 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 17707.49 ms /   192 tokens (   92.23 ms per token)
llama_print_timings:        eval time =   259.75 ms /     1 runs   (  259.75 ms per run)
llama_print_timings:       total time = 17968.70 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 21497.08 ms /   231 tokens (   93.06 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 21498.67 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 16721.38 ms /   158 tokens (  105.83 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 16722.62 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 21038.20 ms /   167 tokens (  125.98 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 21039.57 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 22817.22 ms /   158 tokens (  144.41 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 22818.76 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 25283.89 ms /   175 tokens (  144.48 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 25285.61 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 20272.31 ms /   144 tokens (  140.78 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 20273.81 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 13977.27 ms /   134 tokens (  104.31 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 13978.52 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 15541.65 ms /   160 tokens (   97.14 ms per token)
llama_print_timings:        eval time =   257.92 ms /     1 runs   (  257.92 ms per run)
llama_print_timings:       total time = 15800.82 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 15879.90 ms /   160 tokens (   99.25 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 15881.74 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 14016.32 ms /   146 tokens (   96.00 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 14017.61 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 15166.51 ms /   160 tokens (   94.79 ms per token)
llama_print_timings:        eval time =   261.40 ms /     1 runs   (  261.40 ms per run)
llama_print_timings:       total time = 15429.30 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 15465.17 ms /   155 tokens (   99.78 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 15466.43 ms

llama_print_timings:        load time =   838.40 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 19460.73 ms /   164 tokens (  118.66 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 19462.11 ms
amn-max commented 1 year ago

Facing the same loop. Any idea why?

ErfolgreichCharismatisch commented 1 year ago

What I found:

  1. It loads the model twice.
  2. It parrots the first page of the document.
  3. It finds no answer and outputs an empty string; only the token counts change between runs.

I find it odd that everyone pays for and uses ChatGPT while basically nobody uses local models.

Before we continue with this, watch this video (it starts at the right time, just press play): https://youtu.be/ywT-5yKDtDg?t=2717

jarciniegas20 commented 1 year ago

I found a very good alternative as I was also getting the endless loop of doom, not to mention how unbelievably slow it was. Using sentence transformers with HuggingFaceEmbeddings is incredibly fast and works consistently for me. This is, of course, instead of LlamaCppEmbeddings, which seemed to be the culprit.

from langchain.embeddings import HuggingFaceEmbeddings

embeddings_model = "sentence-transformers/all-MiniLM-L6-v2"
embeddings = HuggingFaceEmbeddings(model_name=embeddings_model)
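
For completeness, here is a minimal sketch of how that embedding object would slot into the setup from the first post. It keeps the LlamaCpp LLM for generation and only swaps the embeddings; the model path and PDF path are placeholders, and the Docs(llm=..., embeddings=...) call assumes the same paper-qa/langchain versions used earlier in this thread:

from paperqa import Docs
from langchain.llms import LlamaCpp
from langchain.embeddings import HuggingFaceEmbeddings

# generation still runs on llama.cpp; only the embedding model changes
llm = LlamaCpp(
    model_path=r'gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin', n_ctx=2048
)
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

docs = Docs(llm=llm, embeddings=embeddings)
docs.add("example.pdf", chunk_chars=500)  # placeholder PDF path
print(docs.query("What is complex problem solving?"))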

bhamadicharef commented 4 months ago

LlamaCppEmbeddings and HuggingFaceEmbeddings are deprecated now ... any updated example?

jamesbraza commented 1 month ago

Hello everyone, we have just released version 5, which completely outsources all LLM management to https://github.com/BerriAI/litellm.

If your issue persists, please open a new issue using paper-qa>=5.
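
For anyone looking for an updated example: a minimal sketch of v5-style usage, assuming the Settings/ask interface described in the current README and a local model served through a LiteLLM-compatible provider (the model names and paper directory below are placeholders, not a confirmed configuration):

from paperqa import Settings, ask

# placeholder model strings; any LiteLLM-routable provider/model should work
settings = Settings(
    llm="ollama/llama3.1",
    summary_llm="ollama/llama3.1",
    embedding="ollama/mxbai-embed-large",
    paper_directory="pdf/path",  # folder of PDFs, as in the original post
)

answer = ask("What is complex problem solving?", settings=settings)
print(answer)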