PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

ggml_new_tensor_impl: not enough space in the scratch memory pool #354

Open · Gokulancv10 opened this issue 1 year ago

Gokulancv10 commented 1 year ago

I'm facing an intermittent issue with model_id TheBloke/Llama-2-13B-chat-GGML (model_basename llama-2-13b-chat.ggmlv3.q4_0.bin) and model_id TheBloke/Llama-2-7B-Chat-GGML (model_basename llama-2-7b-chat.ggmlv3.q4_0.bin).
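
For context, both models are loaded along these lines (a simplified sketch of the loading path in run_localGPT_API.py; the exact arguments in the repo may differ):

```python
# Sketch only; the real values live in run_localGPT.py / run_localGPT_API.py.
from huggingface_hub import hf_hub_download
from langchain.llms import LlamaCpp

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGML",     # model_id
    filename="llama-2-7b-chat.ggmlv3.q4_0.bin",  # model_basename
)
llm = LlamaCpp(
    model_path=model_path,
    n_ctx=4096,       # context window; illustrative, not the repo's exact value
    max_tokens=4096,  # illustrative
)
```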

Full traceback:

ggml_new_tensor_impl: not enough space in the scratch memory pool (needed 557051040, available 536870912)
ERROR:run_localGPT_API:Exception on /api/prompt_route [POST]
Traceback (most recent call last):
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\flask\app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\flask\app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "D:\LocalGPT_PromptEngineering\localGPT\run_localGPT_API.py", line 180, in prompt_route
    res = QA(query_prefix)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\retrieval_qa\base.py", line 120, in _call
    answer = self.combine_documents_chain.run(
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 239, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\combine_documents\base.py", line 84, in _call
    output, extra_return_dict = self.combine_docs(
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\combine_documents\stuff.py", line 87, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\chains\llm.py", line 79, in generate
    return self.llm.generate_prompt(
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\llms\base.py", line 134, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\llms\base.py", line 191, in generate
    raise e
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\llms\base.py", line 185, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\llms\base.py", line 436, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\llms\llamacpp.py", line 225, in _call
    for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\langchain\llms\llamacpp.py", line 274, in stream
    for chunk in result:
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\llama_cpp\llama.py", line 860, in _create_completion
    for token in self.generate(
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\llama_cpp\llama.py", line 688, in generate
    self.eval(tokens)
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\llama_cpp\llama.py", line 418, in eval
    return_code = llama_cpp.llama_eval(
  File "D:\LocalGPT_PromptEngineering\venv\lib\site-packages\llama_cpp\llama_cpp.py", line 561, in llama_eval
    return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
OSError: exception: access violation writing 0x0000000000000050
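
The numbers in the first line show how tight the allocation is: the scratch pool is a fixed 512 MiB (536,870,912 bytes), and this evaluation overshoots it by roughly 19 MiB:

```python
needed, available = 557_051_040, 536_870_912
print(available / 2**20)             # 512.0 -> the pool is exactly 512 MiB
print((needed - available) / 2**20)  # ~19.2 MiB short of what llama_eval wants
```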


lyx102 commented 1 year ago

Me too.

Photon48 commented 1 year ago

Me three! I think it's some sort of RAM problem. My RAM usage jumps to 85% when I run run_localGPT.py and stays there the whole time until I exit the query.
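
One thing worth trying (untested here, and assuming the model is constructed via langchain's LlamaCpp as sketched above) is shrinking the context and batch size so each llama_eval call needs less scratch memory:

```python
from langchain.llms import LlamaCpp

# Hypothetical workaround, not a confirmed fix: smaller context/batch values
# reduce the per-eval scratch allocation inside llama.cpp.
llm = LlamaCpp(
    model_path="llama-2-7b-chat.ggmlv3.q4_0.bin",
    n_ctx=2048,      # try a smaller context window than the default
    n_batch=512,     # tokens fed to each llama_eval call
    max_tokens=512,
)
```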