PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

Issue while trying to ask the model a question. #348

Open sauravm8 opened 1 year ago

sauravm8 commented 1 year ago

I am loading the quantized TheBloke/Llama-2-70B-chat-GPTQ or TheBloke/Llama-2-70B-GPTQ model across multiple GPUs. The model loads, but running a query throws an error:

ValueError: not enough values to unpack (expected 3, got 2)
2023-08-07 09:56:51,730 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: /home/ubuntu/saurav/localGPT/DB

Full stack trace:

Enter a query: Hi
Traceback (most recent call last):
  File "/home/ubuntu/saurav/localGPT/run_localGPT.py", line 278, in <module>
    main()
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/ubuntu/saurav/localGPT/run_localGPT.py", line 256, in main
    res = qa(query)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 120, in _call
    answer = self.combine_documents_chain.run(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 239, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call
    output, extra_return_dict = self.combine_docs(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 87, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate
    return self.llm.generate_prompt(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/llms/base.py", line 134, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/llms/base.py", line 191, in generate
    raise e
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/llms/base.py", line 185, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/llms/base.py", line 436, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
    response = self.pipeline(prompt)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 204, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1128, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1135, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1034, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 265, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 423, in generate
    return self.model.generate(**kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/generation/utils.py", line 1564, in generate
    return self.greedy_search(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/generation/utils.py", line 2457, in greedy_search
    outputs = self(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 810, in forward
    outputs = self.model(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 698, in forward
    layer_outputs = decoder_layer(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 413, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubuntu/saurav/env2/lib/python3.10/site-packages/auto_gptq/nn_modules/fused_llama_attn.py", line 54, in forward
    query_states, key_states, value_states = torch.split(qkv_states, self.hidden_size, dim=2)
ValueError: not enough values to unpack (expected 3, got 2)
2023-08-07 09:56:51,730 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: /home/ubuntu/saurav/localGPT/DB

GPU configuration: 2× A10. When I load a smaller model that fits on a single GPU, it can answer questions.

How can I utilize more than one GPU so that larger models can be used?

mlaszko commented 1 year ago

I have the same issue with the Llama-2-70B-chat-GPTQ model. Have you solved it? The TheBloke/guanaco-65B-GPTQ model works on multiple GPUs without errors.

sauravm8 commented 1 year ago

Yes, use the latest loading template provided by TheBloke, with device='auto'.

mlaszko commented 1 year ago

Where is that template, and how do I use it?

I tried setting device='auto' in run_localGPT.py and got an error.

            model = AutoGPTQForCausalLM.from_quantized(
                model_id,
                model_basename=model_basename,
                use_safetensors=True,
                trust_remote_code=True,
                device="auto",
                use_triton=False,
                quantize_config=None,
            )
python run_localGPT_API.py
load INSTRUCTOR_Transformer
max_seq_length  512
WARNING:auto_gptq.nn_modules.qlinear_old:CUDA extension not installed.
Traceback (most recent call last):
  File "//run_localGPT_API.py", line 67, in <module>
    LLM = load_model(device_type=DEVICE_TYPE, model_id=MODEL_ID, model_basename=MODEL_BASENAME)
  File "/run_localGPT.py", line 79, in load_model
    model = AutoGPTQForCausalLM.from_quantized(
  File "/usr/local/lib/python3.10/dist-packages/auto_gptq/modeling/auto.py", line 82, in from_quantized
    return quant_func(
  File "/usr/local/lib/python3.10/dist-packages/auto_gptq/modeling/_base.py", line 753, in from_quantized
    device = torch.device(device)
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: auto

sauravm8 commented 1 year ago

Please share your CUDA, PyTorch, transformers, and auto-gptq versions.

mlaszko commented 1 year ago

CUDA 11.7, torch 2.0.1, transformers 4.33.1, auto-gptq 0.2.2

sauravm8 commented 1 year ago

Update AutoGPTQ to 0.4.2 or newer, then check TheBloke's page for any GPTQ model to see how he loads it.

Load the model using this new code and hand that object to the HF pipeline.
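
A minimal sketch of that wiring, based on TheBloke's GPTQ model cards rather than localGPT's exact code; the inject_fused_attention choice, the max_new_tokens value, and omitting model_basename are assumptions, not tested settings:

    # Sketch only: assumes auto-gptq >= 0.4.2 is installed (pip install -U auto-gptq).
    from auto_gptq import AutoGPTQForCausalLM
    from transformers import AutoTokenizer, pipeline
    from langchain.llms import HuggingFacePipeline

    model_id = "TheBloke/Llama-2-70B-chat-GPTQ"

    tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
    model = AutoGPTQForCausalLM.from_quantized(
        model_id,
        use_safetensors=True,
        device_map="auto",             # let accelerate spread the layers over both A10s
        inject_fused_attention=False,  # avoid the fused QKV path that failed above
        quantize_config=None,          # model_basename=... may be needed for some uploads
    )

    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)
    llm = HuggingFacePipeline(pipeline=pipe)  # the object localGPT's RetrievalQA chain consumes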

sauravm8 commented 1 year ago

This might also have been solved by the latest commit; please check that as well.

mlaszko commented 1 year ago

With AutoGPTQ 0.4.2 I get this error:

python run_localGPT.py
2023-09-13 12:25:57,631 - INFO - run_localGPT.py:180 - Running on: cuda
2023-09-13 12:25:57,631 - INFO - run_localGPT.py:181 - Display Source Documents set to: False
2023-09-13 12:25:57,837 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length  512
2023-09-13 12:26:01,308 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-13 12:26:01,380 - INFO - run_localGPT.py:45 - Loading Model: TheBloke/Llama-2-70B-chat-GPTQ, on: cuda
2023-09-13 12:26:01,381 - INFO - run_localGPT.py:46 - This action can take a few minutes!
2023-09-13 12:26:01,381 - INFO - run_localGPT.py:68 - Using AutoGPTQForCausalLM for quantized models
2023-09-13 12:26:01,699 - INFO - run_localGPT.py:75 - Tokenizer loaded
2023-09-13 12:26:02,440 - INFO - _base.py:827 - lm_head not been quantized, will be ignored when make_quant.
Traceback (most recent call last):
  File "//run_localGPT.py", line 246, in <module>
    main()
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "//run_localGPT.py", line 209, in main
    llm = load_model(device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME)
  File "//run_localGPT.py", line 77, in load_model
    model = AutoGPTQForCausalLM.from_quantized(
  File "/usr/local/lib/python3.10/dist-packages/auto_gptq/modeling/auto.py", line 108, in from_quantized
    return quant_func(
  File "/usr/local/lib/python3.10/dist-packages/auto_gptq/modeling/_base.py", line 902, in from_quantized
    cls.fused_attn_module_type.inject_to_model(
  File "/usr/local/lib/python3.10/dist-packages/auto_gptq/nn_modules/fused_llama_attn.py", line 163, in inject_to_model
    raise ValueError("Exllama kernel does not support query/key/value fusion with act-order. Please either use inject_fused_attention=False or disable_exllama=True.")
ValueError: Exllama kernel does not support query/key/value fusion with act-order. Please either use inject_fused_attention=False or disable_exllama=True.
My current call (with device commented out):

            model = AutoGPTQForCausalLM.from_quantized(
                model_id,
                model_basename=model_basename,
                use_safetensors=True,
                trust_remote_code=True,
                #device="auto",
                use_triton=False,
                quantize_config=None,
            )
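
The exception message itself suggests two workarounds; a minimal, untested sketch of the same call with those flags applied would look like this (assuming the installed auto-gptq accepts them as keyword arguments, as the error text implies):

            model = AutoGPTQForCausalLM.from_quantized(
                model_id,
                model_basename=model_basename,
                use_safetensors=True,
                trust_remote_code=True,
                use_triton=False,
                quantize_config=None,
                inject_fused_attention=False,  # skip the fused QKV module that act-order models reject
                # disable_exllama=True,        # the alternative suggested by the same error message
            )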

sauravm8 commented 1 year ago

Try loading a GPTQ model in isolation using TheBloke's template and see if it gives the same error.

zzadiues commented 1 year ago

Remove device='auto'; use device_map='auto' instead.

mlaszko commented 1 year ago

I fixed it by changing this in load_models.py:

    model = AutoGPTQForCausalLM.from_quantized(
        model_id,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device_map="auto",
        use_triton=False,
        quantize_config=None,
    )

to

    from transformers import AutoModelForCausalLM  # import needed for this call

    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 device_map="auto",
                                                 trust_remote_code=False,
                                                 revision="main")
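
For context: recent transformers (4.32+, with optimum and auto-gptq installed) can load GPTQ checkpoints directly through from_pretrained, and device_map="auto" lets accelerate shard the layers across the available GPUs. The resulting model object then drops into the same HF pipeline localGPT already builds; a rough sketch, continuing the snippet above (use_fast and max_new_tokens are illustrative values):

    from transformers import AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)
    print(pipe("Hi")[0]["generated_text"])  # quick smoke test before wiring it into RetrievalQA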