oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0
40.15k stars, 5.26k forks

After the latest updates I cannot run llama-13b-4bit #421

Closed AndreyRGW closed 1 year ago

AndreyRGW commented 1 year ago
Exception in thread Thread-3 (gentask):
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "F:\WBC\text-generation-webui\modules\callbacks.py", line 65, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "F:\WBC\text-generation-webui\modules\text_generation.py", line 199, in generate_with_callback
    shared.model.generate(**kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 1425, in generate
    return self.contrastive_search(
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 1833, in contrastive_search
    outputs = self(
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 765, in forward
    outputs = self.model(
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 614, in forward
    layer_outputs = decoder_layer(
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 309, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 209, in forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\textgen\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "F:\WBC\text-generation-webui\repositories\GPTQ-for-LLaMa\quant.py", line 198, in forward
    quant_cuda.vecquant4matmul(x, self.qweight, y, self.scales, self.zeros)
AttributeError: module 'quant_cuda' has no attribute 'vecquant4matmul'

Rolling back to commit 9256e937d6e7d34c539b99bcb35183d9cf6fe157, there are no such errors.
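The `AttributeError` above means the `quant_cuda` extension that Python imports does not export the kernel the updated GPTQ-for-LLaMa code calls (typically a stale build). A quick generic way to check which attributes an installed extension actually exposes (a sketch; `has_kernel` is a hypothetical helper, not part of the webui):

```python
import importlib

def has_kernel(module_name, attr):
    """Return True if the module imports cleanly and exposes the given attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        # Module not installed / not built at all
        return False
    return hasattr(mod, attr)

# e.g. has_kernel("quant_cuda", "vecquant4matmul")
# False here means the extension needs to be rebuilt against the current
# GPTQ-for-LLaMa sources (or the repo rolled back to match the build).
```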

IngwiePhoenix commented 1 year ago

I just keep getting LLaMATokenizer as an error...

Got any idea how I get past that?

AndreyRGW commented 1 year ago

> I just keep getting LLaMATokenizer as an error...
>
> Got any idea how I get past that?

I think you can try changing LLaMATokenizer to LlamaTokenizer in the tokenizer_config.json file, which is in the folder with your model.
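That rename can also be scripted instead of edited by hand. A minimal sketch (the function name `fix_tokenizer_class` and the example path are illustrative; point it at the `tokenizer_config.json` inside your model folder):

```python
import json
from pathlib import Path

def fix_tokenizer_class(config_path):
    """Rewrite the deprecated LLaMATokenizer class name to LlamaTokenizer.

    Returns True if the file was patched, False if no change was needed.
    """
    path = Path(config_path)
    config = json.loads(path.read_text(encoding="utf-8"))
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(config, indent=2), encoding="utf-8")
        return True
    return False

# e.g. fix_tokenizer_class("models/llama-13b-4bit/tokenizer_config.json")
```

Running it a second time is a no-op, so it is safe to apply to a model folder that was already fixed.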

IngwiePhoenix commented 1 year ago

That did it, thanks!

AndreyRGW commented 1 year ago
Error (collapsed log: the same traceback as in the opening post, ending in `AttributeError: module 'quant_cuda' has no attribute 'vecquant4matmul'`)

To fix this error, I just decided to completely reinstall my conda environment, and surprisingly it worked. Closing the issue for now.