johnsmith0031 / alpaca_lora_4bit

MIT License
535 stars, 84 forks

RuntimeError: expected scalar type Float but found Half #38

Open ehartford opened 1 year ago

ehartford commented 1 year ago

Sorry for all the noise. I'm kinda out of my depth here. It seems like maybe I have a config error or something. Anyone recognize this?

Exception in thread Thread-3 (gentask):
Traceback (most recent call last):
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/eric/git/alpaca_lora_4bit/text-generation-webui/modules/callbacks.py", line 63, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/home/eric/git/alpaca_lora_4bit/text-generation-webui/modules/text_generation.py", line 222, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/peft/peft_model.py", line 581, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/transformers/generation/utils.py", line 1462, in generate
    return self.sample(
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/transformers/generation/utils.py", line 2478, in sample
    outputs = self(
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 710, in forward
    outputs = self.model(
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 598, in forward
    layer_outputs = decoder_layer(
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 313, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 214, in forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/eric/miniconda3/envs/textgen2/lib/python3.10/site-packages/peft/tuners/lora.py", line 686, in forward
    result = super().forward(x)
  File "/home/eric/git/alpaca_lora_4bit/text-generation-webui/autograd_4bit.py", line 63, in forward
    out = mm4b.matmul4bit(x, self.qweight, self.scales,
  File "/home/eric/git/alpaca_lora_4bit/text-generation-webui/matmul_utils_4bit.py", line 109, in matmul4bit
    output = _matmul4bit_v1_recons(x, qweight, scales, zeros)
  File "/home/eric/git/alpaca_lora_4bit/text-generation-webui/matmul_utils_4bit.py", line 81, in _matmul4bit_v1_recons
    output = torch.matmul(x, buffer)
RuntimeError: expected scalar type Float but found Half
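For context, the bottom frame shows the cause: `torch.matmul` requires both operands to share a dtype, and here float32 activations meet a half-precision reconstructed weight buffer. A minimal sketch reproducing the mismatch outside the repo (the tensor shapes are arbitrary, not the model's):

```python
import torch

# Float32 activations meeting a float16 (Half) weight buffer, as in the
# _matmul4bit_v1_recons frame of the traceback above.
x = torch.randn(2, 4, dtype=torch.float32)       # activations
buffer = torch.randn(4, 3, dtype=torch.float16)  # dequantized weights

try:
    torch.matmul(x, buffer)                      # dtypes disagree
except RuntimeError as e:
    print(f"mismatch: {e}")

# The usual fix is to cast one operand so the dtypes agree first:
out = torch.matmul(x, buffer.to(x.dtype))
print(out.dtype)  # torch.float32
```

The fix referenced below takes the same general approach: make sure both sides of the matmul carry the same dtype before the call.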
ehartford commented 1 year ago

Applying the PR fixed it; I'm up and running. Slow, but it works! https://github.com/johnsmith0031/alpaca_lora_4bit/pull/37/files
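For anyone else wanting to try the patch before it is merged, GitHub exposes every pull request as a fetchable ref; a sketch inside an existing clone (the local branch name `pr-37` is arbitrary):

```shell
# Check out PR #37 as a local branch
git fetch origin pull/37/head:pr-37
git checkout pr-37

# Or apply the raw diff in place without switching branches:
# curl -L https://github.com/johnsmith0031/alpaca_lora_4bit/pull/37.diff | git apply
```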

schwab commented 1 year ago

How did you apply that fix? Does it work with the Dockerfile? I tried editing the file directly and rebuilding the Docker image, but it still gives this error. Is there a plan to merge this fix?
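If the fix has to live inside an image build, one option is to apply the PR's diff as a build step; a sketch, assuming the Dockerfile clones the repo into `/app/alpaca_lora_4bit` (that path is a guess, not the repo's actual layout):

```dockerfile
# Hypothetical build step: apply PR #37's diff after the repo is cloned.
# Adjust /app/alpaca_lora_4bit to wherever the real Dockerfile puts the code.
RUN curl -L https://github.com/johnsmith0031/alpaca_lora_4bit/pull/37.diff \
    | git -C /app/alpaca_lora_4bit apply
```

Also note that if the image was rebuilt but the error persists, a cached layer may be serving the old file; rebuilding with `docker build --no-cache` forces every step to rerun.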