SolidRusT / srt-model-quantizing

Collection of scripts for quantizing data models
MIT License

Llama 3.1 Quantization - Expected all tensors to be on the same device #3

Closed vackosar closed 1 month ago

vackosar commented 1 month ago

I wanted to quantize cognitivecomputations/dolphin-2.9.4-llama3.1-8b, but I am getting an error:

import os

# Model to quantize and the name to use for the quantized output
model_name = "cognitivecomputations/dolphin-2.9.4-llama3.1-8b"
model_name_awq = model_name.split('/')[1] + '-AWQ'

# Export both so the shell commands below can read them
os.environ['model_name'] = model_name
os.environ['model_name_awq'] = model_name_awq

!mkdir tmp
!python srt-model-quantizing/awq/run-quant-awq.py --model_path $model_name --quant_path ./tmp/$model_name_awq --zero_point True --q_group_size 128 --w_bit 4 --version GEMM

Output

....
Repo card metadata block was not found. Setting CardData to empty.
Downloading data: 100% 471M/471M [00:06<00:00, 71.0MB/s]
Generating validation split: 100% 214670/214670 [00:20<00:00, 10520.61 examples/s]
Traceback (most recent call last):
  File "/content/srt-model-quantizing/awq/run-quant-awq.py", line 41, in <module>
    quantize_model(args.model_path, args.quant_path, quant_config)
  File "/content/srt-model-quantizing/awq/run-quant-awq.py", line 13, in quantize_model
    model.quantize(tokenizer, quant_config=quant_config)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/awq/models/base.py", line 213, in quantize
    self.quantizer = AwqQuantizer(
  File "/usr/local/lib/python3.10/dist-packages/awq/quantize/quantizer.py", line 69, in __init__
    self.modules, self.module_kwargs, self.inps = self.init_quant(
  File "/usr/local/lib/python3.10/dist-packages/awq/quantize/quantizer.py", line 570, in init_quant
    self.model(samples.to(next(self.model.parameters()).device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 1189, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 977, in forward
    position_embeddings = self.rotary_emb(hidden_states, position_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 209, in forward
    freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_bmm)
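
For reference, judging from the traceback, the failing path boils down to roughly this (my reconstruction; I'm assuming run-quant-awq.py uses AutoAWQ's standard AutoAWQForCausalLM API, so the details may differ):

# Rough reconstruction of quantize_model() from the traceback above.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

def quantize_model(model_path, quant_path, quant_config):
    model = AutoAWQForCausalLM.from_pretrained(model_path)
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    # AwqQuantizer.init_quant() moves the calibration samples to the device of
    # the first model parameter, but the Llama rotary-embedding tensors can end
    # up on a different device, which triggers the cpu/cuda:0 mismatch above.
    model.quantize(tokenizer, quant_config=quant_config)  # <- raises RuntimeError
    model.save_quantized(quant_path)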

How can I fix this?

suparious commented 1 month ago

I had to do some weird stuff like:

# Move inputs to the same device as the model
device = next(model.parameters()).device
inputs = {k: v.to(device) for k, v in inputs.items()}
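
If you want to try it outside the repo script, here is a minimal standalone sketch of the same idea (assuming AutoAWQ's standard API and a single GPU with enough memory for the fp16 model; quant_path is just a placeholder):

import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "cognitivecomputations/dolphin-2.9.4-llama3.1-8b"
quant_path = "./tmp/dolphin-2.9.4-llama3.1-8b-AWQ"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Move the whole model (parameters *and* buffers, including the
# rotary-embedding inv_freq buffer) onto one device before quantizing,
# so it matches the device the calibration samples are sent to.
if torch.cuda.is_available():
    model.model.to("cuda:0")

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)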

Seems the AWQ part of this repo is working again.