AutoGPTQ / AutoGPTQ

An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
MIT License

Will AutoGPTQ support LoRA training for Llama 2? #224

Open wudijimao opened 1 year ago

wudijimao commented 1 year ago

I tried to train a LoRA with AutoGPTQ v3.0 and got this error:

```
Exception in thread Thread-17 (threaded_run):
Traceback (most recent call last):
  File "E:\chat\text-generation-webui\conda\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "E:\chat\text-generation-webui\conda\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "E:\chat\text-generation-webui\modules\training.py", line 665, in threaded_run
    trainer.train()
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\trainer.py", line 1539, in train
    return inner_training_loop(
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\trainer.py", line 1809, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\trainer.py", line 2654, in training_step
    loss = self.compute_loss(model, inputs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\trainer.py", line 2679, in compute_loss
    outputs = model(**inputs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\accelerate\utils\operations.py", line 581, in forward
    return model_forward(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\accelerate\utils\operations.py", line 569, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\peft\peft_model.py", line 947, in forward
    return self.base_model(
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\auto_gptq\modeling\_base.py", line 433, in forward
    return self.model(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\models\llama\modeling_llama.py", line 806, in forward
    outputs = self.model(
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\models\llama\modeling_llama.py", line 693, in forward
    layer_outputs = decoder_layer(
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\transformers\models\llama\modeling_llama.py", line 408, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\auto_gptq\nn_modules\fused_llama_attn.py", line 53, in forward
    qkv_states = self.qkv_proj(hidden_states)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\chat\text-generation-webui\conda\lib\site-packages\peft\tuners\lora.py", line 840, in forward
    result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
RuntimeError: self and mat2 must have the same dtype
```
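For context, the failure at the bottom of the trace is a plain dtype mismatch: under autocast the hidden states arrive as fp16, while the weight the LoRA layer hands to `F.linear` is a different dtype, so the matmul is rejected. A minimal standalone reproduction, with nothing AutoGPTQ-specific (the tensor shapes here are arbitrary):

```python
import torch
import torch.nn.functional as F

# fp16 activations (what autocast feeds the layer) against an fp32 weight:
# the same mismatch F.linear rejects at the bottom of the traceback.
x = torch.randn(2, 8, dtype=torch.float16)
w = torch.randn(4, 8, dtype=torch.float32)

err = None
try:
    F.linear(x, w)
except RuntimeError as e:
    err = e
print(err)  # dtype-mismatch RuntimeError (exact wording varies by torch version)
```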

RonanKMcGovern commented 1 year ago

Yes, I'd be interested in doing PEFT and LoRA fine-tuning. What would need to be updated, @PanQiWei, to make this work?
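In principle the fix is small: cast the activations to each weight's dtype before the matmul, which is what the failing `peft` stack frame doesn't do (later `peft` releases added such casts, and if I'm reading the repo right AutoGPTQ also ships its own adapter helper in `auto_gptq.utils.peft_utils.get_gptq_peft_model`). A hypothetical sketch of a dtype-safe LoRA forward, using bf16 in place of the fp16 from the trace so it also runs on CPU:

```python
import torch
import torch.nn.functional as F

def lora_forward(x, base_weight, lora_A, lora_B, scaling=1.0):
    """Hypothetical LoRA forward pass that casts activations to each
    weight's dtype before matmul -- the cast the failing frame lacks."""
    # Base projection in the base weight's dtype.
    out = F.linear(x.to(base_weight.dtype), base_weight)
    # Low-rank update in the adapter dtype (often fp32), cast back at the end.
    delta = F.linear(F.linear(x.to(lora_A.dtype), lora_A), lora_B) * scaling
    return out + delta.to(out.dtype)

# bf16 activations/base weight with fp32 adapters: the mixed-precision
# combination that crashed above, now handled by the explicit casts.
x = torch.randn(2, 8, dtype=torch.bfloat16)
base_w = torch.randn(4, 8, dtype=torch.bfloat16)
A = torch.randn(3, 8, dtype=torch.float32)  # rank r = 3
B = torch.randn(4, 3, dtype=torch.float32)
y = lora_forward(x, base_w, A, B)
print(y.shape, y.dtype)  # torch.Size([2, 4]) torch.bfloat16
```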

RonanKMcGovern commented 1 year ago

I found this notebook. I haven't tried it yet, but on the surface I don't see why it isn't what's needed.