SAI990323 / TALLRec


evaluate error #25

Closed LanPangxiang closed 10 months ago

LanPangxiang commented 11 months ago

Why do I get this error when I execute evaluate.py?

```
Traceback (most recent call last):
  File "evaluate.py", line 231, in <module>
    fire.Fire(main)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "evaluate.py", line 193, in main
    output, logit = evaluate(instructions, inputs)
  File "evaluate.py", line 154, in evaluate
    generation_output = model.generate(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/peft/peft_model.py", line 731, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/transformers/generation/utils.py", line 1437, in generate
    return self.greedy_search(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/transformers/generation/utils.py", line 2248, in greedy_search
    outputs = self(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 687, in forward
    outputs = self.model(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 577, in forward
    layer_outputs = decoder_layer(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 292, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 196, in forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/lpx-llm/lib/python3.8/site-packages/peft/tuners/lora.py", line 565, in forward
    result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
```

SAI990323 commented 11 months ago

It seems that the problem comes from running the model in 16-bit (half precision) on the CPU, which is not supported by bitsandbytes.
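For reference, a minimal sketch of one way to work around this on a CPU-only machine: load the weights in float32 instead of float16, since PyTorch does not implement half-precision matmuls (`addmm_impl_cpu_`) on CPU. The model name and adapter path below are illustrative assumptions, not taken from this thread:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

# Hypothetical CPU-only workaround: half-precision matmuls raise
# RuntimeError: "addmm_impl_cpu_" not implemented for 'Half',
# so load both the base model and the LoRA adapter in float32.
base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # assumed base model; substitute your own path
    torch_dtype=torch.float32,        # not torch.float16 when running on CPU
    low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(
    base_model,
    "./lora-weights",                 # assumed path to the trained LoRA adapter
    torch_dtype=torch.float32,
)
model.eval()
```

Alternatively, running on a CUDA GPU keeps the original fp16/8-bit loading path working as-is.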