AGI-Edgerunners / LLM-Adapters

Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
https://arxiv.org/abs/2304.01933
Apache License 2.0

error while running evaluate.py #3


sheli00 commented 1 year ago

After fine-tuning with the example code, I tried to reproduce the evaluation results but ran into the error below. How can I fix it?

Fine-tune command:

WORLD_SIZE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=3192 finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'math_data.json' \
    --output_dir './trained_models/llama-lora' \
    --batch_size 4 \
    --micro_batch_size 1 \
    --num_epochs 3 \
    --learning_rate 3e-4 \
    --cutoff_len 256 \
    --val_set_size 120 \
    --adapter_name lora

Evaluate command:

CUDA_VISIBLE_DEVICES=0 python evaluate.py \
    --model LLaMA-7B \
    --adapter LoRA \
    --dataset SVAMP \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights './trained_models/llama-lora'

Traceback (most recent call last):
  File "evaluate.py", line 283, in <module>
    fire.Fire(main)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/fire/core.py", line 480, in _Fire
    target=component.__name__)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "evaluate.py", line 93, in main
    outputs = evaluate(instruction)
  File "evaluate.py", line 61, in evaluate
    max_new_tokens=max_new_tokens,
  File "/home/root1/zlj/LLM-Adapters/peft/src/peft/peft_model.py", line 584, in generate
    outputs = self.base_model.generate(**kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/transformers/generation/utils.py", line 1534, in generate
    **model_kwargs,
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/transformers/generation/utils.py", line 2814, in beam_search
    output_hidden_states=output_hidden_states,
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/transformers/models/llama/modeling_llama.py", line 696, in forward
    return_dict=return_dict,
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/transformers/models/llama/modeling_llama.py", line 583, in forward
    use_cache=use_cache,
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/transformers/models/llama/modeling_llama.py", line 298, in forward
    use_cache=use_cache,
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/transformers/models/llama/modeling_llama.py", line 196, in forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/root1/zlj/LLM-Adapters/peft/src/peft/tuners/lora.py", line 522, in forward
    result = super().forward(x)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/bitsandbytes/nn/modules.py", line 242, in forward
    out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul
    return MatMul8bitLt.apply(A, B, out, bias, state)
  File "/home/root1/software/miniconda3/envs/llm/lib/python3.7/site-packages/bitsandbytes/autograd/_functions.py", line 360, in forward
    outliers = state.CB[:, state.idx.long()].clone()
TypeError: 'NoneType' object is not subscriptable
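For context on where this fails: the last frames are inside bitsandbytes' 8-bit matmul (MatMul8bitLt), where state.CB is None when the kernel tries to gather outlier columns, so the crash is in the int8 inference path rather than in the LoRA weights themselves. Below is a minimal standalone sketch that sidesteps that path by loading the base model in fp16 instead of 8-bit; it is not the repo's evaluate.py, only the model and adapter paths are taken from the commands above, and the prompt is a made-up example:

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE = "decapoda-research/llama-7b-hf"   # base model from the report above
LORA = "./trained_models/llama-lora"     # adapter weights from the report above

tokenizer = LlamaTokenizer.from_pretrained(BASE)

# Load in fp16 rather than with load_in_8bit=True, so generation never
# enters bitsandbytes' MatMul8bitLt, the frame where state.CB is None.
model = LlamaForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, LORA, torch_dtype=torch.float16)
model.eval()

inputs = tokenizer("What is 15 + 27?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))

If the adapter generates sensibly in fp16, the fine-tuned weights are fine and the issue is confined to the 8-bit loading path; upgrading bitsandbytes may also be worth trying in that case.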

HZQ950419 commented 1 year ago

Hi, the code works fine on my side. Can you try again on an empty GPU (with no other processes running on it)?
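One quick way to verify the "empty GPU" condition before rerunning (a small sketch; device index 0 matches the CUDA_VISIBLE_DEVICES=0 setting above, and it assumes a PyTorch recent enough to expose torch.cuda.mem_get_info):

import torch

# Report free vs. total memory on the GPU evaluate.py will use; if "free"
# is far below "total" before the script even starts, another process is
# still holding the device.
free, total = torch.cuda.mem_get_info(0)
print(f"GPU 0: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")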