Closed aniketde closed 1 year ago
We used the original model provided by Meta AI and the code for converting the model to huggingface format mentioned in the alpaca-lora repository. We have not tried to use the model provided by huggingface directly. We will try to reproduce this problem in our free time.
Got it! Was './alpaca-lora-7b' taken from hugging face or was it retrained with alpaca-lora repo?
We retrained with alpaca-lora repo.
We have tried to use the checkpoints from Hugging Face and they work fine. We suspect it might be an issue with your environment.
I have the same problem. Did you resolve it? Thank you.
Hi, I saved the model from 1) https://huggingface.co/decapoda-research/llama-7b-hf and 2) https://huggingface.co/tloen/alpaca-lora-7b
I use the following bash script to train the model:

```bash
echo $1, $2
seed=$2
output_dir='./results'
base_model='./llama-7b-hf'
train_data='./data/movie/train.json'
val_data='./data/movie/valid.json'
instruction_model='./alpaca-lora-7b'
lr=1e-4
dropout=0.05
sample=32
mkdir -p $output_dir
echo "lr: $lr, dropout: $dropout, seed: $seed, sample: $sample"
CUDA_VISIBLE_DEVICES=$1 python -u finetune_rec.py \
    --base_model $base_model \
    --train_data_path $train_data \
    --val_data_path $val_data \
    --output_dir ${output_dir}/${seed}_${sample} \
    --batch_size 128 \
    --micro_batch_size 64 \
    --num_epochs 10 \
    --learning_rate $lr \
    --cutoff_len 512 \
    --lora_r 16 \
    --lora_alpha 16 \
    --lora_dropout $dropout \
    --lora_target_modules '[q_proj,v_proj]' \
    --train_on_inputs \
    --group_by_length \
    --resume_from_checkpoint $instruction_model \
    --sample $sample \
    --seed $2
```

(Note: the original `--output_dir $outputdir$seed_$sample` referenced the undefined variables `$outputdir` and `$seed_`; braces were added so the shell expands the variables actually defined above.)
```
Traceback (most recent call last):
  File "/home/user/project/TALLRec/finetune_rec.py", line 325, in <module>
    fire.Fire(train)
  File "/home/user/anaconda3/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/user/anaconda3/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/user/anaconda3/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/user/project/TALLRec/finetune_rec.py", line 210, in train
    model.print_trainable_parameters()  # Be more transparent about the % of trainable params.
AttributeError: 'NoneType' object has no attribute 'print_trainable_parameters'
```
Please let me know if there is anything I can do to resolve this.
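For what it's worth, the error itself says that `model` is `None` by the time `print_trainable_parameters()` is called at line 210, so the PEFT model was either never constructed or was overwritten. A common way this happens is assigning the return value of an in-place state loader back to `model`. Here is a minimal sketch of that pitfall; `DummyPeftModel` and `set_state_in_place` are hypothetical stand-ins, not the actual TALLRec or PEFT code:

```python
class DummyPeftModel:
    """Hypothetical stand-in for a LoRA-wrapped model."""
    def print_trainable_parameters(self):
        return "trainable params: 4,194,304"

def set_state_in_place(model, state_dict):
    """Hypothetical in-place loader: mutates `model` and returns None."""
    model.state = state_dict  # side effect only; implicit return None

model = DummyPeftModel()

# Correct: call the loader for its side effect only.
set_state_in_place(model, {"lora_A": 1})
assert model.print_trainable_parameters() == "trainable params: 4,194,304"

# Buggy: reassigning the (None) return value clobbers the model,
# reproducing the 'NoneType' AttributeError from the traceback above.
model = set_state_in_place(model, {"lora_A": 1})
try:
    model.print_trainable_parameters()
except AttributeError as e:
    print(e)
```

Given the script above, the likely trigger is `--resume_from_checkpoint $instruction_model`: if `./alpaca-lora-7b` does not contain the adapter files the loading branch expects, `model` can end up unset. Verifying that the directory actually holds the trained adapter weights (or retraining with the alpaca-lora repo, as the authors did) seems like a reasonable first check.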