AkariAsai / self-rag

This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
https://selfrag.github.io/
MIT License

An error when fine-tuning Llama2 #24

Closed Cheung-Z closed 10 months ago

Cheung-Z commented 10 months ago

Hi @AkariAsai, thanks for open-sourcing this. I ran the fine-tuning script with Llama-2-7b-chat-hf on 8×A800 GPUs. I only modified the training params and did not change the training code, but I got an unexpected error.

File "/mnt/data/anaconda3/envs/baichuan2/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 848, in forward
    shift_logits = shift_logits.view(-1, self.config.vocab_size)
RuntimeError: shape '[-1, 0]' is invalid for input of size 262018944
    shift_logits = shift_logits.view(-1, self.config.vocab_size)    
shift_logits = shift_logits.view(-1, self.config.vocab_size)
RuntimeError: RuntimeErrorshape '[-1, 0]' is invalid for input of size 262018944: 
shape '[-1, 0]' is invalid for input of size 262018944
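If I'm reading the numbers right, the 0 in shape '[-1, 0]' is self.config.vocab_size, while the 262018944 elements are consistent with logits over an expanded vocabulary. A quick check (the 32016 figure is an assumption: Llama-2's 32000 base tokens plus 16 added Self-RAG special tokens, with the per-device batch 8 and max_seq_length 1024 from the script below):

# rough sanity check of the tensor size in the error above
per_device_batch = 8
shifted_seq_len = 1024 - 1        # logits/labels are shifted by one position
expanded_vocab = 32016            # assumption: 32000 + 16 added special tokens
print(per_device_batch * shifted_seq_len * expanded_vocab)  # 262018944, matches the error

So the shapes line up with config.vocab_size somehow ending up as 0 rather than the expanded size.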

Here is the fine-tuning script:

MODEL_SIZE=7B
NUM_GPUS=8
BATCH_SIZE_PER_GPU=8
TOTAL_BATCH_SIZE=128
GRADIENT_ACC_STEPS=$(($TOTAL_BATCH_SIZE/$NUM_GPUS/$BATCH_SIZE_PER_GPU))
echo "Training llama model ${MODEL_SIZE} using $NUM_GPUS GPUs, $BATCH_SIZE_PER_GPU batch size per GPU, $GRADIENT_ACC_STEPS gradient accumulation steps"

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch \
    --mixed_precision bf16 \
    --num_machines 1 \
    --num_processes $NUM_GPUS \
    --use_deepspeed \
    --deepspeed_config_file stage3_no_offloading_accelerate.conf \
    finetune.py \
    --model_name_or_path /mnt/model/Llama-2-7b-chat-hf \
    --use_flash_attn \
    --tokenizer_name /mnt/model/Llama-2-7b-chat-hf \
    --use_slow_tokenizer \
    --train_file train.jsonl \
    --max_seq_length 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size $BATCH_SIZE_PER_GPU \
    --gradient_accumulation_steps $GRADIENT_ACC_STEPS \
    --learning_rate 2e-5 \
    --lr_scheduler_type linear \
    --warmup_ratio 0.03 \
    --weight_decay 0. \
    --num_train_epochs 5 \
    --output_dir output/adaptive_${MODEL_SIZE}/ \
    --with_tracking \
    --report_to tensorboard \
    --logging_steps 1 \
    --use_special_tokens

train.jsonl is downloaded from https://huggingface.co/datasets/selfrag/selfrag_train_data/tree/main
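For reference, the batch-size variables at the top of the script resolve to 2 gradient-accumulation steps (not part of the training code, just restating the arithmetic):

# effective batch size: 8 GPUs * 8 per GPU * 2 accumulation steps = 128
NUM_GPUS = 8
BATCH_SIZE_PER_GPU = 8
TOTAL_BATCH_SIZE = 128
print(TOTAL_BATCH_SIZE // NUM_GPUS // BATCH_SIZE_PER_GPU)  # 2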

Cheung-Z commented 10 months ago

Oh, I solved the error by adding one line of code: after the existing token-embedding resize, also update the config's vocab size.

if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))
    model.config.vocab_size = len(tokenizer)  # (+) added line
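In context, the relevant block in finetune.py looks roughly like this (a sketch from memory, not a verbatim copy); the last line is the workaround, keeping the config in sync with the resized embedding matrix that the loss computation (shift_logits.view(-1, self.config.vocab_size)) relies on:

# embeddings grow when --use_special_tokens adds the Self-RAG reflection tokens
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))
    # workaround: forward() reads self.config.vocab_size when reshaping the logits,
    # so make sure it reflects the new vocabulary size as well
    model.config.vocab_size = len(tokenizer)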

AkariAsai commented 10 months ago

Thank you so much for reporting the issue! Hm, I thought the resize_token_embeddings function would automatically update the model config, but I might be wrong. Glad you found the fix!
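Whether that sync happens can be checked on a plain (non-DeepSpeed) load with something like the sketch below; the model path is the one from the script above. If config.vocab_size does get updated here, the mismatch is probably specific to the DeepSpeed/ZeRO-3 launch rather than to resize_token_embeddings itself:

# minimal check (outside the accelerate/DeepSpeed launcher) of whether
# resize_token_embeddings also updates config.vocab_size
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("/mnt/model/Llama-2-7b-chat-hf")
tokenizer = AutoTokenizer.from_pretrained("/mnt/model/Llama-2-7b-chat-hf")
tokenizer.add_special_tokens({"pad_token": "<pad>"})  # any added token grows the vocab

print(model.config.vocab_size)                        # 32000 before resizing
model.resize_token_embeddings(len(tokenizer))
print(model.config.vocab_size)                        # should now equal len(tokenizer)
print(model.get_input_embeddings().weight.shape[0])   # and so should the embedding matrix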