QwenLM / Qwen2

Qwen2 is the large language model series developed by the Qwen team at Alibaba Cloud.

LoRA fine-tuning of quantized Qwen2 models: loss does not converge #676

Open · DuBaiSheng opened this issue 2 weeks ago

DuBaiSheng commented 2 weeks ago

I ran LoRA fine-tuning with the qwen2-7B-instruct-AWQ and qwen2-7B-instruct-GPTQ-int4 quantized models, and in both cases the loss does not converge: after a few steps the learning rate stops changing. Adjusting the learning rate and the LoRA rank did not help. With the same data, LoRA fine-tuning of qwen2-7B-instruct converges normally.

jklj077 commented 2 weeks ago

Hi, could you provide the command or script used to start the finetuning?

DuBaiSheng commented 1 week ago

```bash
#!/bin/bash
export CUDA_DEVICE_MAX_CONNECTIONS=1
export NCCL_IB_DISABLE=1;
export NCCL_P2P_DISABLE=1

DIR=`pwd`

GPUS_PER_NODE=1
NNODES=${NNODES:-1}
NODE_RANK=${NODE_RANK:-0}
MASTER_ADDR=${MASTER_ADDR:-localhost}
MASTER_PORT=${MASTER_PORT:-6001}

MODEL="/data_nvme/common_data/common_model/Qwen/Qwen2-7B-Instruct"  # Set the path if you do not want to load from huggingface directly
DATA="/nfs_nvme/dubs/common_data/qwen_data/qwen2_summary_240617.jsonl"
DS_CONFIG_PATH="ds_config_zero3.json"
USE_LORA=True
Q_LORA=False

function usage() {
    echo '
Usage: bash finetune/finetune_lora_ds.sh [-m MODEL_PATH] [-d DATA_PATH] [--deepspeed DS_CONFIG_PATH] [--use_lora USE_LORA] [--q_lora Q_LORA]
'
}

while [[ "$1" != "" ]]; do
    case $1 in
        -m | --model )
            shift
            MODEL=$1
            ;;
        -d | --data )
            shift
            DATA=$1
            ;;
        --deepspeed )
            shift
            DS_CONFIG_PATH=$1
            ;;
        --use_lora  )
            shift
            USE_LORA=$1
            ;;
        --q_lora    )
            shift
            Q_LORA=$1
            ;;
        -h | --help )
            usage
            exit 0
            ;;
        * )
            echo "Unknown argument ${1}"
            exit 1
            ;;
    esac
    shift
done

DISTRIBUTED_ARGS="
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

torchrun $DISTRIBUTED_ARGS finetune.py \
    --model_name_or_path $MODEL \
    --data_path $DATA \
    --fp16 True \
    --output_dir output/lora_7b_int4_0617 \
    --num_train_epochs 10 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 200 \
    --save_total_limit 10 \
    --learning_rate 3e-4 \
    --weight_decay 0.01 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --report_to "none" \
    --model_max_length 2024 \
    --lazy_preprocess True \
    --use_lora ${USE_LORA} \
    --q_lora ${Q_LORA} \
    --gradient_checkpointing \
    --deepspeed ${DS_CONFIG_PATH}
```
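For reference, here is a minimal sketch (separate from the script above) of how LoRA adapters are typically attached to a GPTQ-quantized Qwen2 checkpoint with transformers and peft. The checkpoint path, target modules, and hyperparameters below are illustrative assumptions, not the reporter's actual setup:

```python
# Hypothetical sketch: LoRA on a GPTQ-quantized Qwen2 checkpoint (transformers + peft).
# Paths and hyperparameters are placeholders, not taken from this issue.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_path = "Qwen/Qwen2-7B-Instruct-GPTQ-Int4"  # assumed quantized checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # GPTQ kernels run in fp16
    device_map="auto",
)

# Prepare the frozen, quantized base model for adapter training
# (freezes base weights, casts norms, enables input grads for checkpointing).
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices should be trainable
```
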
MikeJackOne commented 1 week ago

> Hi, could you provide the command or script used to start the finetuning?

```python
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = filtered_dataset,
    dataset_text_field = "text1",
    max_seq_length = 1200,
    dataset_num_proc = 2,
    packing = False,
    peft_config = peft_conf,
    args = TrainingArguments(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 64,
        warmup_steps = 5,
        max_steps = 60,
        num_train_epochs = 10,
        learning_rate = 2e-4,
        # fp16 = not torch.cuda.is_bf16_supported(),
        # bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "constant_with_warmup",
        seed = 3407,
        output_dir = "outputs",
        report_to = "wandb",
    ),
)
```

I am encountering an issue when finetuning the Qwen-2-7B model in 4-bit precision: the loss decreases more slowly than expected.

I finetuned the model using a private dataset with over 500 rows. I used the same parameters and dataset to finetune other models in 4-bit precision, such as GLM-4, and they performed perfectly, with the loss decreasing gradually as expected.
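For context, this is a minimal sketch of how a base model is commonly loaded in 4-bit with bitsandbytes before being handed to an SFTTrainer like the one above; the model name and quantization settings are assumptions, not the reporter's exact setup:

```python
# Hypothetical sketch: loading Qwen2-7B in 4-bit with bitsandbytes for QLoRA-style training.
# Names and settings are illustrative; they are not taken from this issue.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # bf16 if the GPU supports it
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B-Instruct",        # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
# `model`, `tokenizer`, and a LoraConfig (`peft_conf`) would then be passed to SFTTrainer as above.
```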

The following image shows the training statistics during the finetuning of Qwen-2-7B. I suspect that after quantization the loss surface is not well-behaved enough for effective finetuning. I haven't experimented with the 16-bit version due to my GPU's 24 GB memory limitation.

[Image: training statistics during Qwen-2-7B 4-bit finetuning]
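As a generic sanity check (not something from this thread), when the loss plateaus while training adapters on a quantized base model it can help to confirm that the LoRA weights actually receive non-zero gradients. A minimal sketch, assuming a peft-wrapped model named `peft_model` and a tokenized batch with labels:

```python
# Hypothetical sanity check: confirm LoRA weights receive gradients on a quantized base model.
# `peft_model` and `batch` are placeholders for the user's own model and training data.
peft_model.train()
outputs = peft_model(**batch)   # forward pass on one tokenized batch with labels
outputs.loss.backward()

for name, param in peft_model.named_parameters():
    if param.requires_grad:
        grad_norm = param.grad.norm().item() if param.grad is not None else 0.0
        print(f"{name}: grad_norm={grad_norm:.6f}")

# All-zero (or None) gradients on the lora_A / lora_B weights would point to a setup
# problem rather than a data or learning-rate problem.
```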