FlagOpen / FlagEmbedding

Retrieval and Retrieval-augmented LLMs

General purpose fine-tuning: reproduction results fall short of expectations #181

Open stanpcf opened 1 year ago

stanpcf commented 1 year ago

Hi, I am trying to reproduce the general-purpose fine-tuning stage, but the results fall short of expectations. Could you advise whether some of my parameter settings are wrong? Reproduction of BGE-i w.o. pre-train, run on 32× A100 80G:

  1. Run command:

    deepspeed --hostfile='hostfile_nnode4' \
    --module FlagEmbedding.baai_general_embedding.finetune.run \
    --model_name_or_path hfl/chinese-roberta-wwm-ext-large \
    --train_data /es01/hdd/baichuan/wangyiding/data/MTP/unlabeled/shard_noneg \
    --learning_rate 1e-5 \
    --num_train_epochs 3 \
    --per_device_train_batch_size 800 \
    --dataloader_drop_last True \
    --max_example_num_per_dataset 10000000000 \
    --normlized True \
    --temperature 0.02 \
    --query_max_len 128 \
    --passage_max_len 512 \
    --train_group_size 1 \
    --negatives_cross_device \
    --deepspeed ds_config.json \
    --gradient_checkpointing \
    --fp16 \
    --output_dir output/finetune/chinese-roberta-wwm-ext-large/mtp/unlabel
  2. DeepSpeed config:

    {
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 12,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    
    "bf16": {
        "enabled": "auto"
    },
    
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
            "total_num_steps": "auto"
        }
    },
    
    "zero_optimization": {
        "stage": 1
    },
    
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 100,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
    }
  3. I tuned bs and passage_max_len; the results are as follows. Overall:

| Model | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | Avg |
|---|---|---|---|---|---|---|---|
| M3E (large) | 54.75 | 50.42 | 64.30 | 68.20 | 59.66 | 48.88 | 57.66 |
| BGE-i | 63.90 | 47.71 | 61.67 | 68.59 | 60.12 | 47.73 | 59.00 |
| BGE-i w.o. pre-train | 62.56 | 48.06 | 61.66 | 67.89 | 61.25 | 46.82 | 58.62 |
| BGE-f (bge-large-zh-v1.0) | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | 63.96 |
| bge-large-zh-v1.5 | 70.09 | 54.44 | 81.6 | 72.88 | 65.72 | 48.95 | 64.55 |
| Experiment 1 (bs=800*32, p_max_len=512) | 59.52 | 42.49 | 62.58 | 73.0 | 59.95 | 48.6 | 57.13 |
| Experiment 2 (bs=1200*32, p_max_len=128, max_len(inference) | 59.19 | 42.38 | 62.34 | 72.58 | 59.87 | 48.57 | 56.91 |
| Experiment 2 (bs=1200*32, p_max_len=128, max_len(inference)=128 | 57.58 | 42.38 | 62.34 | 72.53 | 60.05 | 48.69 | 56.52 |

My understanding is that my runs should match the BGE-i w.o. pre-train score (avg = 58.62), but during training the metrics stay far from 58.62 (for example, with p_max_len=512 the first dumped checkpoint, after seeing 10% of the data, scored 55.49, and the final checkpoint only reached 57.13).
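For context, here is a rough sanity check of the effective contrastive batch implied by the flags above (a minimal sketch; the assumption that --negatives_cross_device contrasts each query against every passage gathered across the global batch is mine, not taken verbatim from the repo):

    # Back-of-the-envelope size of the contrastive batch for the run above.
    per_device_train_batch_size = 800
    num_gpus = 32
    train_group_size = 1      # only the positive passage per query

    global_queries = per_device_train_batch_size * num_gpus   # 25,600 queries per step
    global_passages = global_queries * train_group_size       # 25,600 passages per step
    # With cross-device negatives, each query scores against all other passages:
    negatives_per_query = global_passages - 1                 # 25,599 in-batch negatives

    print(global_queries, global_passages, negatives_per_query)

If train_group_size were raised to 2 (as suggested below), each query would carry one extra negative passage and global_passages would roughly double.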

staoxiao commented 1 year ago

train_group_size should be at least 2.

stanpcf commented 1 year ago

> train_group_size should be at least 2.

What value should it be to reproduce BGE-i w.o. pre-train? Also, the C-Pack paper says this stage uses only in-batch negatives, so I set it to 1 here (meaning no negative samples are drawn from the neg field of the JSON).

staoxiao commented 1 year ago

The neg field of this data is basically random samples anyway. In our experiments the smallest train_group_size we used was 2; we have not tried train_group_size=1.
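For illustration, the training data is JSON lines with query/pos/neg fields, and train_group_size is the number of passages grouped with each query (one positive plus train_group_size - 1 passages drawn from neg). The sketch below shows one example record and a simplified version of that grouping; the sampling logic is an assumption for clarity, not the library's actual implementation:

    import json
    import random

    # One record of the JSONL training file (query / pos / neg fields).
    example = {
        "query": "how to reset a forgotten password",
        "pos": ["Open Settings > Account > Reset password and follow the prompts."],
        "neg": [
            "The weather in Beijing is sunny today.",
            "A recipe for tomato and egg stir-fry.",
        ],
    }

    def build_group(record, train_group_size=2):
        """Return the passages paired with one query: 1 positive + (train_group_size - 1) negatives."""
        passages = [random.choice(record["pos"])]
        passages += random.sample(record["neg"], k=train_group_size - 1)
        return passages

    print(json.dumps(build_group(example, train_group_size=2), ensure_ascii=False))

With train_group_size=1 the group contains only the positive, so the neg field is never read and all negatives come from the rest of the batch, which matches the setup the original run was aiming for.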

lucifar777 commented 6 months ago

> train_group_size should be at least 2.

When fine-tuning, is a larger per_device_train_batch_size always better, i.e. should we try to fill up GPU memory? Does this hold for both base and large models?