FlagOpen / FlagEmbedding

Retrieval and Retrieval-augmented LLMs
MIT License

Accuracy drops after fine-tuning bge-reranker-v2-m3 #858

Closed moon-fall closed 3 months ago

moon-fall commented 3 months ago

I fine-tuned bge-reranker-v2-m3 directly on the official sample data https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/llm_reranker/toy_finetune_data.jsonl with the following command:

torchrun --nproc_per_node 1 \
  -m FlagEmbedding.llm_reranker.finetune_for_instruction.run \
  --output_dir /home/lf/models/bge-reranker-v2-m3-finetune \
  --model_name_or_path /home/lf/models/bge-reranker-v2-m3 \
  --train_data /home/lf/data/toy_finetune_data.jsonl \
  --learning_rate 2e-4 \
  --num_train_epochs 1 \
  --per_device_train_batch_size 1 \
  --gradient_accumulation_steps 16 \
  --dataloader_drop_last True \
  --query_max_len 512 \
  --passage_max_len 512 \
  --train_group_size 16 \
  --logging_steps 1 \
  --save_steps 2000 \
  --save_total_limit 50 \
  --ddp_find_unused_parameters False \
  --gradient_checkpointing \
  --deepspeed stage1.json \
  --warmup_ratio 0.1 \
  --bf16 \
  --use_lora False \
  --use_flash_attn False
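For context, the file passed via --train_data is expected to be JSON lines. A minimal inspection sketch (not from this issue; the query/pos/neg field names follow FlagEmbedding's documented fine-tuning data format and are an assumption here):

import json

# Peek at the training file passed via --train_data to confirm its structure.
# Assumes one JSON object per line with a "query" string and "pos"/"neg" lists.
with open("/home/lf/data/toy_finetune_data.jsonl", "r", encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(f"record {i}: keys = {sorted(record.keys())}")
        print("  query:", record.get("query", "")[:80])
        print("  positives:", len(record.get("pos", [])), "negatives:", len(record.get("neg", [])))
        if i >= 2:  # a few records are enough to see the layout
            break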

Log:

/home/narwal/.local/lib/python3.11/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations
  warnings.warn(
[2024-06-04 14:37:53,116] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
[2024-06-04 14:37:54,829] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-06-04 14:37:54,830] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
06/04/2024 14:37:54 - WARNING - main - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
06/04/2024 14:37:54 - INFO - main - Training/evaluation parameters RetrieverTrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, batch_eval_metrics=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=True, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=False, ddp_timeout=1800, debug=[], deepspeed=stage1.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_steps=None, eval_strategy=IntervalStrategy.NO, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0002, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/home/lf/models/bge-reranker-v2-m3-finetune/runs/Jun04_14-37-52_pro-k8s-gpu-171, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1.0, logging_strategy=IntervalStrategy.STEPS, loss_type=only logits, lr_scheduler_kwargs={}, lr_scheduler_type=SchedulerType.LINEAR, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1.0, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=/home/lf/models/bge-reranker-v2-m3-finetune, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=True, report_to=[], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=/home/lf/models/bge-reranker-v2-m3-finetune, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=2000, save_strategy=IntervalStrategy.STEPS, save_total_limit=50, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, )
06/04/2024 14:37:54 - INFO - main - Model parameters ModelArguments(model_name_or_path='/home/lf/models/bge-reranker-v2-m3', peft_model_path='', config_name=None, tokenizer_name=None, use_lora=False, lora_rank=64, lora_alpha=16, lora_dropout=0.1, target_modules=['q_proj', 'v_proj', 'o_proj', 'down_proj', 'up_proj', 'gate_proj'], save_merged_lora_model=False, use_flash_attn=False, use_slow_tokenizer=False, low_cpu_mem_usage=False, cache_dir='tmp', token=None, from_peft=None, lora_extra_parameters=None)
06/04/2024 14:37:54 - INFO - main - Data parameters DataArguments(train_data='/home/lf/data/toy_finetune_data.jsonl', train_group_size=16, query_max_len=512, passage_max_len=512, max_example_num_per_dataset=100000000, query_instruction_for_retrieval='A: ', passage_instruction_for_retrieval='B: ', cache_path='./data_dir', load_from_disk=False, load_disk_path=None, save_to_disk=False, save_disk_path=None, num_shards=0, save_max_shard_size='50GB', exit_after_save=False)
If you want to use XLMRobertaLMHeadModel as a standalone, add is_decoder=True.
Some weights of XLMRobertaForCausalLM were not initialized from the model checkpoint at /home/lf/models/bge-reranker-v2-m3 and are newly initialized: ['lm_head.bias', 'lm_head.decoder.bias', 'lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
XLMRobertaForCausalLM( (roberta): XLMRobertaModel( (embeddings): XLMRobertaEmbeddings( (word_embeddings): Embedding(250002, 1024, padding_idx=1) (position_embeddings): Embedding(8194, 1024, padding_idx=1) (token_type_embeddings): Embedding(1, 1024) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): XLMRobertaEncoder( (layer): ModuleList( (0-23): 24 x XLMRobertaLayer( (attention): XLMRobertaAttention( (self): XLMRobertaSelfAttention( (query): Linear(in_features=1024, out_features=1024, bias=True) (key): Linear(in_features=1024, out_features=1024, bias=True) (value): Linear(in_features=1024, out_features=1024, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): XLMRobertaSelfOutput( (dense): Linear(in_features=1024, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): XLMRobertaIntermediate( (dense): Linear(in_features=1024, out_features=4096, bias=True) (intermediate_act_fn): GELUActivation() ) (output): XLMRobertaOutput( (dense): Linear(in_features=4096, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (lm_head): XLMRobertaLMHead( (dense): Linear(in_features=1024, out_features=1024, bias=True) (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (decoder): Linear(in_features=1024, out_features=250002, bias=True) ) )
06/04/2024 14:38:02 - INFO - main - Config: XLMRobertaConfig { "_name_or_path": "/home/lf/models/bge-reranker-v2-m3", "architectures": [ "XLMRobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "LABEL_0" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "LABEL_0": 0 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 8194, "model_type": "xlm-roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "output_past": true, "pad_token_id": 1, "position_embedding_type": "absolute", "torch_dtype": "float32", "transformers_version": "4.41.2", "type_vocab_size": 1, "use_cache": true, "vocab_size": 250002 }

06/04/2024 14:38:03 - WARNING - accelerate.utils.other - Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
06/04/2024 14:38:04 - INFO - torch.distributed.distributed_c10d - Added key: store_based_barrier_key:2 to store for rank: 0
06/04/2024 14:38:04 - INFO - torch.distributed.distributed_c10d - Rank 0: Completed store-based barrier for key:store_based_barrier_key:2 with 1 nodes.
[2024-06-04 14:38:05,548] [WARNING] [lr_schedules.py:759:init] total_num_steps 1 is less than warmup_num_steps 1
  0%|          | 0/1 [00:00<?, ?it/s]
/home/narwal/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2717: UserWarning: max_length is ignored when padding=True and there is no truncation strategy. To pad to max length, use padding='max_length'.
  warnings.warn(
/home/narwal/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2717: UserWarning: max_length is ignored when padding=True and there is no truncation strategy. To pad to max length, use padding='max_length'.
  warnings.warn(
100%|██████████| 1/1 [00:01<00:00, 1.43s/it]
tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 2.3477, 'learning_rate': 0, 'epoch': 1.0}
{'train_runtime': 1.4829, 'train_samples_per_second': 6.744, 'train_steps_per_second': 0.674, 'train_loss': 2.34765625, 'epoch': 1.0}
100%|██████████| 1/1 [00:01<00:00, 1.48s/it]

Computing ranking accuracy on toy_finetune_data.jsonl: before fine-tuning the ranking accuracy is 100%; after fine-tuning it is only 10%.

Sample scores before fine-tuning:
[['A man pulls two women down a city street in a rickshaw.', 'A man is in a city.'], ['A man pulls two women down a city street in a rickshaw.', 'A man is a pilot of an airplane.'], ['A man pulls two women down a city street in a rickshaw.', 'It is boring and mundane.'], ['A man pulls two women down a city street in a rickshaw.', 'The morning sunlight was shining brightly and it was warm. '], ['A man pulls two women down a city street in a rickshaw.', 'Two people jumped off the dock.'], ['A man pulls two women down a city street in a rickshaw.', 'People watching a spaceship launch.'], ['A man pulls two women down a city street in a rickshaw.', 'Mother Teresa is an easy choice.'], ['A man pulls two women down a city street in a rickshaw.', "It's worth being able to go at a pace you prefer."]]
tensor([ 5.1926, -10.7530, -11.0055, -9.5263, -10.3903, -11.0282, -11.0232, -9.5248], device='cuda:0', grad_fn=<...>)

Sample scores after fine-tuning all collapse to small values with almost no separation:
[['A man pulls two women down a city street in a rickshaw.', 'A man is in a city.'], ['A man pulls two women down a city street in a rickshaw.', 'A man is a pilot of an airplane.'], ['A man pulls two women down a city street in a rickshaw.', 'It is boring and mundane.'], ['A man pulls two women down a city street in a rickshaw.', 'The morning sunlight was shining brightly and it was warm. '], ['A man pulls two women down a city street in a rickshaw.', 'Two people jumped off the dock.'], ['A man pulls two women down a city street in a rickshaw.', 'People watching a spaceship launch.'], ['A man pulls two women down a city street in a rickshaw.', 'Mother Teresa is an easy choice.'], ['A man pulls two women down a city street in a rickshaw.', "It's worth being able to go at a pace you prefer."]]
tensor([0.3033, 0.3032, 0.3118, 0.2768, 0.2530, 0.3259, 0.2976, 0.2699], device='cuda:0', grad_fn=<...>)
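A minimal sketch (not the reporter's exact script) of how before/after scores can be reproduced with the FlagReranker wrapper and turned into a ranking accuracy; here a query counts as correct when its first positive passage outscores every negative in the same record, and the paths are the ones used in this issue:

import json

from FlagEmbedding import FlagReranker

def ranking_accuracy(model_path: str, data_path: str) -> float:
    # Score [query, passage] pairs and check that the positive comes out on top.
    reranker = FlagReranker(model_path, use_fp16=True)
    correct = total = 0
    with open(data_path, "r", encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            passages = [rec["pos"][0]] + list(rec["neg"])
            scores = reranker.compute_score([[rec["query"], p] for p in passages])
            best = max(range(len(scores)), key=lambda i: scores[i])
            correct += int(best == 0)  # index 0 is the positive passage
            total += 1
    return correct / total

print("base model :", ranking_accuracy("/home/lf/models/bge-reranker-v2-m3",
                                        "/home/lf/data/toy_finetune_data.jsonl"))
print("fine-tuned :", ranking_accuracy("/home/lf/models/bge-reranker-v2-m3-finetune",
                                       "/home/lf/data/toy_finetune_data.jsonl"))

Note that the training log above shows the llm_reranker recipe wrapping the checkpoint as XLMRobertaForCausalLM with a newly initialized lm_head, so whether the saved output directory loads cleanly back into a sequence-classification-style scorer is itself an assumption in this sketch.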

staoxiao commented 3 months ago

toy_finetune_data.jsonl is just toy data, only used to show the training data format. You need to replace it with your data.
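For anyone swapping in their own data, a hedged sketch of writing it in the same JSON-lines layout the toy file demonstrates; the example record and output filename are made-up placeholders, not data from this issue:

import json

# Hypothetical example: dump your own (query, positives, negatives) triples
# into the query/pos/neg layout so the file can be passed to --train_data.
my_examples = [
    {
        "query": "how do I reset my password?",
        "pos": ["Open Settings > Account > Reset password and follow the emailed link."],
        "neg": [
            "Our offices are closed on public holidays.",
            "The API rate limit is 100 requests per minute.",
        ],
    },
]

with open("my_finetune_data.jsonl", "w", encoding="utf-8") as f:
    for ex in my_examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

The resulting file can then be passed to --train_data in place of the toy file.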

moon-fall commented 3 months ago

toy_finetune_data.jsonl is just toy data, only used to show the training data format. You need to replace it with your data.

The same thing happens when I use my own data, so I think something is wrong.

moon-fall commented 3 months ago

#753