datawhalechina / self-llm

"The Open-Source LLM Cookbook" (《开源大模型食用指南》): quickly deploy open-source large models in a Linux environment; a deployment tutorial tailored for users in China
Apache License 2.0

Fine-tuning llama3 8b with peft: loss stays at 0 from step 10 onward #124

Open ykallan opened 4 months ago

ykallan commented 4 months ago

Problem description:

I'm fine-tuning llama3 8b with peft; the training code is essentially the sample code with minor changes. During training the loss is somewhat high for the first 10 steps, and every loss logged after that is 0.0.

Fine-tuning code:


import torch

from datasets import Dataset
import pandas as pd
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq, TrainingArguments, Trainer

from peft import LoraConfig, TaskType, get_peft_model

# Load the JSON file into a pandas DataFrame and then a Hugging Face Dataset
# json_path = r"D:/codes/llm_about/self-llm/dataset/huanhuan.json"
json_path = r"D:/codes/llm_about/self-llm/zzzzz_train/Qwen15B05B/short_name_10k.json"
df = pd.read_json(json_path)
ds = Dataset.from_pandas(df)

pretrained_model = "D:/codes/nlp_about/pretrained_model/hfl_llama-3-chinese-8b"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model, use_fast=False,
                                          trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

print("tokenizer about: ", tokenizer.pad_token, tokenizer.pad_token_id, tokenizer.eos_token_id)

def process_func(example):
    MAX_LENGTH = 256  # the Llama tokenizer splits a single Chinese character into several tokens, so allow a larger max length to keep samples intact
    input_ids, attention_mask, labels = [], [], []
    instruction = tokenizer(
        f"<|start_header_id|>user<|end_header_id|>\n\n{example['instruction'] + ':' + example['input']}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
        add_special_tokens=False)  # add_special_tokens=False: do not prepend special tokens here
    response = tokenizer(f"{example['output']}<|eot_id|>", add_special_tokens=False)
    input_ids = instruction["input_ids"] + response["input_ids"] + [tokenizer.pad_token_id]
    attention_mask = instruction["attention_mask"] + response["attention_mask"] + [1]  # we also want to attend to the eos token, so append 1
    labels = [-100] * len(instruction["input_ids"]) + response["input_ids"] + [tokenizer.pad_token_id]
    if len(input_ids) > MAX_LENGTH:  # truncate
        input_ids = input_ids[:MAX_LENGTH]
        attention_mask = attention_mask[:MAX_LENGTH]
        labels = labels[:MAX_LENGTH]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "labels": labels
    }

tokenized_id = ds.map(process_func, remove_columns=ds.column_names)

model = AutoModelForCausalLM.from_pretrained(pretrained_model,
                                             device_map="auto", torch_dtype=torch.bfloat16)

model.enable_input_require_grads()  # required when gradient checkpointing is enabled

print("model.dtype=", model.dtype)

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    # target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    target_modules=["q_proj", "k_proj", "v_proj"],
    inference_mode=False,  # training mode
    r=8,  # LoRA rank
    lora_alpha=32,  # LoRA alpha, see the LoRA paper for details
    lora_dropout=0.1  # dropout rate
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
model.enable_input_require_grads()

args = TrainingArguments(
    output_dir="./output/llama3",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    logging_steps=10,
    num_train_epochs=16,
    save_steps=300,
    learning_rate=1e-4,
    save_on_each_node=True,
    gradient_checkpointing=True,
    # fp16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_id,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),
)

trainer.train()

Package versions:

peft==0.11.1
torch==2.3.0
transformers==4.37.2

Log output:

tokenizer about:  <|end_of_text|> 128001 128001
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Map: 100%|██████████| 10000/10000 [00:03<00:00, 3302.55 examples/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:08<00:00,  2.12s/it]
model.dtype= torch.bfloat16
trainable params: 4,718,592 || all params: 8,034,979,840 || trainable%: 0.0587
C:\ProgramData\miniconda3\envs\llama\lib\site-packages\accelerate\accelerator.py:446: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches']). Please pass an `accelerate.DataLoaderConfiguration` instead: 
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False)
  warnings.warn(
  0%|          | 0/10000 [00:00<?, ?it/s]`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
C:\ProgramData\miniconda3\envs\llama\lib\site-packages\torch\utils\checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
C:\ProgramData\miniconda3\envs\llama\lib\site-packages\transformers\models\llama\modeling_llama.py:728: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
  0%|          | 10/10000 [00:15<4:08:31,  1.49s/it]{'loss': 609.5891, 'learning_rate': 9.99e-05, 'epoch': 0.02}
  0%|          | 20/10000 [00:30<4:18:18,  1.55s/it]{'loss': 0.0, 'learning_rate': 9.98e-05, 'epoch': 0.03}
  0%|          | 30/10000 [00:45<4:18:13,  1.55s/it]{'loss': 0.0, 'learning_rate': 9.970000000000001e-05, 'epoch': 0.05}
  0%|          | 40/10000 [01:00<3:57:51,  1.43s/it]{'loss': 0.0, 'learning_rate': 9.960000000000001e-05, 'epoch': 0.06}
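
A quick sanity check one might add after the ds.map(...) call (not part of the original script; it reuses the tokenizer and tokenized_id defined above) is to decode one mapped sample and count how many label positions are left unmasked, to rule out the labels being entirely -100:

# Hypothetical sanity check, run right after tokenized_id = ds.map(...):
# decode one sample and count the supervised (non -100) label positions.
sample = tokenized_id[0]
print(tokenizer.decode(sample["input_ids"]))
print("supervised label tokens:",
      sum(l != -100 for l in sample["labels"]), "of", len(sample["labels"]))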
ykallan commented 4 months ago

Later I adjusted the training arguments:


args = TrainingArguments(
    output_dir="./output/llama3",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    logging_steps=10,
    num_train_epochs=16,
    save_steps=300,
    learning_rate=1e-4,
    save_on_each_node=True,
    gradient_checkpointing=True,
    fp16=True,   # enabled this line
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_id,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),
)

trainer.train()

Training then fails with this error:

C:\ProgramData\miniconda3\envs\llama\lib\site-packages\accelerate\accelerator.py:446: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches']). Please pass an `accelerate.DataLoaderConfiguration` instead: 
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False)
  warnings.warn(
  0%|          | 0/10000 [00:00<?, ?it/s]`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
C:\ProgramData\miniconda3\envs\llama\lib\site-packages\torch\utils\checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
C:\ProgramData\miniconda3\envs\llama\lib\site-packages\transformers\models\llama\modeling_llama.py:728: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
Traceback (most recent call last):
  File "D:\codes\llm_about\self-llm\zzzzz_train\llama38B\finetune_llama3_8b.py", line 95, in <module>
    trainer.train()
  File "C:\ProgramData\miniconda3\envs\llama\lib\site-packages\transformers\trainer.py", line 1539, in train
    return inner_training_loop(
  File "C:\ProgramData\miniconda3\envs\llama\lib\site-packages\transformers\trainer.py", line 1911, in _inner_training_loop
    self.accelerator.clip_grad_norm_(
  File "C:\ProgramData\miniconda3\envs\llama\lib\site-packages\accelerate\accelerator.py", line 2269, in clip_grad_norm_
    self.unscale_gradients()
  File "C:\ProgramData\miniconda3\envs\llama\lib\site-packages\accelerate\accelerator.py", line 2219, in unscale_gradients
    self.scaler.unscale_(opt)
  File "C:\ProgramData\miniconda3\envs\llama\lib\site-packages\torch\amp\grad_scaler.py", line 337, in unscale_
    optimizer_state["found_inf_per_device"] = self._unscale_grads_(
  File "C:\ProgramData\miniconda3\envs\llama\lib\site-packages\torch\amp\grad_scaler.py", line 278, in _unscale_grads_
    torch._amp_foreach_non_finite_check_and_unscale_(
RuntimeError: "_amp_foreach_non_finite_check_and_unscale_cuda" not implemented for 'BFloat16'
  0%|          | 0/10000 [00:02<?, ?it/s]
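
For reference, the traceback fails inside scaler.unscale_: with fp16=True the Trainer sets up a fp16 GradScaler, but the model (and its LoRA weights) were loaded with torch_dtype=torch.bfloat16, and that unscale kernel is not implemented for bf16 gradients. A hedged sketch of one way around this, assuming the 3090 (an Ampere card) can use bf16 mixed precision, is to request bf16 instead of fp16:

# Sketch only: keep mixed precision consistent with the bf16-loaded model,
# so the Trainer does not create a fp16 GradScaler at all.
args = TrainingArguments(
    output_dir="./output/llama3",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    logging_steps=10,
    num_train_epochs=16,
    save_steps=300,
    learning_rate=1e-4,
    save_on_each_node=True,
    gradient_checkpointing=True,
    bf16=True,  # matches torch_dtype=torch.bfloat16 used at load time
)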
ykallan commented 4 months ago

The GPU is an RTX 3090; CUDA and cuDNN have been updated to the latest version, 12.1.

nvcc -V:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:36:15_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

nvidia-smi:

Sun May 26 00:17:50 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.85                 Driver Version: 555.85         CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090      WDDM  |   00000000:01:00.0  On |                  N/A |
| 50%   38C    P8             19W /  350W |    1076MiB /  24576MiB |      9%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
KMnO4-zx commented 4 months ago

The loss being 0 may be a mismatch between the peft and transformers packages; try upgrading both to the latest versions. As for the final error, it may be that the 3090 doesn't support bf16 training (even though it can load the model in bf16), or it may be Windows' fault.

If none of the above works, try running it with the image provided in the AutoDL usage tutorial and see what happens.

Problems that show up on Windows are very hard to pin down~
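
As a small aid to the "upgrade peft/transformers" suggestion, a throwaway snippet (hypothetical, not from the report) to confirm what is actually installed after running an upgrade such as pip install -U peft transformers:

# Print the installed versions to confirm the upgrade took effect.
import peft
import torch
import transformers

print("peft:", peft.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)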

Roger-G commented 4 months ago

I ran into this problem too, using the qw7 example, on a V100 with CUDA 11.8 on CentOS.

Charonal commented 4 months ago

I ran into this problem too, with the deepseek example: the loss is 0 after step 10, and after fine-tuning, inference outputs nothing but "!!!!!!!!!!!!!!!!!!!!!!!!"

x6p2n9q8a4 commented 3 months ago

Why do we need tokenizer.pad_token = tokenizer.eos_token?
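
(For illustration only, not an authoritative answer: base Llama-3 tokenizers typically ship without a pad token, while DataCollatorForSeq2Seq(padding=True) needs a pad id to batch variable-length samples, so the script reuses the eos token as the pad token. A minimal sketch, assuming the same local checkpoint path as the script above:)

# Sketch: show why a pad token has to be assigned before padding batches.
from transformers import AutoTokenizer

pretrained_model = "D:/codes/nlp_about/pretrained_model/hfl_llama-3-chinese-8b"  # path from the script above
tok = AutoTokenizer.from_pretrained(pretrained_model, use_fast=False, trust_remote_code=True)
print(tok.pad_token)           # typically None for base Llama-3 checkpoints
tok.pad_token = tok.eos_token  # give DataCollatorForSeq2Seq a pad id (<|end_of_text|>)
print(tok.pad_token, tok.pad_token_id)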