hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Baichuan2 training behaves abnormally on Windows #3458

Closed · WXLJZ closed this issue 5 months ago

WXLJZ commented 5 months ago

Reproduction

Description: With the same code, fine-tuning works normally on a V100 32G under Linux. I have now switched to a 4090 24G under Windows and slightly reduced the batch size. On Linux, fine-tuning takes at least 1 h 10 min, but on Windows it finished in under 2 minutes. Furthermore, when running the final prediction step I get AssertionError: Provided path (../checkpoints/gnn_04-26-09) does not contain a LoRA weight. I looked inside the checkpoints folder: it only contains a few json files and no bin file at all. In short, something must have gone wrong during training, so the model was never actually fine-tuned.
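
For reference, this is roughly how I inspected the checkpoints folder (a minimal sketch; it assumes a PEFT-style LoRA export, which would contain adapter_config.json plus adapter_model.bin or adapter_model.safetensors):

import os

# List the output directory and check for a LoRA adapter weight file
# (file names assumed from how peft saves LoRA adapters; in my case only json files were present).
ckpt_dir = "../checkpoints/gnn_04-26-09"
files = os.listdir(ckpt_dir)
print("files:", files)
print("has LoRA weight:", any(f in files for f in ("adapter_model.bin", "adapter_model.safetensors")))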

train_args.add_argument("--do_train", action="store_true")
train_args.add_argument("--seed", type=int, default=42)
train_args.add_argument("--model_name_or_path", type=str, default="D:/zhd/models/Baichuan2-7B-Base-LLaMAfied")
train_args.add_argument("--template", type=str, default="default")
train_args.add_argument("--lora_target", type=str, default="all")
train_args.add_argument("--dataset", type=str)
train_args.add_argument("--dataset_dir", type=str, default="../data")
train_args.add_argument("--finetuning_type", type=str, default="lora")
train_args.add_argument("--lora_rank", type=int, default=64)
train_args.add_argument("--output_dir", type=str)
train_args.add_argument("--overwrite_output_dir", action="store_true")
train_args.add_argument("--overwrite_cache", action="store_true")
train_args.add_argument("--per_device_train_batch_size", type=int, default=4)
train_args.add_argument("--per_device_eval_batch_size", type=int, default=4)
train_args.add_argument("--gradient_accumulation_steps", type=int, default=2)
train_args.add_argument("--lr_scheduler_type", type=str, default="cosine")
train_args.add_argument("--evaluation_strategy", type=str, default="steps")
train_args.add_argument("--logging_steps", type=int, default=20)
train_args.add_argument("--save_steps", type=int, default=200)
train_args.add_argument("--save_total_limit", type=int, default=5)
train_args.add_argument("--val_size", type=float, default=0.1)
train_args.add_argument("--learning_rate", type=float, default=8e-5)
train_args.add_argument("--resume_lora_training", action="store_true")
train_args.add_argument("--num_train_epochs", type=float, default=3.0)
train_args.add_argument("--load_best_model_at_end", action="store_true")
train_args.add_argument("--fp16", action="store_true")
train_args.add_argument("--plot_loss", action="store_true")
train_args.add_argument("--ddp_find_unused_parameters", action="store_true")

The training log is pasted below:

04/26/2024 09:09:45 - INFO - llmtuner.tuner.core.parser - Process rank: 0, device: cuda:0, n_gpu: 1
  distributed training: True, compute dtype: torch.float16
04/26/2024 09:09:45 - INFO - llmtuner.tuner.core.parser - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=False,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=20,
evaluation_strategy=steps,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_config=None,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=2,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=8e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=../checkpoints/gnn_04-26-09\runs\Apr26_09-09-45_DESKTOP-2N05PQQ,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=20,
logging_strategy=steps,
lr_scheduler_type=cosine,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_torch,
optim_args=None,
output_dir=../checkpoints/gnn_04-26-09,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=4,
per_device_train_batch_size=4,
predict_with_generate=False,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=None,
run_name=../checkpoints/gnn_04-26-09,
save_on_each_node=False,
save_safetensors=False,
save_steps=200,
save_strategy=steps,
save_total_limit=5,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
sortish_sampler=False,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
04/26/2024 09:09:45 - INFO - llmtuner.dsets.loader - Loading dataset CSU_Instruction/gnn_inst_train.json...
04/26/2024 09:09:45 - WARNING - llmtuner.dsets.utils - Checksum failed: missing SHA-1 hash value in dataset_info.json.
Using custom data configuration default-0b29577f88f5cb4f
Loading Dataset Infos from G:\anaconda\envs\sce\lib\site-packages\datasets\packaged_modules\json
Generating dataset json (C:/Users/dell/.cache/huggingface/datasets/json/default-0b29577f88f5cb4f/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7)
Downloading and preparing dataset json/default to C:/Users/dell/.cache/huggingface/datasets/json/default-0b29577f88f5cb4f/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7...
Downloading took 0.0 min
Checksum Computation took 0.0 min
Generating train split
Generating train split: 4101 examples [00:00, 60510.52 examples/s]
Unable to verify splits sizes.
Dataset json downloaded and prepared to C:/Users/dell/.cache/huggingface/datasets/json/default-0b29577f88f5cb4f/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7. Subsequent calls will reuse this data.
04/26/2024 09:09:46 - WARNING - llmtuner.tuner.core.loader - Checkpoint is not found at evaluation, load the original model.
[INFO|tokenization_utils_base.py:1850] 2024-04-26 09:09:46,716 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:1850] 2024-04-26 09:09:46,717 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:1850] 2024-04-26 09:09:46,717 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:1850] 2024-04-26 09:09:46,717 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:1850] 2024-04-26 09:09:46,717 >> loading file tokenizer_config.json
[INFO|configuration_utils.py:713] 2024-04-26 09:09:47,827 >> loading configuration file D:/zhd/models/Baichuan2-7B-Base-LLaMAfied\config.json
[INFO|configuration_utils.py:775] 2024-04-26 09:09:47,829 >> Model config LlamaConfig {
  "_name_or_path": "D:/zhd/models/Baichuan2-7B-Base-LLaMAfied",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.33.0",
  "use_cache": true,
  "vocab_size": 125696
}

[INFO|modeling_utils.py:2854] 2024-04-26 09:09:47,868 >> loading weights file D:/zhd/models/Baichuan2-7B-Base-LLaMAfied\pytorch_model.bin.index.json
[INFO|modeling_utils.py:1200] 2024-04-26 09:09:47,869 >> Instantiating LlamaForCausalLM model under default dtype torch.float16.
[INFO|configuration_utils.py:768] 2024-04-26 09:09:47,870 >> Generate config GenerationConfig {
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.33.0"
}

Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:10<00:00,  5.50s/it]
[INFO|modeling_utils.py:3643] 2024-04-26 09:09:59,273 >> All model checkpoint weights were used when initializing LlamaForCausalLM.

[INFO|modeling_utils.py:3651] 2024-04-26 09:09:59,274 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at D:/zhd/models/Baichuan2-7B-Base-LLaMAfied.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|modeling_utils.py:3220] 2024-04-26 09:09:59,283 >> Generation config file not found, using a generation config created from the model config.
04/26/2024 09:10:04 - INFO - llmtuner.tuner.core.loader - trainable params: 0 || all params: 7505973248 || trainable%: 0.0000
04/26/2024 09:10:04 - INFO - llmtuner.extras.template - Add pad token: </s>
[INFO|tokenization_utils_base.py:926] 2024-04-26 09:10:04,266 >> Assigning [] to the additional_special_tokens key of the tokenizer
Filter:   0%|                                                                                                                         | 0/4101 [00:00<?, ? examples/s]Caching processed dataset at C:\Users\dell\.cache\huggingface\datasets\json\default-0b29577f88f5cb4f\0.0.0\c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7\cache-0cd665d489bee8b7.arrow
Filter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 4101/4101 [00:00<00:00, 57208.56 examples/s]
Running tokenizer on dataset:   0%|                                                                                                   | 0/4101 [00:00<?, ? examples/s]04/26/2024 09:10:05 - INFO - llmtuner.dsets.preprocess - The number of truncated instructions is 7
Caching processed dataset at C:\Users\dell\.cache\huggingface\datasets\json\default-0b29577f88f5cb4f\0.0.0\c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7\cache-d7fb7fc75ea5709f.arrow
Running tokenizer on dataset:  24%|████████████████████▉                                                                 | 1000/4101 [00:00<00:02, 1265.04 examples/s]04/26/2024 09:10:06 - INFO - llmtuner.dsets.preprocess - The number of truncated instructions is 14
Running tokenizer on dataset:  49%|█████████████████████████████████████████▉                                            | 2000/4101 [00:01<00:01, 1331.08 examples/s]04/26/2024 09:10:06 - INFO - llmtuner.dsets.preprocess - The number of truncated instructions is 14
Running tokenizer on dataset:  73%|██████████████████████████████████████████████████████████████▉                       | 3000/4101 [00:02<00:00, 1359.67 examples/s]04/26/2024 09:10:07 - INFO - llmtuner.dsets.preprocess - The number of truncated instructions is 8
Running tokenizer on dataset:  98%|███████████████████████████████████████████████████████████████████████████████████▉  | 4000/4101 [00:02<00:00, 1371.92 examples/s]04/26/2024 09:10:07 - INFO - llmtuner.dsets.preprocess - The number of truncated instructions is 0
Running tokenizer on dataset: 100%|██████████████████████████████████████████████████████████████████████████████████████| 4101/4101 [00:03<00:00, 1351.92 examples/s] 
(Some custom print output of mine appeared here; removed.)
[INFO|training_args.py:1332] 2024-04-26 09:10:09,288 >> Found safetensors installation, but --save_safetensors=False. Safetensors should be a preferred weights saving format due to security and performance reasons. If your model cannot be saved by safetensors please feel free to open an issue at https://github.com/huggingface/safetensors!
[INFO|training_args.py:1764] 2024-04-26 09:10:09,289 >> PyTorch: setting up devices
[INFO|trainer.py:3115] 2024-04-26 09:10:15,439 >> ***** Running Evaluation *****
[INFO|trainer.py:3117] 2024-04-26 09:10:15,440 >>   Num examples = 4101
[INFO|trainer.py:3120] 2024-04-26 09:10:15,440 >>   Batch size = 4
[WARNING|logging.py:290] 2024-04-26 09:10:15,451 >> You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1026/1026 [03:18<00:00,  5.16it/s]
***** eval metrics *****
  eval_loss               =     2.5301
  eval_runtime            = 0:03:19.12
  eval_samples_per_second =     20.595
  eval_steps_per_second   =      5.152

Expected behavior

I would like this to run normally, just as it does on Linux, but I have no idea what is going wrong. Any help with this problem would be appreciated. Thank you!

Below is the list of dependencies installed in my Windows environment:

accelerate                0.29.3                   pypi_0    pypi
aiofiles                  23.2.1                   pypi_0    pypi
aiohttp                   3.9.5                    pypi_0    pypi
aiosignal                 1.3.1                    pypi_0    pypi
altair                    5.3.0                    pypi_0    pypi
anyio                     4.3.0                    pypi_0    pypi
async-timeout             4.0.3                    pypi_0    pypi
attrs                     23.2.0                   pypi_0    pypi
blas                      1.0                         mkl    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates           2024.3.11            haa95532_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
certifi                   2024.2.2                 pypi_0    pypi
charset-normalizer        3.3.2                    pypi_0    pypi
click                     8.1.7                    pypi_0    pypi
colorama                  0.4.6                    pypi_0    pypi
contourpy                 1.2.1                    pypi_0    pypi
cudatoolkit               11.8.0               hd77b12b_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cycler                    0.12.1                   pypi_0    pypi
datasets                  2.19.0                   pypi_0    pypi
dill                      0.3.8                    pypi_0    pypi
docstring-parser          0.16                     pypi_0    pypi
emoji                     2.11.1                   pypi_0    pypi
eval-type-backport        0.2.0                    pypi_0    pypi
exceptiongroup            1.2.1                    pypi_0    pypi
faiss                     1.7.4           py39cuda112h5889e4e_0_cuda    conda-forge
faiss-gpu                 1.7.4                h949689a_0    conda-forge
fastapi                   0.95.1                   pypi_0    pypi
ffmpy                     0.3.2                    pypi_0    pypi
filelock                  3.13.4                   pypi_0    pypi
fonttools                 4.51.0                   pypi_0    pypi
frozenlist                1.4.1                    pypi_0    pypi
fsspec                    2024.3.1                 pypi_0    pypi
gradio                    3.50.2                   pypi_0    pypi
gradio-client             0.6.1                    pypi_0    pypi
h11                       0.14.0                   pypi_0    pypi
httpcore                  1.0.5                    pypi_0    pypi
httpx                     0.27.0                   pypi_0    pypi
huggingface-hub           0.22.2                   pypi_0    pypi
idna                      3.7                      pypi_0    pypi
importlib-resources       6.4.0                    pypi_0    pypi
intel-openmp              2023.1.0         h59b6b97_46320    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jieba                     0.42.1                   pypi_0    pypi
jinja2                    3.1.3                    pypi_0    pypi
joblib                    1.4.0                    pypi_0    pypi
jsonschema                4.21.1                   pypi_0    pypi
jsonschema-specifications 2023.12.1                pypi_0    pypi
kiwisolver                1.4.5                    pypi_0    pypi
libblas                   3.9.0              20_win64_mkl    conda-forge
libfaiss                  1.7.4           cuda112h33bf9e0_0_cuda    conda-forge
libfaiss-avx2             1.7.4           cuda112h1234567_0_cuda    conda-forge
liblapack                 3.9.0              20_win64_mkl    conda-forge
markdown-it-py            3.0.0                    pypi_0    pypi
markupsafe                2.1.5                    pypi_0    pypi
matplotlib                3.8.4                    pypi_0    pypi
mdurl                     0.1.2                    pypi_0    pypi
mkl                       2023.2.0         h6a75c08_50497    conda-forge
mkl-service               2.4.0            py39h2bbff1b_1    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_fft                   1.3.8            py39h2bbff1b_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mkl_random                1.2.4            py39h59b6b97_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mpmath                    1.3.0                    pypi_0    pypi
multidict                 6.0.5                    pypi_0    pypi
multiprocess              0.70.16                  pypi_0    pypi
networkx                  3.2.1                    pypi_0    pypi
nltk                      3.8.1                    pypi_0    pypi
numpy                     1.26.4           py39h055cbcc_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy-base                1.26.4           py39h65a83cf_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openssl                   3.0.13               h2bbff1b_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
orjson                    3.10.1                   pypi_0    pypi
packaging                 24.0                     pypi_0    pypi
pandas                    2.2.2                    pypi_0    pypi
peft                      0.4.0                    pypi_0    pypi
pillow                    10.3.0                   pypi_0    pypi
pip                       23.3.1           py39haa95532_0    defaults
protobuf                  5.26.1                   pypi_0    pypi
psutil                    5.9.8                    pypi_0    pypi
pyarrow                   16.0.0                   pypi_0    pypi
pyarrow-hotfix            0.6                      pypi_0    pypi
pydantic                  1.10.11                  pypi_0    pypi
pydub                     0.25.1                   pypi_0    pypi
pygments                  2.17.2                   pypi_0    pypi
pyparsing                 3.1.2                    pypi_0    pypi
python                    3.9.19               h1aa4202_0    defaults
python-dateutil           2.9.0.post0              pypi_0    pypi
python-multipart          0.0.9                    pypi_0    pypi
python_abi                3.9                      2_cp39    conda-forge
pytz                      2024.1                   pypi_0    pypi
pyyaml                    6.0.1                    pypi_0    pypi
referencing               0.34.0                   pypi_0    pypi
regex                     2024.4.16                pypi_0    pypi
requests                  2.31.0                   pypi_0    pypi
rich                      13.7.1                   pypi_0    pypi
rouge-chinese             1.0.3                    pypi_0    pypi
rpds-py                   0.18.0                   pypi_0    pypi
safetensors               0.4.3                    pypi_0    pypi
scipy                     1.13.0                   pypi_0    pypi
semantic-version          2.10.0                   pypi_0    pypi
sentencepiece             0.2.0                    pypi_0    pypi
setuptools                68.2.2           py39haa95532_0    defaults
shtab                     1.7.1                    pypi_0    pypi
six                       1.16.0                   pypi_0    pypi
sniffio                   1.3.1                    pypi_0    pypi
sqlite                    3.41.2               h2bbff1b_0    defaults
sse-starlette             2.1.0                    pypi_0    pypi
stanza                    1.4.0                    pypi_0    pypi
starlette                 0.26.1                   pypi_0    pypi
sympy                     1.12                     pypi_0    pypi
tbb                       2021.8.0             h59b6b97_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tiktoken                  0.6.0                    pypi_0    pypi
tokenizers                0.13.3                   pypi_0    pypi
toolz                     0.12.1                   pypi_0    pypi
torch                     2.0.0+cu118              pypi_0    pypi
torchaudio                2.0.1+cu118              pypi_0    pypi
torchvision               0.15.1+cu118             pypi_0    pypi
tqdm                      4.66.2                   pypi_0    pypi
transformers              4.33.0                   pypi_0    pypi
trl                       0.7.2                    pypi_0    pypi
typing-extensions         4.11.0                   pypi_0    pypi
tyro                      0.8.3                    pypi_0    pypi
tzdata                    2024.1                   pypi_0    pypi
ucrt                      10.0.20348.0         haa95532_0    https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
urllib3                   2.2.1                    pypi_0    pypi
uvicorn                   0.29.0                   pypi_0    pypi
vc                        14.2                 h21ff451_1    defaults
vc14_runtime              14.38.33130         h82b7239_18    conda-forge
vs2015_runtime            14.38.33130         hcb4865c_18    conda-forge
websockets                11.0.3                   pypi_0    pypi
wheel                     0.41.2           py39haa95532_0    defaults
xxhash                    3.4.1                    pypi_0    pypi
yarl                      1.9.4                    pypi_0    pypi
zipp                      3.18.1                   pypi_0    pypi

System Info

No response

Others

No response

WXLJZ commented 5 months ago

By the way, one thing I forgot to mention: as the log shows, I did set the validation-set parameters for fine-tuning. When fine-tuning on Linux, validation is performed, but on Windows it was not.

hiyouga commented 5 months ago

Your arguments are wrong. Check them yourself.