ymcui / Chinese-LLaMA-Alpaca-2

Chinese LLaMA-2 & Alpaca-2 large language models (second-phase project), plus 64K long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0

After training finishes, sft_lora_model is only a few hundred KB and adapter_model.bin in the checkpoint is empty #389

Closed · greatiliad closed this issue 9 months ago

greatiliad commented 10 months ago

The following items must be checked before submitting

Issue type

Model training and fine-tuning

Base model

Chinese-Alpaca-2 (7B/13B)

Operating system

Linux

Detailed description of the problem

After training finishes, sft_lora_model is only a few hundred KB, and adapter_model.bin in the checkpoint is also empty. Here are the files inside the checkpoint directory:

/opt/notebooks/llm/aplaca_llama2/lora/output/checkpoint-1800
# du -sh *
4.0K    README.md
4.0K    adapter_config.json
4.0K    adapter_model.bin
15G     global_step1800
4.0K    latest
16K     rng_state.pth
852K    sft_lora_model
4.0K    special_tokens_map.json
828K    tokenizer.model
4.0K    tokenizer_config.json
28K     trainer_state.json
8.0K    training_args.bin
24K     zero_to_fp32.py
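For reference, the layout above (the 15G global_step1800 directory, latest, and zero_to_fp32.py) is what DeepSpeed writes when it saves sharded ZeRO states. Below is a minimal sketch, assuming the checkpoint path from the listing, of how to see what the adapter file actually contains; the zero_to_fp32.py consolidation command is only noted in a comment (check that script's --help for its exact arguments).

import torch

# Inspect the saved PEFT adapter: an "empty" adapter_model.bin typically holds
# no (or only zero-sized) tensors, which this makes visible immediately.
sd = torch.load("checkpoint-1800/adapter_model.bin", map_location="cpu")
print(len(sd), "tensors saved")
for name, tensor in list(sd.items())[:10]:
    print(name, tuple(tensor.shape))

# If the real weights still live only in the sharded DeepSpeed state, the bundled
# script can consolidate them into a single fp32 file, e.g.:
#   python checkpoint-1800/zero_to_fp32.py checkpoint-1800 consolidated_fp32.bin

The training script used was: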
# 运行脚本前请仔细阅读wiki(https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/sft_scripts_zh)
# Read the wiki(https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/sft_scripts_zh) carefully before running the script
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
#path/to/hf/llama-2/or/chinese-llama-2/dir/or/model_id
pretrained_model=/models/chinese-alpaca-2-7b
#path/to/chinese-llama-2/tokenizer/dir
chinese_tokenizer_path=/models/chinese-alpaca-2-7b
#path/to/sft/data/dir
dataset_dir=/opt/notebooks/llm/aplaca_llama2/lora/dataset
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=1
max_seq_length=512
output_dir=./output
validation_file=/opt/notebooks/llm/aplaca_llama2/lora/eval/train.json
RANDOM=1
deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --fp16 \
    --num_train_epochs 10 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length ${max_seq_length} \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --load_in_kbits 16 \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False
#    --modules_to_save ${modules_to_save} \
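For orientation, the LoRA flags above correspond roughly to the PEFT configuration sketched below. This is a minimal standalone illustration, not the project's code: run_clm_sft_with_peft.py builds its own equivalent internally, and loading the 7B base model here is only for demonstration.

# Hypothetical sketch of the adapter setup implied by the flags above.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("/models/chinese-alpaca-2-7b")  # pretrained_model from the script
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,                      # lora_rank
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],   # lora_trainable
    # With --modules_to_save enabled, these full matrices are stored alongside the
    # LoRA weights, which is what grows the saved adapter (see the sizes discussed below).
    # modules_to_save=["embed_tokens", "lm_head"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints how many parameters the saved adapter will hold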

Dependencies (must be provided for code-related issues)

bitsandbytes              0.41.1
peft                      0.6.0.dev0
sentence-transformers     2.2.2
sentencepiece             0.1.99
torch                     2.0.1+cu118
torchaudio                2.0.2+cu118
torchvision               0.15.2+cu118
transformers              4.31.0

Run logs or screenshots

[INFO|tokenization_utils_base.py:2210] 2023-11-03 13:08:59,892 >> tokenizer config file saved in ./output/sft_lora_model/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-11-03 13:08:59,892 >> Special tokens file saved in ./output/sft_lora_model/special_tokens_map.json
***** train metrics *****
  epoch                    =       10.0
  train_loss               =     0.3653
  train_runtime            = 0:16:15.89
  train_samples            =        188
  train_samples_per_second =      1.926
  train_steps_per_second   =      1.926
11/03/2023 13:08:59 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:3081] 2023-11-03 13:08:59,899 >> ***** Running Evaluation *****
[INFO|trainer.py:3083] 2023-11-03 13:08:59,899 >>   Num examples = 11
[INFO|trainer.py:3086] 2023-11-03 13:08:59,899 >>   Batch size = 1
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 19.46it/s]
***** eval metrics *****
  epoch                   =       10.0
  eval_loss               =     0.0107
  eval_runtime            = 0:00:00.62
  eval_samples            =         11
  eval_samples_per_second =     17.553
  eval_steps_per_second   =     17.553
  perplexity              =     1.0107
iMountTai commented 10 months ago

Is DeepSpeed using the ZeRO-3 strategy?
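Whether ZeRO-3 is actually in effect can be read straight from the config passed via --deepspeed. A minimal sketch, assuming the ds_zero2_no_offload.json filename from the script above:

# Sketch: print the ZeRO stage requested by the DeepSpeed config.
import json

with open("ds_zero2_no_offload.json") as f:
    cfg = json.load(f)
print(cfg.get("zero_optimization", {}).get("stage"))  # 2 for ZeRO-2, 3 for ZeRO-3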

greatiliad commented 10 months ago

It seems the number of epochs was set too low. If I set epochs to 100 and remove modules_to_save, the result is about 300MB; if I keep modules_to_save, it is about 1.2GB. Is that amount normal?

iMountTai commented 10 months ago

That's normal.
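A back-of-envelope check makes those numbers plausible. The sketch below assumes standard 7B LLaMA-2 shapes (32 layers, hidden size 4096, intermediate size 11008) and a 55296-token vocabulary for the extended Chinese tokenizer; neither figure comes from this thread.

# Rough size estimate for the saved adapter, in fp16 (2 bytes per parameter).
r, layers, hidden, inter, vocab = 64, 32, 4096, 11008, 55296

attn = 4 * r * (hidden + hidden)   # LoRA A+B for q/k/v/o projections, per layer
mlp = 3 * r * (hidden + inter)     # LoRA A+B for gate/up/down projections, per layer
lora_params = layers * (attn + mlp)
embed_params = 2 * vocab * hidden  # embed_tokens + lm_head, saved fully with modules_to_save

to_mb = lambda n: n * 2 / 2**20
print(f"LoRA only:              {to_mb(lora_params):.0f} MB")                 # ~305 MB, i.e. "about 300MB"
print(f"LoRA + modules_to_save: {to_mb(lora_params + embed_params):.0f} MB")  # ~1169 MB, i.e. "about 1.2GB"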

greatiliad commented 9 months ago

In general, when doing LoRA, what is a reasonable value for --num_train_epochs?

iMountTai commented 9 months ago

It depends on the amount of data, the learning rate, and other factors; it still needs tuning, and there is no definitive setting.
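As a point of reference for how the epoch count translates into optimizer steps, the figures from this particular run line up as follows (a sketch using only the numbers reported in the logs above):

import math

# Settings from the run above: 188 training samples, 10 epochs,
# per-device batch size 1, gradient accumulation 1, single GPU.
samples, epochs = 188, 10
effective_batch = 1 * 1 * 1
steps = epochs * math.ceil(samples / effective_batch)
print(steps)  # 1880, consistent with the final checkpoint-1800 (save_steps=200)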

github-actions[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 9 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.