ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

3090: CUDA out of memory, 25GB of VRAM is not enough #807

Closed: illumionous closed this 1 year ago

illumionous commented 1 year ago

The following items must be checked before submission

Issue type

Model training and fine-tuning

Base model

LLaMA-13B

Operating system

Linux

Detailed description of the problem

lr=3e-4
lora_rank=8
lora_alpha=16
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
#lora_trainable="q_proj,v_proj"
#modules_to_save="embed_tokens,lm_head"
modules_to_save=""
lora_dropout=0.05

pretrained_model=/home/wilsontam/pyllama_data/13B_hf
tokenizer_path=/home/wilsontam/pyllama_data/13B_hf

dataset_dir=data.instruct2/train
per_device_train_batch_size=1
per_device_eval_batch_size=1
training_steps=300
gradient_accumulation_steps=16 # 1 GPU, batch size 1 x 16 accumulation steps = effective batch of 16 per update
output_dir=outs/13B_instruct_8-1
#peft_model=path/to/peft/model/dir
validation_file=data.instruct2/val/8-1_val.json
force_resize_embeddings=False
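
For context, a minimal sketch of how the lora_* variables above typically map onto a peft LoraConfig inside the training script. This is an illustration based on the values shown, not the repo's exact code:

```python
# Sketch only: how the lora_* shell variables above usually translate into a
# peft LoraConfig. Values mirror the script settings; not the repo's exact code.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # lora_rank
    lora_alpha=16,              # lora_alpha
    lora_dropout=0.05,          # lora_dropout
    target_modules=[            # lora_trainable
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    modules_to_save=None,       # modules_to_save="" -> no extra full-rank modules kept trainable
)
```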

When I run this script for instruction fine-tuning, I hit an out-of-memory error. First the script reported "The vocab size of the tokenizer must be 49954, but found 49953", so I changed the hard-coded value in the code to 49953; after that, training ran out of GPU memory.
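
As a quick sanity check before launching, something like the following can compare the tokenizer with the checkpoint's embedding size. This is a sketch that reuses the paths from the script above and only the standard transformers API; the 49954 figure comes from the error message, and a 49953/49954 mismatch usually points to the wrong tokenizer or model rather than something to hard-code:

```python
# Sketch: compare the tokenizer vocabulary with the model config's vocab size
# before training. The SFT script expects 49954 per the error message.
from transformers import LlamaTokenizer, AutoConfig

model_path = "/home/wilsontam/pyllama_data/13B_hf"  # paths from the script above

tokenizer = LlamaTokenizer.from_pretrained(model_path)
config = AutoConfig.from_pretrained(model_path)

print("tokenizer vocab:", len(tokenizer))       # expected 49954 by the SFT script
print("model vocab_size:", config.vocab_size)   # embedding rows in the checkpoint
```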

Dependencies (must be provided for code-related issues)

# Please paste your dependency information here

Dependencies were installed exactly as in the tutorial.

Run logs or screenshots

trainable params: 31293440 || all params: 13231006720 || trainable%: 0.23651594064038084
08/02/2023 12:34:08 - INFO - __main__ - model.modules_to_save: None
Traceback (most recent call last):
  File "/home/nlp/lama/Chinese-LLaMA-Alpaca/./scripts/training/run_clm_sft_with_peft.py", line 441, in <module>
    main()
  File "/home/nlp/lama/Chinese-LLaMA-Alpaca/./scripts/training/run_clm_sft_with_peft.py", line 397, in main
    trainer = Trainer(
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/transformers/trainer.py", line 499, in __init__
    self._move_model_to_device(model, args.device)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/transformers/trainer.py", line 741, in _move_model_to_device
    model = model.to(device)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB (GPU 0; 23.70 GiB total capacity; 22.79 GiB already allocated; 130.00 MiB free; 22.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3411336) of binary: /home/nlp/.conda/envs/finetune/bin/python
Traceback (most recent call last):
  File "/home/nlp/.conda/envs/finetune/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/nlp/.conda/envs/finetune/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
./scripts/training/run_clm_sft_with_peft.py FAILED
bigcash commented 1 year ago

Fine-tuning a 13B model should need more than 25GB of VRAM. Try training with ZeRO-3.
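
A minimal sketch of what a ZeRO-3 configuration with CPU offload might look like, written out from Python for illustration. The file name ds_zero3_offload.json and wiring it in via --deepspeed (the standard transformers TrainingArguments option) are assumptions about the launch command, not the repo's shipped config:

```python
# Sketch: write a DeepSpeed ZeRO-3 config with optimizer/parameter offload to CPU.
# Pass it to the launcher, e.g.:
#   torchrun ... run_clm_sft_with_peft.py --deepspeed ds_zero3_offload.json ...
import json

ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    # "auto" lets the HF Trainer fill these in from its own arguments.
    "bf16": {"enabled": "auto"},
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_clipping": "auto",
}

with open("ds_zero3_offload.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```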

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 1 year ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.