which47 / LLMCL

Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning

GPU out-of-memory question #2

Open lin-rany opened 4 months ago

lin-rany commented 4 months ago

I'd like to know roughly how much GPU memory this fine-tuning needs. I used six 4090s but still ran out of memory. Could you help me figure out what's wrong? My run script looks like this:

```shell
data_path='./data_files'
model_name_or_path='/data2/hugo/lin_rany/model/Meta-Llama-3-8B-Instruct'
export NCCL_P2P_DISABLE=1
export NCCL_IB_DISABLE=1
deepspeed --include localhost:2,3,4,5,6,7 main.py \
    --model_name_or_path ${model_name_or_path} \
    --output_dir "./outputs/models/seq" \
    --dataset_name "medmcqa" \
    --per_device_train_batch_size 1 \
    --adapter lora \
    --lora_r 2 \
    --data_path ${data_path}

unset NCCL_P2P_DISABLE
unset NCCL_IB_DISABLE
```

lin-rany commented 4 months ago

I'm using Python 3.10.14.

which47 commented 4 months ago

Thanks for your interest in this work! When training with deepspeed, we recommend specifying a deepspeed_config.json file. Alternatively, you can install the Hugging Face accelerate library, run `accelerate config` in the terminal to choose settings that suit your setup, and then launch your script with `accelerate launch` instead of `deepspeed`. We also plan to push an update to the project soon, so stay tuned.
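
Not part of the original reply, but as an illustration of the suggestion above: a minimal deepspeed_config.json sketch that enables ZeRO stage 3 with optimizer and parameter offload to CPU, which is the usual way to fit an 8B model onto 24 GB cards. The batch size and precision values below are assumptions chosen to match the script in the issue; how the file is passed to main.py (e.g. a `--deepspeed` argument) depends on the repo's argument parsing.

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu" },
    "offload_param": { "device": "cpu" }
  }
}
```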