jiaweizzhao / GaLore

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Apache License 2.0

support sft? #1

Open NickyDark1 opened 4 months ago

NickyDark1 commented 4 months ago

Same question as the title: does GaLore support supervised fine-tuning (SFT)?

hiyouga commented 4 months ago

You can use GaLore for supervised fine-tuning in LLaMA Factory: https://github.com/hiyouga/LLaMA-Factory/blob/main/examples/extras/galore/galore_adamw.sh
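
For reference, GaLore's optimizers can also be plugged into a plain PyTorch/Hugging Face fine-tuning loop outside LLaMA Factory, following the usage shown in this repo's README. A minimal sketch, assuming an example model name and illustrative hyperparameters (rank, update interval, scale, learning rate):

```python
# Minimal sketch: using galore_torch's GaLoreAdamW in a standard fine-tuning setup.
# Model name and hyperparameters below are illustrative, not taken from this thread.
import torch
from transformers import AutoModelForCausalLM
from galore_torch import GaLoreAdamW

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # example model

# Apply GaLore only to 2D weight matrices (attention / MLP projections);
# all other parameters are optimized as usual.
galore_params, regular_params = [], []
for name, p in model.named_parameters():
    if p.requires_grad and p.dim() == 2 and ("attn" in name or "mlp" in name):
        galore_params.append(p)
    else:
        regular_params.append(p)

param_groups = [
    {"params": regular_params},
    {"params": galore_params, "rank": 128, "update_proj_gap": 200,
     "scale": 0.25, "proj_type": "std"},
]
optimizer = GaLoreAdamW(param_groups, lr=1e-5)

# ...then run a normal SFT training loop (forward pass, loss.backward(), optimizer.step()).
```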

Zou-njust commented 4 months ago

> You can use GaLore for supervised fine-tuning in LLaMA Factory: https://github.com/hiyouga/LLaMA-Factory/blob/main/examples/extras/galore/galore_adamw.sh

Hello, why do I run out of GPU memory when fine-tuning on a single 3090 with this repository's script? The script is as follows:

```sh
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train True \
    --model_name_or_path /home/chatglm2-6b/chatglm2-6b_models \
    --finetuning_type full \
    --template default \
    --dataset_dir data \
    --dataset text_to_cypher_train \
    --cutoff_len 1024 \
    --learning_rate 5e-05 \
    --num_train_epochs 10.0 \
    --max_samples 100000 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 5000 \
    --warmup_steps 0 \
    --optim adamw_8bit \
    --use_galore True \
    --output_dir saves/chatglm2/galore/train_chatglm2_galore_2024-03-09-06-33-42 \
    --fp16 True \
    --galore_rank 16 \
    --galore_update_interval 200 \
    --galore_scale 0.25 \
    --galore_target mlp,attn \
    --plot_loss True
```

I saw that LLaMA Factory lists 28GB for a 7B model, with the note "We report the GaLore results without per-layer weight updates." I set --optim adamw_8bit in my launch command; is that different from galore_adamw8bit_per_layer?

hiyouga commented 4 months ago

@Zou-njust We now support per-layer weight updates: https://github.com/hiyouga/LLaMA-Factory/blob/main/examples/extras/galore/galore_adamw_8bit_bf16.sh
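
For context, per-layer weight updates are the mechanism the GaLore README describes for cutting peak memory: each parameter gets its own optimizer instance that steps inside a gradient hook, so full-model gradients and optimizer states never coexist. A condensed sketch of that pattern, assuming GaLoreAdamW8bit and illustrative rank/scale/learning-rate values (requires PyTorch >= 2.1 for register_post_accumulate_grad_hook):

```python
# Condensed sketch of per-layer weight updates with GaLore's 8-bit optimizer.
# Rank, scale, and lr values are illustrative, not the settings used in this thread.
import torch
from galore_torch import GaLoreAdamW8bit

def setup_per_layer_optimizers(model, lr=1e-5):
    optimizer_dict = {}
    for p in model.parameters():
        if not p.requires_grad:
            continue
        if p.dim() == 2:
            # GaLore's low-rank projection applies to 2D weight matrices.
            group = [{"params": [p], "rank": 128, "update_proj_gap": 200,
                      "scale": 0.25, "proj_type": "std"}]
        else:
            group = [{"params": [p]}]
        optimizer_dict[p] = GaLoreAdamW8bit(group, lr=lr)

    def optimizer_hook(p):
        # Step and zero this parameter's optimizer as soon as its gradient is
        # accumulated, so gradients for the whole model never live at once.
        if p.grad is None:
            return
        optimizer_dict[p].step()
        optimizer_dict[p].zero_grad()

    for p in model.parameters():
        if p.requires_grad:
            p.register_post_accumulate_grad_hook(optimizer_hook)

    return optimizer_dict

# The training loop then only calls loss.backward(); no global optimizer.step()
# is needed, since each parameter is updated inside its own hook.
```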