Coobiw / MPP-LLaVA

Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B MLLM with LLaVA-style training on RTX 3090/4090 24GB GPUs.

Learning rate stays at 1e-4 and never decreases? #4

Closed Minami-su closed 10 months ago

Minami-su commented 10 months ago

```yaml
run:
  task: image_text_pretrain
  # optimizer
  lr_sched: "linear_warmup_cosine_lr"
  init_lr: 1e-4
  min_lr: 1e-6
  warmup_lr: 0
  warmup_steps: 500
  weight_decay: 0.05
  grad_norm_clip: 1.
  max_epoch: 1 #5
  batch_size_train: 1 #16
  batch_size_eval: 1
  num_workers: 4
  accum_grad_iters: 16 #1
```

Why does my learning rate stay at 1e-4 after step 500 and never decrease?

Coobiw commented 10 months ago

Cosine decay is not activated during epoch 0; only in the subsequent epochs does the learning rate follow the cosine schedule step by step.
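
For reference, here is a minimal sketch of how a scheduler with this behaviour typically works. It is modelled on the linear_warmup_cosine_lr pattern in lavis/common/optims.py, but the helper names and exact signatures here are illustrative and may differ from the repo:

```python
import math


def warmup_lr_schedule(optimizer, step, max_step, init_lr, max_lr):
    """Linearly ramp the LR from init_lr up to max_lr over max_step steps, then hold."""
    lr = min(max_lr, init_lr + (max_lr - init_lr) * step / max(max_step, 1))
    for group in optimizer.param_groups:
        group["lr"] = lr


def cosine_lr_schedule(optimizer, progress, init_lr, min_lr):
    """Cosine-decay the LR from init_lr down to min_lr as progress goes from 0 to 1."""
    progress = min(progress, 1.0)
    lr = min_lr + (init_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * progress))
    for group in optimizer.param_groups:
        group["lr"] = lr


class LinearWarmupCosineLRScheduler:
    def __init__(self, optimizer, max_epoch, steps_per_epoch,
                 init_lr, min_lr, warmup_steps=0, warmup_start_lr=0.0):
        self.optimizer = optimizer
        self.max_epoch = max_epoch
        self.steps_per_epoch = steps_per_epoch
        self.init_lr = init_lr
        self.min_lr = min_lr
        self.warmup_steps = warmup_steps
        self.warmup_start_lr = warmup_start_lr

    def step(self, cur_epoch, cur_step):
        if cur_epoch == 0:
            # Epoch 0: linear warmup only; once cur_step >= warmup_steps the LR
            # simply sits at init_lr for the rest of the epoch.
            warmup_lr_schedule(self.optimizer, cur_step, self.warmup_steps,
                               self.warmup_start_lr, self.init_lr)
        else:
            # Epochs >= 1: per-step cosine decay from init_lr towards min_lr.
            total = max((self.max_epoch - 1) * self.steps_per_epoch, 1)
            done = (cur_epoch - 1) * self.steps_per_epoch + cur_step
            cosine_lr_schedule(self.optimizer, done / total, self.init_lr, self.min_lr)
```

With `max_epoch: 1` there is only epoch 0, so the cosine branch is never reached: the LR warms up to 1e-4 over the first 500 steps and then stays there, which matches the behaviour reported above.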

Minami-su commented 10 months ago

Would it be possible to change it so that cosine decay already follows the step count within a single epoch, or to skip epoch 0?

Coobiw commented 10 months ago

You can modify lavis/common/optims.py and define your own scheduler there. The basic components in that file are implemented quite clearly, so it is easy to customize.
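
An illustrative sketch of such a custom scheduler follows. The registry decorator, the `step(cur_epoch, cur_step)` signature and the constructor arguments are assumed to mirror the existing schedulers in lavis/common/optims.py, and the scheduler name is hypothetical:

```python
import math

from lavis.common.registry import registry  # assumed import path, as used by the built-in schedulers


@registry.register_lr_scheduler("linear_warmup_cosine_lr_stepwise")  # hypothetical name
class LinearWarmupCosineLRStepwiseScheduler:
    """Warm up for warmup_steps, then cosine-decay per step over the rest of
    training, even when max_epoch == 1 (epoch 0 is not special-cased)."""

    def __init__(self, optimizer, max_epoch, iters_per_epoch,
                 init_lr, min_lr, warmup_steps=0, warmup_start_lr=0.0, **kwargs):
        self.optimizer = optimizer
        self.iters_per_epoch = iters_per_epoch
        self.total_steps = max_epoch * iters_per_epoch
        self.init_lr = init_lr
        self.min_lr = min_lr
        self.warmup_steps = warmup_steps
        self.warmup_start_lr = warmup_start_lr

    def step(self, cur_epoch, cur_step):
        global_step = cur_epoch * self.iters_per_epoch + cur_step
        if global_step < self.warmup_steps:
            # Linear warmup from warmup_start_lr to init_lr.
            lr = self.warmup_start_lr + (self.init_lr - self.warmup_start_lr) \
                * global_step / max(self.warmup_steps, 1)
        else:
            # Per-step cosine decay over the remaining steps of the whole run.
            progress = (global_step - self.warmup_steps) \
                / max(self.total_steps - self.warmup_steps, 1)
            lr = self.min_lr + (self.init_lr - self.min_lr) \
                * 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
        for group in self.optimizer.param_groups:
            group["lr"] = lr
```

If registered this way, the run config would point at it via `lr_sched: "linear_warmup_cosine_lr_stepwise"`. Note that the runner also has to pass `iters_per_epoch` when constructing the scheduler; whether it already does so depends on the codebase, so that wiring may need a small change as well.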
