Closed: parthsarthi03 closed this PR 2 days ago
Note: Links to docs will display an error until the docs builds have been completed.
There is 1 currently active SEV. If your PR is affected, please view it below:
As of commit cfd2eb4a3e0a9bb03fc3e71483822947eb6db5b5 with merge base 0c31907a20c6f031c9b891fe1968c7cc69742eeb: :green_heart: Looks good so far! There are no failures yet. :green_heart:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks for the PR! I glanced over it and it looks great! I will review it more carefully tomorrow and merge it if I don't find any issues :)
Consider refactoring this (extracting it into a separate file), because the same setup function is used in full_finetune_single_device.py
(https://github.com/pytorch/torchtune/blob/main/recipes/full_finetune_single_device.py#L496).
Eventually the two copies will fall out of sync.
cc: @felipemello1
(I've hit this same issue and was about to submit a PR, but then noticed this one :))
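For illustration, here is a minimal sketch of what such a shared helper could look like. The name `setup_lr_scheduler`, its location, and its exact signature are assumptions rather than existing torchtune API; it simply mirrors what the single-device recipe's `_setup_lr_scheduler` does.

```python
# Hypothetical shared helper (e.g. in torchtune/training/lr_schedulers.py),
# so both recipes can reuse the same setup logic instead of duplicating it.
from typing import Optional

from omegaconf import DictConfig
from torch.optim import Optimizer
from torch.optim.lr_scheduler import LRScheduler

from torchtune import config


def setup_lr_scheduler(
    cfg_lr_scheduler: Optional[DictConfig],
    optimizer: Optimizer,
    num_training_steps: int,
    last_epoch: int = -1,
) -> Optional[LRScheduler]:
    """Instantiate an optional LR scheduler from a config section.

    Returns None when no scheduler is configured, so callers can simply
    skip lr_scheduler.step() in the training loop.
    """
    if cfg_lr_scheduler is None:
        return None
    return config.instantiate(
        cfg_lr_scheduler,
        optimizer,
        num_training_steps=num_training_steps,
        last_epoch=last_epoch,
    )
```

Each recipe's `_setup_lr_scheduler` could then shrink to a thin wrapper around this helper, which keeps the two recipes from drifting apart.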
Also, it might be worthwhile to add something like:

lr_scheduler:
  _component_: torchtune.training.lr_schedulers.get_cosine_schedule_with_warmup
  num_warmup_steps: 10

to the configs, e.g. for Llama 3.1 (8B/70B); that is better than being unsure which scheduler is being used.
@gordicaleksa, great point! We are currently having some internal discussions about what should be exposed in the recipe and what should be a utility. In general, we are OK with repeating code so that it is easy for people to hack on and make their own changes. But there are cases like this one that seem pretty standard and really don't add much value by being exposed. We will work on making our recipes a bit leaner soon.
Context
What is the purpose of this PR?
Please link to any issues this PR addresses: #1308
Purpose of this PR:
This PR adds support for an optional learning rate scheduler to the FullFinetuneRecipeDistributed class, allowing users to configure and use a learning rate scheduler during training. You can enable it by adding the following to your config file:
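The config snippet that followed here is not included above; based on the scheduler config suggested in the review comment earlier in the thread, it would presumably look something like this (the warmup value is just an example):

```yaml
# Optional: omit this section entirely to train with a constant learning rate.
lr_scheduler:
  _component_: torchtune.training.lr_schedulers.get_cosine_schedule_with_warmup
  num_warmup_steps: 10
```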
Changelog
What are the changes made in this PR?
- Added a `_setup_lr_scheduler` method to initialize the scheduler based on the configuration.
- Updated the `setup` method to call `_setup_lr_scheduler` after computing `self._steps_per_epoch` and `self.global_step`.
- Updated the `train` method to step the scheduler after each optimizer step.
Test plan
Tested on 4 GPUs with various configurations: https://wandb.ai/psarthi/torchtune_lr_scheduler_tests