In this PR we have fixed multiple problems related to resuming from a checkpoint. In particular:

- When resuming from a checkpoint, `algo.learning_starts` was read from the old config; if the buffer was not saved in the checkpoint, then on the first training iteration the `Ratio` class, trying to keep up with the replay ratio after the `algo.learning_starts` steps, would output a huge number of `per_rank_gradient_steps` to run, which could lead to OOM or to a really slow training iteration (see the sketch after this list).
- Now the user must specify `algo.learning_starts` even when resuming from a checkpoint; set `algo.learning_starts=0` to disable the buffer pre-fill.
- The `algo.learning_starts` steps are no longer taken into consideration in the replay-ratio computation, also when resuming from a checkpoint.
- Updated the checkpoint how-to.
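To make the failure mode concrete, here is a minimal sketch of a replay-ratio tracker. The class name, signature, and internals below are illustrative assumptions, not sheeprl's actual `Ratio` implementation: it only shows why resuming at a large policy-step count with a freshly initialized counter (empty buffer, stale `learning_starts`) yields one enormous batch of gradient steps.

```python
class ReplayRatioTracker:
    """Hypothetical sketch (not sheeprl's `Ratio` class): returns how many
    gradient steps to run so that gradient_steps / policy_steps stays close
    to the target replay ratio."""

    def __init__(self, ratio: float, learning_starts: int = 0):
        self._ratio = ratio
        self._learning_starts = learning_starts  # pre-fill steps, excluded from the ratio
        self._gradient_steps = 0  # gradient steps performed so far

    def __call__(self, policy_steps: int) -> int:
        # Pre-fill steps do not count towards the replay ratio.
        trainable_steps = max(0, policy_steps - self._learning_starts)
        expected = int(trainable_steps * self._ratio)
        # Catch up to the expected total in a single call.
        per_rank_gradient_steps = max(0, expected - self._gradient_steps)
        self._gradient_steps = expected
        return per_rank_gradient_steps


# Failure mode before this fix: resuming at, say, policy step 1_000_000 with a
# freshly initialized tracker asks for roughly ratio * 1e6 gradient steps in a
# single training iteration, which can OOM or stall training.
tracker = ReplayRatioTracker(ratio=0.5, learning_starts=1024)
print(tracker(1_000_000))  # ~500k gradient steps requested at once
```

With the fix, the pre-fill steps are excluded from the ratio bookkeeping and the user must pass `algo.learning_starts` explicitly when resuming (e.g. `algo.learning_starts=0` to skip the pre-fill), so the tracker never has to catch up on the whole pre-resume history in one iteration.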
Type of Change
Please select the one relevant option below:
[x] Bug fix (non-breaking change that solves an issue)
[ ] New feature (non-breaking change that adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Documentation update
[ ] Other (please describe):
Checklist
Please confirm that the following tasks have been completed:
[x] I have tested my changes locally and they work as expected. (Please describe the tests you performed.)
[x] I have added unit tests for my changes, or updated existing tests if necessary.
[x] I have updated the documentation, if applicable.
[x] I have installed pre-commit and run it locally on my code changes.
Screenshots or Visuals (Optional)
If applicable, please provide screenshots, diagrams, graphs, or videos of the changes, features or the error.
Additional Information (Optional)
Please provide any additional information that may be useful for the reviewer, such as:
Any potential risks or challenges associated with the changes.
Any instructions for testing or running the code.
Any other relevant information.
Thank you for your contribution! Once you have filled out this template, please ensure that you have assigned the appropriate reviewers and that all tests have passed.