Open kumare3 opened 1 week ago
To improve checkpointing performance in Flytekit for PyTorch, leveraging asynchronous checkpointing as described in the PyTorch blog is a viable approach. This method reduces training downtime due to checkpointing by moving the persistence step off the critical path onto CPU threads, so GPU training can continue while the checkpoint is written.
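The idea above can be illustrated with a minimal, framework-agnostic sketch. This is not Flytekit or PyTorch API — the names `snapshot_and_save` and `_persist` are hypothetical, and `pickle` stands in for `torch.save`. The only blocking work is the in-memory snapshot; durable persistence happens on a background CPU thread:

```python
import copy
import pickle
from concurrent.futures import ThreadPoolExecutor, Future

# Hypothetical helper names for illustration only; in real PyTorch this
# role is played by torch.distributed.checkpoint's async save support.
_executor = ThreadPoolExecutor(max_workers=1)

def _persist(snapshot: dict, path: str) -> str:
    # Runs on a CPU thread, off the training critical path.
    with open(path, "wb") as f:
        pickle.dump(snapshot, f)
    return path

def snapshot_and_save(state: dict, path: str) -> Future:
    # Blocking part: take an in-memory copy of the state so the
    # training loop can safely keep mutating it.
    snapshot = copy.deepcopy(state)
    # Non-blocking part: hand persistence to a background thread and
    # return a future immediately.
    return _executor.submit(_persist, snapshot, path)

# Usage sketch: training continues while the save runs.
# state = {"step": 100, "weights": [...]}
# fut = snapshot_and_save(state, "/tmp/ckpt.pkl")
# state["step"] = 101          # safe: the snapshot was already taken
# fut.result()                  # wait only when durability is required
```

The key design point, matching the linked blog post, is that the checkpoint the trainer sees as "done" is the cheap snapshot step; the expensive serialization and I/O overlap with continued training.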
Would you like more details on how to integrate this into Flytekit?
Motivation: Why do you think this is important?
When using elastic training, we can greatly improve checkpointing performance using the approach described in https://pytorch.org/blog/reducing-checkpointing-times/
Goal: What should the final outcome look like, ideally?
Checkpointing is faster, with minimal GPU idle time while checkpoints are persisted.
Describe alternatives you've considered
N/A
Propose: Link/Inline OR Additional context
No response
Are you sure this issue hasn't been raised already?
Have you read the Code of Conduct?