A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal but extensible training loop that is flexible enough to handle the majority of use cases and capable of utilizing different hardware options with no code changes required. Docs: https://pytorch-accelerated.readthedocs.io/en/latest/
The process decorators were crashing when a decorated function is called from a process that does not involve a multi-GPU configuration (e.g. when unit testing a decorated function). This is due to recent changes in `accelerate.state.AcceleratorState`, which now has a `backend` attribute that only exists in the aforementioned configuration. The same changes also introduced a `PartialState`, which behaves like the previous state in single-process settings. Switching to `PartialState` fixes the problem.
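A minimal sketch of the pattern described above. The decorator name `world_process_zero_only` and the fallback stub are assumptions for illustration, not the library's actual code; the sketch falls back to a single-process stand-in when `accelerate` is not installed so it stays self-contained:

```python
from functools import wraps

try:
    # PartialState initializes safely even outside a multi-GPU launch,
    # unlike AcceleratorState, whose `backend` attribute only exists
    # in distributed configurations.
    from accelerate import PartialState
except ImportError:
    class PartialState:  # single-process stand-in so the sketch runs without accelerate
        is_main_process = True

def world_process_zero_only(func):
    """Hypothetical process decorator: run `func` on the main process only."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        # Instantiating PartialState here does not crash in a plain
        # (non-distributed) process, e.g. inside a unit test.
        if PartialState().is_main_process:
            return func(*args, **kwargs)
        return None
    return wrapper

@world_process_zero_only
def greet():
    return "hello from the main process"
```

In a single process, `PartialState().is_main_process` is true, so calling `greet()` returns the string instead of raising the `backend`-related error.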