This PR makes it possible to load an optimizer checkpoint without automatically moving the optimizer's state to the GPU.
Some background as to why: I keep the optimizer's state on the CPU to save VRAM and manually move it to the GPU as needed. Unfortunately, `load_state_dict` moves all of the optimizer's state tensors to whatever device the model's parameters are currently on, which results in an OOM crash. So currently, before loading an optimizer checkpoint, I have to unnecessarily move my model to the CPU, call the optimizer's `load_state_dict`, and then move the model back to the GPU. With this PR I can skip this silly dance.
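For reference, the pre-PR workaround looks roughly like this minimal sketch (assuming a stock PyTorch optimizer such as `Adam`; `load_state_dict` casts floating-point state tensors to each parameter's current device, so the model has to be parked on the CPU for the duration of the load):

```python
import torch
from torch import nn

# Tiny stand-in model; in real training this lives on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(4, 4).to(device)
opt = torch.optim.Adam(model.parameters())

# Build some optimizer state and snapshot it (state ends up CPU-side
# in my setup; here it just illustrates the round trip).
model(torch.randn(2, 4, device=device)).sum().backward()
opt.step()
ckpt = opt.state_dict()

# The dance: move the model to the CPU first, so load_state_dict
# materializes the optimizer state on the CPU instead of the GPU...
model.to("cpu")
opt.load_state_dict(ckpt)
# ...then move the model back; the optimizer state stays on the CPU.
model.to(device)

exp_avg = next(iter(opt.state.values()))["exp_avg"]
print(exp_avg.device.type)  # "cpu"
```

This works because `Module.to()` moves the parameters in place, so the optimizer's `param_groups` keep referencing the same (now CPU-resident) tensors while the checkpoint is loaded.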