bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index

Add `move_to_device` kwarg to the optimizer's `load_state_dict` #1344

Closed koute closed 2 months ago

koute commented 2 months ago

This PR makes it possible to load an optimizer checkpoint without automatically moving the optimizer's state to the GPU.

Some background as to why: I'm keeping the optimizer's state on the CPU to save VRAM and I manually move it to the GPU as needed. Unfortunately, `load_state_dict` moves all of the optimizer's state tensors to whatever device the model's parameters are currently on, which results in an OOM crash. So currently, before loading an optimizer checkpoint, I have to unnecessarily move my model to the CPU, call the optimizer's `load_state_dict`, and then move the model back to the GPU. With this PR I can skip this silly dance.
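For illustration, a minimal usage sketch of the new kwarg (assuming `move_to_device` defaults to `True` so existing behavior is preserved; the optimizer class, tensor shapes, and file names below are illustrative, not taken from the PR):

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

# Checkpoint saved earlier, e.g. via torch.save(optimizer.state_dict(), "optim.pt").
# map_location="cpu" keeps the loaded tensors on the CPU.
state = torch.load("optim.pt", map_location="cpu")

# Without this PR: load_state_dict moves every state tensor to the device of the
# corresponding model parameter (here the GPU), which can OOM when the optimizer
# state is intentionally kept on the CPU.
# With this PR: move_to_device=False leaves the loaded tensors where they are.
optimizer.load_state_dict(state, move_to_device=False)
```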

github-actions[bot] commented 2 months ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.