huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Fix `align_module_device`, ensure only cpu tensors for `get_state_dict_offloaded_model` #3217

Closed kylesayrs closed 2 weeks ago

kylesayrs commented 3 weeks ago

Background

```
tests/test_modeling_utils.py:808
    state_dict = get_state_dict_offloaded_model(model)
src/accelerate/utils/modeling.py:1532: in get_state_dict_offloaded_model
    with align_module_device(module, "cpu"):
/usr/lib/python3.10/contextlib.py:135: in __enter__
    return next(self.gen)
src/accelerate/utils/modeling.py:1929: in align_module_device
    set_module_tensor_to_device(module, name, execution_device)
ValueError: weight is on the meta device, we need a `value` to put in on cpu.
```
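For context, `cpu_offload` replaces module parameters with meta-device tensors, which carry shape and dtype but no storage. A minimal sketch in plain PyTorch (illustrative only, not the accelerate internals) of why such a tensor cannot be moved to CPU without supplying a `value`:

```python
import torch

# A meta-device tensor records shape and dtype but has no underlying storage.
weight = torch.empty(4, 4, device="meta")
print(weight.device)  # meta
print(weight.shape)   # torch.Size([4, 4])

# There is no data to copy out, so materializing it on CPU fails unless a
# real value is provided from elsewhere (e.g. an offload map), which is why
# set_module_tensor_to_device demands a `value` here.
try:
    weight.to("cpu")
except Exception as exc:
    print(f"moving a meta tensor raises: {type(exc).__name__}")
```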

Purpose

Changes

Testing

test_e2e.py

```python3
from transformers import AutoModelForCausalLM

from accelerate import cpu_offload
from accelerate.utils.modeling import get_state_dict_offloaded_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
cpu_offload(model)
state_dict = get_state_dict_offloaded_model(model)
```
HuggingFaceDocBuilderDev commented 3 weeks ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.