huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Fix get_backend bug and add clear_device_cache function #2857

Open NurmaU opened 2 weeks ago

NurmaU commented 2 weeks ago

What does this PR do?

This PR introduces two updates:

  1. Adds a new `clear_device_cache` function in `memory.py` to replace repetitive cache-clearing code. It also adds a check for MPS devices in `find_executable_batch_size`. This change was exercised with `tests/test_memory_utils.py`, which itself contained a bug.
  2. Fixes a bug in `get_backend` where the function's third output must be a callable object, not the result of calling it. This bug caused failures in `tests/test_memory_utils.py` on macOS.
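To illustrate the first update, here is a minimal sketch of what a `clear_device_cache` helper consolidating the repeated cache-clearing calls might look like. The function name comes from the PR; the body, the `garbage_collection` parameter, and the backend coverage (CUDA and MPS only) are assumptions for illustration, not accelerate's actual implementation:

```python
import gc

try:
    import torch
except ImportError:  # let the sketch run even without PyTorch installed
    torch = None


def clear_device_cache(garbage_collection: bool = False) -> None:
    """Free cached accelerator memory on whichever backend is available.

    Hedged sketch of the helper the PR adds to memory.py; the real version
    may cover additional backends (e.g. XPU, NPU).
    """
    if garbage_collection:
        gc.collect()
    if torch is None:
        return
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        # The MPS branch is the device check the PR adds for macOS.
        torch.mps.empty_cache()
```

Centralizing this in one helper means callers such as `find_executable_batch_size` no longer need per-backend `empty_cache` boilerplate.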
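The second update is easiest to see with a small stand-alone sketch of the bug class: returning the *result* of a memory-query function instead of the function itself. The names below are illustrative stubs, not accelerate's actual `get_backend` code:

```python
def _memory_allocated() -> int:
    # Stand-in for a backend memory-query function such as
    # torch.mps.current_allocated_memory (hypothetical here).
    return 0


def get_backend_buggy():
    # Bug: the trailing () invokes the function, so the third element
    # of the tuple is an int, and callers that expect a callable fail.
    return "mps", 1, _memory_allocated()


def get_backend_fixed():
    # Fix: return the function object itself so callers can invoke it
    # later, e.g. to sample memory usage at several points in time.
    return "mps", 1, _memory_allocated


_, _, mem_fn = get_backend_fixed()
assert callable(mem_fn)
```

On macOS the buggy variant would surface exactly as the PR describes: test code calling `mem_fn()` raises a `TypeError` because an `int` is not callable.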

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

HuggingFaceDocBuilderDev commented 2 weeks ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.