huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

Drop torch re-imports in npu and mlu paths #2856

Closed dvrogozh closed 2 weeks ago

dvrogozh commented 2 weeks ago

This is a follow-up to clean up the code in the npu and mlu paths. Noted here: https://github.com/huggingface/accelerate/pull/2825#discussion_r1638310778

`import torch` is already done at file level, so the local re-imports are not needed: https://github.com/huggingface/accelerate/blob/3b5a00e048f4393398d8ea8c4f468857f595f039/src/accelerate/utils/imports.py#L21
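The cleanup pattern can be sketched as follows. This is an illustrative example only, not the actual accelerate code: it uses the standard-library `os` module as a stand-in for `torch` so it runs without PyTorch installed, and the function names are hypothetical.

```python
# `os` stands in for `torch`: the module is imported once at file level,
# matching the module-level `import torch` in accelerate/utils/imports.py.
import os


def check_feature_before():
    # Redundant local re-import: shadows the module-level import.
    # This is the kind of line the PR removes.
    import os  # noqa: F811
    return hasattr(os, "getcwd")


def check_feature_after():
    # After the cleanup: rely on the module-level import directly.
    return hasattr(os, "getcwd")
```

Both functions behave identically; the re-import adds no functionality and only obscures where the module actually comes from.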

CAVEAT: I don't have the hardware to verify the npu and mlu paths with tests.

CC: @SunMarc, @muellerzr

HuggingFaceDocBuilderDev commented 2 weeks ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.