huggingface / accelerate

🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
https://huggingface.co/docs/accelerate
Apache License 2.0

add `require_triton` and enable `test_dynamo` to work on xpu #2878

Open · faaany opened this pull request 1 week ago

faaany commented 1 week ago

What does this PR do?

`test_dynamo` can run on XPU, but it currently fails because it requires an extra installation of the triton library. This PR adds a `require_triton` test marker and makes `test_dynamo` device-agnostic.
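For reference, skip markers in `accelerate.test_utils` are typically thin wrappers around `unittest.skipUnless`. A minimal sketch of what such a marker could look like (the `is_triton_available` helper name here is illustrative, not necessarily what the PR implements):

```python
import importlib.util
import unittest


def is_triton_available():
    # Detect triton without importing it, so the check is cheap and safe on any device.
    return importlib.util.find_spec("triton") is not None


def require_triton(test_case):
    # Skip the decorated test when triton is not installed
    # (e.g. XPU or CPU-only torch installs that do not ship it).
    return unittest.skipUnless(is_triton_available(), "test requires the triton library")(test_case)
```

A test such as `test_dynamo` would then be decorated with `@require_triton` and skipped cleanly on machines without triton instead of erroring.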

faaany commented 6 days ago

> Hmm, I always thought that triton comes with torch. I just created a test env and ran pip install torch, and indeed triton was installed. Under what circumstances can it be missing? Is it dependent on the device?

Yes, it depends on the device. For the PyTorch CUDA distribution, triton is installed by default, but for XPU and CPU it will not be installed by default (at least for now); a quick way to check this for a given install is sketched after the list below. So if we think about the pros and cons of adding this marker:

Pros:

Cons:
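As a quick way to confirm whether the installed torch wheel declares triton as a dependency in a given environment (a sketch, assuming a standard pip install of `torch`):

```python
from importlib import metadata

# List the requirements declared by the installed torch wheel and filter for triton.
# CUDA wheels typically declare triton (or pytorch-triton); CPU and XPU wheels may not.
requires = metadata.requires("torch") or []
print([req for req in requires if "triton" in req.lower()])
```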

faaany commented 6 days ago

Just let me know how you decide; I will update my code accordingly. Thanks so much for the review! @BenjaminBossan @SunMarc

BenjaminBossan commented 5 days ago

Thanks for providing further information on the triton dependency. In that case, I agree that adding an explicit check is fine.

As to whether we should potentially break the tests for other non-CUDA devices: It's hard to say if this would be a good thing or not, as we don't know how that will affect the corresponding maintainers. Probably it's best for Zach to judge when he's back.

HuggingFaceDocBuilderDev commented 38 minutes ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.