pytorch / torchtune

A Native-PyTorch Library for LLM Fine-tuning
https://pytorch.org/torchtune/main/
BSD 3-Clause "New" or "Revised" License

How to support Huawei Ascend NPU device #1006

Open apachemycat opened 1 month ago

apachemycat commented 1 month ago

Very good project! I ran it successfully on an A100 card. Now I am trying to run it on a Huawei Ascend NPU device (https://github.com/Ascend/pytorch):

```python
import torch
import torch_npu

print("huawei Ascend npu runtime test")
x = torch.randn(2, 2).npu()
y = torch.randn(2, 2).npu()
z = x.mm(y)
```

How can I modify the torchtune code to test and run it on an Ascend NPU? I can contribute this feature to torchtune.
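One plausible starting point is a sketch like the following, which extends the device-string validation before `torch.device()` is called. This is illustrative only, not torchtune's actual API: the names `SUPPORTED_DEVICE_TYPES` and `resolve_device` are hypothetical, and the real change would go into `torchtune/utils/_device.py` (`get_device`).

```python
# Hypothetical sketch of accepting "npu" as a device type before handing
# the string to torch.device(). Names here are illustrative, not torchtune's.
SUPPORTED_DEVICE_TYPES = ("cpu", "cuda", "npu")


def resolve_device(device: str) -> str:
    """Validate a device string such as "cuda:0" or "npu:0"."""
    device_type = device.split(":")[0]
    if device_type not in SUPPORTED_DEVICE_TYPES:
        raise ValueError(f"Unsupported device type: {device_type}")
    if device_type == "npu":
        # torch_npu must be imported first so that the "npu" backend is
        # registered with PyTorch (assumes Ascend's torch_npu is installed).
        import torch_npu  # noqa: F401
    return device
```

A caller would then do `torch.device(resolve_device(cfg.device))`, so the Ascend runtime is only imported when an NPU is actually requested.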

```
    sys.exit(recipe_main())
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torchtune/config/_parse.py", line 50, in wrapper
    sys.exit(recipe_main(conf))
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/recipes/lora_finetune_single_device.py", line 503, in recipe_main
    recipe = LoRAFinetuneRecipeSingleDevice(cfg=cfg)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/recipes/lora_finetune_single_device.py", line 97, in __init__
    self._device = utils.get_device(device=cfg.device)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torchtune/utils/_device.py", line 115, in get_device
    device = torch.device(device)
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: npu
```
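For context, the `RuntimeError` in the traceback occurs because stock PyTorch does not know the `npu` device type: out-of-tree backends such as `torch_npu` hook into the `privateuseone` slot listed in the error message, and that registration only happens once `torch_npu` has been imported. A minimal illustration (assuming only stock PyTorch is installed, without `torch_npu`):

```python
import torch

# Without torch_npu imported, "npu" is not a registered device type,
# so constructing the device raises the RuntimeError shown above.
try:
    torch.device("npu")
    print("npu backend is registered")
except RuntimeError as err:
    print("npu backend not registered:", err)

# torch_npu (https://github.com/Ascend/pytorch) registers itself through
# PyTorch's PrivateUse1 extension mechanism at import time, after which
# torch.device("npu") would succeed on an Ascend machine.
```

So any torchtune support for Ascend would need to ensure `import torch_npu` runs before the device string reaches `torch.device()`.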

joecummings commented 1 month ago

Hi @apachemycat!

Currently, we do not support Huawei devices. If this is something that you think is important for your use case and the community as a whole, I encourage you to submit some code and we'd be happy to take a look!