Open apachemycat opened 1 month ago
Hi @apachemycat!
Currently, we do not support Huawei devices. If this is something that you think is important for your use case and the community as a whole, I encourage you to submit some code and we'd be happy to take a look!
Very good project! I succeeded in running it on an A100 card. Now I am trying to run it on a Huawei Ascend NPU device: https://github.com/Ascend/pytorch
```python
import torch
import torch_npu

print("huawei Ascend npu runtime test")
x = torch.randn(2, 2).npu()
y = torch.randn(2, 2).npu()
z = x.mm(y)
```
How can I modify the torchtune code to test and run it on an Ascend NPU? I can contribute this feature to torchtune.
```
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torchtune/config/_parse.py", line 50, in wrapper
    sys.exit(recipe_main(conf))
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/recipes/lora_finetune_single_device.py", line 503, in recipe_main
    recipe = LoRAFinetuneRecipeSingleDevice(cfg=cfg)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/recipes/lora_finetune_single_device.py", line 97, in __init__
    self._device = utils.get_device(device=cfg.device)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/torchtune/utils/_device.py", line 115, in get_device
    device = torch.device(device)
RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: npu
```
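The failure happens because `torch.device(device)` is called before `torch_npu` has registered the `npu` backend, so the device string is rejected. Below is a minimal, hypothetical sketch (in plain Python, not torchtune's actual API) of the kind of pluggable device-type check a contribution might add: `register_device_type` and `resolve_device` are illustrative names, and the set of in-tree device types is abbreviated.

```python
# Hypothetical sketch of extensible device-string resolution.
# These names are illustrative; torchtune's get_device works differently
# (it delegates to torch.device, which is why "npu" is rejected here).

SUPPORTED_DEVICE_TYPES = {"cpu", "cuda", "mps"}  # abbreviated in-tree defaults


def register_device_type(name: str) -> None:
    """Register an out-of-tree backend, e.g. after `import torch_npu`."""
    SUPPORTED_DEVICE_TYPES.add(name)


def resolve_device(device: str) -> str:
    """Validate a device string such as "cuda:0" against known backends."""
    dev_type = device.split(":", 1)[0]
    if dev_type not in SUPPORTED_DEVICE_TYPES:
        raise RuntimeError(f"unsupported device type: {dev_type}")
    return device


# Once the npu backend is registered, "npu:0" resolves instead of raising.
register_device_type("npu")
print(resolve_device("npu:0"))  # prints npu:0
```

In the real codebase the equivalent change would presumably need to import `torch_npu` (when available) before any `torch.device(...)` call, so that PyTorch itself accepts the `npu` device string.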