DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
RuntimeError: Could not run 'aten::normal_' with arguments from the 'DML' backend. #337
>>> import torch
>>> t1 = torch.Tensor([[1., 2.], [3., 4.]]).to("dml") # works fine
>>> t2 = torch.randn_like(t1) # raises RuntimeError
ErrorMessage:
RuntimeError: Could not run 'aten::normal_' with arguments from the 'DML' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::normal_' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
CPU: registered at /home/vsts/work/1/s/build/aten/src/ATen/RegisterCPU.cpp:5926 [kernel]
BackendSelect: fallthrough registered at /home/vsts/work/1/s/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: fallthrough registered at /home/vsts/work/1/s/aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
AutogradOther: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradCPU: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradCUDA: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradXLA: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradNestedTensor: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradPrivateUse1: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradPrivateUse2: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
AutogradPrivateUse3: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/VariableType_4.cpp:8795 [autograd kernel]
Tracer: registered at /home/vsts/work/1/s/torch/csrc/autograd/generated/TraceType_4.cpp:10651 [kernel]
Autocast: fallthrough registered at /home/vsts/work/1/s/aten/src/ATen/autocast_mode.cpp:250 [backend fallback]
Batched: registered at /home/vsts/work/1/s/aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: registered at /home/vsts/work/1/s/aten/src/ATen/VmapModeRegistrations.cpp:37 [kernel]
torch.gather hits the same problem.
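For example, something along these lines hits it as well; this is only a minimal sketch following the same pattern as the randn_like repro above (the index tensor and the dim argument are illustrative, not from the original code):

>>> import torch
>>> t1 = torch.Tensor([[1., 2.], [3., 4.]]).to("dml")
>>> idx = torch.tensor([[0, 0], [1, 0]]).to("dml")
>>> torch.gather(t1, 1, idx) # raises a similar RuntimeError about a missing DML kernel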
Expectation

If I try it like this, it doesn't raise an error:
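Roughly, a sketch along these lines, assuming the usual workaround of generating the random values on the CPU first and only then moving the result to the DML device:

>>> import torch
>>> t1 = torch.Tensor([[1., 2.], [3., 4.]]).to("dml")
>>> t2 = torch.randn(t1.shape).to("dml") # random values are generated on a CPU tensor, so aten::normal_ never dispatches to DML; only the copy runs on "dml"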