Closed: LucaRo29 closed this issue 1 year ago.
Hi @LucaRo29, thanks for reaching out. We are currently working on support for `aten::_softmax_backward_data`, which will be available in our next release, coming very soon.
`aten::_softmax_backward_data` has been implemented in the latest torch-directml; please try it out.
Even though `aten::_softmax_backward_data` is apparently supported, I am getting the runtime error below with this code (snippet truncated as posted):

```python
def train(model, train_dataloader, loss_function, optimizer):
```
```
RuntimeError: Could not run 'aten::_softmax_backward_data' with arguments from the 'DML' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_softmax_backward_data' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

CPU: registered at D:\a_work\1\s\pytorch-directml\build\aten\src\ATen\RegisterCPU.cpp:5926 [kernel]
BackendSelect: fallthrough registered at D:\a_work\1\s\pytorch-directml\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at D:\a_work\1\s\pytorch-directml\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradCPU: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradCUDA: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradXLA: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradNestedTensor: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradPrivateUse1: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradPrivateUse2: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
AutogradPrivateUse3: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\VariableType_1.cpp:9683 [autograd kernel]
Tracer: registered at D:\a_work\1\s\pytorch-directml\torch\csrc\autograd\generated\TraceType_1.cpp:11324 [kernel]
Autocast: fallthrough registered at D:\a_work\1\s\pytorch-directml\aten\src\ATen\autocast_mode.cpp:250 [backend fallback]
Batched: registered at D:\a_work\1\s\pytorch-directml\aten\src\ATen\BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at D:\a_work\1\s\pytorch-directml\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
```
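One thing worth ruling out before re-testing (a sketch, not from this thread): the build paths in the traceback point at the older `pytorch-directml` fork, so an environment may simply still be running a stale install. A small, hypothetical stdlib-only helper to confirm which torch-directml release is actually installed:

```python
# Sketch (assumption: the operator fix ships only in newer torch-directml
# releases, so a missing or stale install would explain the DML backend error).
from importlib import metadata


def directml_version():
    """Return the installed torch-directml version string, or None if absent."""
    try:
        return metadata.version("torch-directml")
    except metadata.PackageNotFoundError:
        return None


if __name__ == "__main__":
    print(directml_version())
```

If this prints `None` or an old version, `pip install --upgrade torch-directml` (and restarting the interpreter) is the first thing to try before assuming the operator is still unsupported.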