mlverse / torch

R Interface to Torch
https://torch.mlverse.org

`nnf_prelu()` fails on MPS device with `Error Placeholder storage has not been allocated on MPS device!` #1128

Closed cregouby closed 6 months ago

cregouby commented 6 months ago

Current situation

Applying torch::nnf_prelu() to a tensor hosted on an MPS device fails with

#> Error in (function (self, weight) : Placeholder storage has not been allocated on MPS device!

whenever the weight argument is not a tensor on the same device.

ReprEx

library(torch)
x <- torch::torch_randn(2, 2)$to(device = "mps")
x
#> torch_tensor
#>  0.1284  0.4025
#>  1.6335 -0.8445
#> [ MPSFloatType{2,2} ]
# works when the PReLU weight is a tensor on the same MPS device
torch::nnf_prelu(x, weight = torch::torch_tensor(0.25)$to(device = "mps"))
#> torch_tensor
#>  0.1284  0.4025
#>  1.6335 -0.2111
#> [ MPSFloatType{2,2} ]
# fails when the PReLU weight is a plain numeric value
torch::nnf_prelu(x, weight = 0.25)
#> Error in (function (self, weight) : Placeholder storage has not been allocated on MPS device!
#> Exception raised from Placeholder at /Users/dfalbel/Documents/actions-runner/mlverse-m1/_work/libtorch-mac-m1/libtorch-mac-m1/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:263 (most recent call first):
#> frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 188 (0x105de4958 in libc10.dylib)
#> frame #1: at::native::mps::Placeholder::Placeholder(MPSGraphTensor*, at::Tensor const&, NSArray<NSNumber*>*, bool, MPSDataType) + 1336 (0x14ca3a630 in libtorch_cpu.dylib)
#> frame #2: at::native::prelu_mps(at::Tensor const&, at::Tensor const&) + 748 (0x14ca48e80 in libtorch_cpu.dylib)
#> frame #3: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&), &torch::autograd::VariableType::(anonymous namespace)::_prelu_kernel(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) + 1312 (0x14b1b86d0 in libtorch_cpu.dylib)
#> frame #4: at::_ops::_prelu_kernel::call(at::Tensor const&, at::Tensor const&) + 284 (0x149512f98 in libtorch_cpu.dylib)
#> frame #5: at::native::prelu(at::Tensor const&, at::Tensor const&) + 1640 (0x14882b72c in libtorch_cpu.dylib)
#> frame #6: at::_ops::prelu::call(at::Tensor const&, at::Tensor const&) + 284 (0x149190b2c in libtorch_cpu.dylib)
#> frame #7: at::prelu(at::Tensor const&, at::Tensor const&) + 40 (0x13da1b518 in liblantern.dylib)
#> frame #8: _lantern_prelu_tensor_tensor + 320 (0x13da1aee4 in liblantern.dylib)
#> frame #9: cpp_torch_namespace_prelu_self_Tensor_weight_Tensor(XPtrTorchTensor, XPtrTorchTensor) + 76 (0x12c65e80c in torchpkg.so)
#> frame #10: _torch_cpp_torch_namespace_prelu_self_Tensor_weight_Tensor + 340 (0x12c2091d4 in torchpkg.so)
#> frame #11: R_doDotCall + 268 (0x1030bc30c in libR.dylib)
#> frame #12: bcEval + 101932 (0x10310452c in libR.dylib)
#> frame #13: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #14: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #15: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #16: Rf_eval + 1308 (0x1030eb35c in libR.dylib)
#> frame #17: do_docall + 644 (0x10308a644 in libR.dylib)
#> frame #18: bcEval + 29540 (0x1030f2a64 in libR.dylib)
#> frame #19: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #20: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #21: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #22: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #23: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #24: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #25: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #26: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #27: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #28: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #29: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #30: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #31: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #32: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #33: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #34: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #35: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #36: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #37: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #38: Rf_eval + 1308 (0x1030eb35c in libR.dylib)
#> frame #39: do_eval + 1396 (0x10310be34 in libR.dylib)
#> frame #40: bcEval + 29540 (0x1030f2a64 in libR.dylib)
#> frame #41: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #42: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #43: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #44: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #45: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #46: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #47: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #48: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #49: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #50: forcePromise + 164 (0x103105ca4 in libR.dylib)
#> frame #51: Rf_eval + 728 (0x1030eb118 in libR.dylib)
#> frame #52: do_withVisible + 64 (0x10310c1c0 in libR.dylib)
#> frame #53: do_internal + 400 (0x103152f10 in libR.dylib)
#> frame #54: bcEval + 30012 (0x1030f2c3c in libR.dylib)
#> frame #55: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #56: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #57: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #58: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #59: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #60: forcePromise + 164 (0x103105ca4 in libR.dylib)
#> frame #61: getvar + 688 (0x103112af0 in libR.dylib)
#> frame #62: bcEval + 15992 (0x1030ef578 in libR.dylib)
#> frame #63: Rf_eval + 584 (0x1030eb088 in libR.dylib)
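# also fails when the weight is a tensor left on the CPU (the default device)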
torch::nnf_prelu(x, weight = torch::torch_tensor(0.25))
#> Error in (function (self, weight) : Placeholder storage has not been allocated on MPS device!
#> Exception raised from Placeholder at /Users/dfalbel/Documents/actions-runner/mlverse-m1/_work/libtorch-mac-m1/libtorch-mac-m1/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:263 (most recent call first):
#> frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 188 (0x105de4958 in libc10.dylib)
#> frame #1: at::native::mps::Placeholder::Placeholder(MPSGraphTensor*, at::Tensor const&, NSArray<NSNumber*>*, bool, MPSDataType) + 1336 (0x14ca3a630 in libtorch_cpu.dylib)
#> frame #2: at::native::prelu_mps(at::Tensor const&, at::Tensor const&) + 748 (0x14ca48e80 in libtorch_cpu.dylib)
#> frame #3: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&), &torch::autograd::VariableType::(anonymous namespace)::_prelu_kernel(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) + 1312 (0x14b1b86d0 in libtorch_cpu.dylib)
#> frame #4: at::_ops::_prelu_kernel::call(at::Tensor const&, at::Tensor const&) + 284 (0x149512f98 in libtorch_cpu.dylib)
#> frame #5: at::native::prelu(at::Tensor const&, at::Tensor const&) + 1640 (0x14882b72c in libtorch_cpu.dylib)
#> frame #6: at::_ops::prelu::call(at::Tensor const&, at::Tensor const&) + 284 (0x149190b2c in libtorch_cpu.dylib)
#> frame #7: at::prelu(at::Tensor const&, at::Tensor const&) + 40 (0x13da1b518 in liblantern.dylib)
#> frame #8: _lantern_prelu_tensor_tensor + 320 (0x13da1aee4 in liblantern.dylib)
#> frame #9: cpp_torch_namespace_prelu_self_Tensor_weight_Tensor(XPtrTorchTensor, XPtrTorchTensor) + 76 (0x12c65e80c in torchpkg.so)
#> frame #10: _torch_cpp_torch_namespace_prelu_self_Tensor_weight_Tensor + 340 (0x12c2091d4 in torchpkg.so)
#> frame #11: R_doDotCall + 268 (0x1030bc30c in libR.dylib)
#> frame #12: bcEval + 101932 (0x10310452c in libR.dylib)
#> frame #13: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #14: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #15: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #16: Rf_eval + 1308 (0x1030eb35c in libR.dylib)
#> frame #17: do_docall + 644 (0x10308a644 in libR.dylib)
#> frame #18: bcEval + 29540 (0x1030f2a64 in libR.dylib)
#> frame #19: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #20: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #21: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #22: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #23: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #24: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #25: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #26: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #27: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #28: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #29: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #30: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #31: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #32: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #33: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #34: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #35: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #36: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #37: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #38: Rf_eval + 1308 (0x1030eb35c in libR.dylib)
#> frame #39: do_eval + 1396 (0x10310be34 in libR.dylib)
#> frame #40: bcEval + 29540 (0x1030f2a64 in libR.dylib)
#> frame #41: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #42: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #43: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #44: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #45: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #46: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #47: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #48: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #49: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #50: forcePromise + 164 (0x103105ca4 in libR.dylib)
#> frame #51: Rf_eval + 728 (0x1030eb118 in libR.dylib)
#> frame #52: do_withVisible + 64 (0x10310c1c0 in libR.dylib)
#> frame #53: do_internal + 400 (0x103152f10 in libR.dylib)
#> frame #54: bcEval + 30012 (0x1030f2c3c in libR.dylib)
#> frame #55: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #56: R_execClosure + 3084 (0x103107d0c in libR.dylib)
#> frame #57: Rf_applyClosure + 524 (0x10310658c in libR.dylib)
#> frame #58: bcEval + 27460 (0x1030f2244 in libR.dylib)
#> frame #59: Rf_eval + 584 (0x1030eb088 in libR.dylib)
#> frame #60: forcePromise + 164 (0x103105ca4 in libR.dylib)
#> frame #61: getvar + 688 (0x103112af0 in libR.dylib)
#> frame #62: bcEval + 15992 (0x1030ef578 in libR.dylib)
#> frame #63: Rf_eval + 584 (0x1030eb088 in libR.dylib)

Created on 2023-12-29 with reprex v2.0.2

cregouby commented 6 months ago

This is an upstream pytorch/pytorch issue.
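Until this is resolved upstream, a possible workaround, sketched below from the working call in the reprex (a weight tensor already placed on the MPS device), is to move the weight onto the input's device before calling nnf_prelu(). The prelu_on_device() helper name is only for illustration and is not part of torch:

library(torch)

# Illustrative helper (not part of torch): place the PReLU weight on the
# same device as the input before dispatching to nnf_prelu()
prelu_on_device <- function(x, weight = 0.25) {
  w <- torch_tensor(weight)$to(device = x$device)
  nnf_prelu(x, weight = w)
}

x <- torch_randn(2, 2)$to(device = "mps")
prelu_on_device(x)
# equivalent to the working call above, since the weight tensor now lives on MPS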