Closed: ZzEeKkAa closed this issue 1 month ago
Hi @fengyuan14,
since `__rshift__.Scalar` is already in the list of operations that fall back to the CPU, does it make sense to add `__lshift__.Scalar` and its relatives to that list as well, to speed up getting a working, albeit slow, version of these operations? https://github.com/intel/torch-xpu-ops/blob/35bea25e2e09b92067349a75cd0858a485c444fe/src/ATen/native/xpu/XPUFallback.template#L243
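For illustration, here is a rough Python-level equivalent of what such a fallback-list entry achieves, using `torch.library` to route the op through the CPU kernel. This is only a sketch of the idea, not the repo's actual mechanism (which is registered in C++ via `XPUFallback.template`), and the helper names are made up:

```python
import torch
from torch.library import Library

# Hypothetical Python-level stand-in for a fallback-list entry:
# override aten::__lshift__.Scalar under the XPU dispatch key so it
# computes on CPU and copies the result back.
_aten_overrides = Library("aten", "IMPL")

def _lshift_scalar_via_cpu(self, other):
    # Run the existing CPU kernel, then return to the source device.
    return (self.cpu() << other).to(self.device)

_aten_overrides.impl("__lshift__.Scalar", _lshift_scalar_via_cpu, "XPU")
```

The real fallback list gets the same effect generically for every listed operator through PyTorch's boxed CPU fallback, so adding an entry there is a one-line change.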
@anmyachev I've just checked your proposal, and it indeed resolves the problem. However, native support is preferred; this is just a working stopgap.
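For anyone else hitting this before native support lands, the environment-variable workaround looks roughly like this (a sketch, assuming an XPU-enabled PyTorch build; the variable is presumably read at startup, so set it before importing torch):

```python
import os

# Ask PyTorch to fall back to CPU kernels for ops that have no
# native XPU implementation yet (slow, but functional).
os.environ["PYTORCH_ENABLE_XPU_FALLBACK"] = "1"

import torch

x = torch.tensor([1, 2, 4], device="xpu")
print(x << 1)  # aten::__lshift__.Scalar now runs via the CPU fallback
```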
Hi, please let me clarify our consideration. The operator coverage plan for PT 2.4/PT 2.5 is decided by the scope of PyTorch usages we commit to support.
The operators in this issue are not in our existing plan. However, the plan is changeable: we listen to the community for urgent requirements and, balancing our limited resources, pick operators up at an appropriate moment. Thanks for your input. For the operators listed here, we will try to include them in PT 2.5.
@fengyuan14 thank you for such a detailed and quick response!
Closing it, since it was implemented in #688
🚀 The feature, motivation and pitch
I'm getting this error while running Intel's Triton unit tests with upstream PyTorch.
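A minimal trigger of the same failure, for context (a sketch assuming an XPU-enabled build with a visible XPU device; a tensor-scalar left shift dispatches to aten::__lshift__.Scalar):

```python
import torch

x = torch.tensor([1, 2, 4], device="xpu")
y = x << 1  # dispatches aten::__lshift__.Scalar; raises without an XPU kernel
```

Running something like this produces: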
```
NotImplementedError: The operator 'aten::__lshift__.Scalar' is not currently implemented for the XPU device. Please open a feature on https://github.com/intel/torch-xpu-ops/issues. You can set the environment variable PYTORCH_ENABLE_XPU_FALLBACK=1 to use the CPU implementation as a fallback for XPU unimplemented operators. WARNING: this will bring unexpected performance compared with running natively on XPU.
```

Alternatives
No response
Additional context
No response