intel / torch-xpu-ops


Complement operator variants when implementing a required operator #197

Open fengyuan14 opened 2 months ago

fengyuan14 commented 2 months ago

🚀 The feature, motivation and pitch

For the staging goal of PyTorch 2.5, we have collected 484 operators that are required to work with the XPU backend. Some of them need an XPU-specific implementation. When we provide an XPU implementation for an ATen operator, we should register all variants of the operator, such as xxx.out, xxx.Tensor, xxx.Scalar, xxx_, and so on (see the registration sketch after the list below). Following this rule:

  1. We avoid having to come back later to fill in missing registrations; adding all variants up front is cheap.
  2. When we align with the CUDA registrations, moving in-tree will be seamless.
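
For illustration, here is a minimal sketch of what registering all variants of one operator (`add`) for the XPU dispatch key could look like via `TORCH_LIBRARY_IMPL`. The kernel functions (`add_tensor_xpu`, `add_out_xpu`, `add__xpu`) are hypothetical placeholders, not the actual torch-xpu-ops implementations:

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

namespace {

// Hypothetical XPU kernels; real implementations would launch SYCL kernels.
// Signatures follow the ATen schemas for add.Tensor, add.out, and add_.Tensor.
at::Tensor add_tensor_xpu(const at::Tensor& self, const at::Tensor& other,
                          const at::Scalar& alpha) {
  /* ... XPU kernel launch ... */
  return self;
}

at::Tensor& add_out_xpu(const at::Tensor& self, const at::Tensor& other,
                        const at::Scalar& alpha, at::Tensor& out) {
  /* ... XPU kernel launch writing into out ... */
  return out;
}

at::Tensor& add__xpu(at::Tensor& self, const at::Tensor& other,
                     const at::Scalar& alpha) {
  /* ... in-place XPU kernel launch ... */
  return self;
}

}  // namespace

// Register every variant at once, so none is left to backfill later.
TORCH_LIBRARY_IMPL(aten, XPU, m) {
  m.impl("add.Tensor",  TORCH_FN(add_tensor_xpu));
  m.impl("add.out",     TORCH_FN(add_out_xpu));
  m.impl("add_.Tensor", TORCH_FN(add__xpu));
}
```

With all three variants registered together, a call such as `a.add_(b)` on XPU tensors dispatches to the in-place kernel rather than falling back or raising a missing-registration error.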
chunhuanMeng commented 1 month ago

We can close this issue.