intel / torch-xpu-ops

Apache License 2.0

[Draft] Add aten::_nested_from_padded #1045

Open min-jean-cho opened 2 weeks ago

min-jean-cho commented 2 weeks ago

Hi @daisyden, could you please help port the following UTs to torch-xpu-ops? Thanks!

  * test_nested_tensor_from_padded: https://github.com/pytorch/pytorch/blob/main/test/test_nestedtensor.py#L2961-L2975
  * test_nested_tensor_from_padded_fused: https://github.com/pytorch/pytorch/blob/main/test/test_nestedtensor.py#L2977-L2991
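For context on what those UTs exercise: `aten::_nested_from_padded` converts a dense, padded tensor plus per-row lengths back into a ragged (nested) layout. The sketch below mimics that semantics with plain Python lists only; it is an illustration of the padded-to-nested round trip, not the actual PyTorch kernel or API.

```python
# Illustrative sketch only: mimics the semantics of converting a padded
# dense layout to a ragged (nested) one, and back, using plain lists.
# The real aten::_nested_from_padded operates on tensors and strides.

def nested_from_padded(padded, lengths):
    """Split each padded row back into its original ragged length.

    padded:  list of equal-length rows (dense, padded with a fill value)
    lengths: original length of each row before padding
    """
    if len(padded) != len(lengths):
        raise ValueError("one length per row is required")
    return [row[:n] for row, n in zip(padded, lengths)]

def to_padded(nested, fill=0.0):
    """Inverse direction: pad ragged rows out to the longest row."""
    width = max((len(row) for row in nested), default=0)
    return [row + [fill] * (width - len(row)) for row in nested]

if __name__ == "__main__":
    padded = [[1.0, 2.0, 3.0], [4.0, 0.0, 0.0]]
    nested = nested_from_padded(padded, lengths=[3, 1])
    print(nested)             # ragged rows: [[1.0, 2.0, 3.0], [4.0]]
    print(to_padded(nested))  # round-trips back to the padded form
```

The two referenced UTs check exactly this round trip (the `_fused` variant additionally fuses the conversion with a following projection).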

fengyuan14 commented 2 weeks ago

I think we can leave the Nested Tensor related operators aside for now:

  1. I asked the workload team: there is no usage of Nested Tensor operators in real popular workloads.
  2. We need a full plan to support Nested Tensor: 1) upstream the dispatch key first, 2) implement ~80 operators, 3) add a new test suite.
  3. Why does IPEX have the operators? It was an internal requirement to align op coverage with HPU.

Given the timeline, it is a bit tight for us to catch up with 2.6 if we need to upstream the dispatch key first. We can treat Nested Tensor support as an entire feature targeted for 2.7.

@EikanWang Please comment and decide.