Closed · lezcano closed this issue 1 year ago
Actually, in https://github.com/pytorch/pytorch/pull/84624, the dispatch was changed just for CPU, CUDA, and Meta. If `vector_norm` and `matrix_norm` are properly supported in XLA, this could also be changed for XLA:
https://github.com/pytorch/pytorch/blob/32b2d8009a072234e6ff99127b8d96b6f41d1fff/torch/functional.py#L1489
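For context, here is a simplified sketch of what that device-gated dispatch looks like: `torch.norm` forwards to the `torch.linalg` composites only for a whitelist of backends. This is illustrative only, not the exact code at the link above, which also handles dtype, layout, and `out=` arguments.

```python
import torch

# Illustrative sketch of the device-gated dispatch in torch/functional.py.
def norm_sketch(input: torch.Tensor, p="fro"):
    # Only backends known to support the new linalg kernels take the new path.
    if input.device.type in ("cpu", "cuda", "meta"):
        if p == "fro":
            # Assumes a matrix (2D) input for the 'fro' case.
            return torch.linalg.matrix_norm(input, ord="fro")
        return torch.linalg.vector_norm(input, ord=p)
    # Other backends (e.g. XLA) still go through the legacy aten::norm path.
    return torch.norm(input, p=p)
```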
Hmm, if I enable the test and leave the
`ExpectCounterNotChanged("aten::.*", cpp_test::GetIgnoredCounters());`
check, which verifies whether there is any fallback, will the test pass (I hope)?
If it passes, uncommenting the tests LGTM.
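For reference, a similar fallback check can be approximated from Python with torch_xla's metrics. This is a hedged sketch: it assumes fallbacks are recorded as counters under an `aten::` prefix, which is what the C++ check above matches against.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
t = torch.randn(2, 3, 4, device=device)

# Run the op under test, then look for any aten::* counters, which
# torch_xla bumps whenever an op falls back to the CPU implementation.
t.norm(p=1)
xm.mark_step()

fallbacks = [name for name in met.counter_names() if name.startswith("aten::")]
assert not fallbacks, f"ops fell back to CPU: {fallbacks}"
```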
But moving forward, it'd be good to have lowerings for `linalg.vector_norm` and the SVD so that we can add XLA to that better path in PyTorch core:
https://github.com/pytorch/pytorch/blob/32b2d8009a072234e6ff99127b8d96b6f41d1fff/torch/functional.py#L1489
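To make the SVD dependency concrete: the nuclear norm in that path is the sum of singular values, so a `linalg.vector_norm` lowering alone is not enough to keep the composite on the XLA device. A small equivalence check using the standard, documented PyTorch API (shown on CPU):

```python
import torch

A = torch.randn(4, 5)

# matrix_norm(..., ord='nuc') is defined as the sum of singular values,
# which is why the nuclear norm path also needs an SVD lowering.
nuc = torch.linalg.matrix_norm(A, ord="nuc")
via_svd = torch.linalg.svdvals(A).sum()
torch.testing.assert_close(nuc, via_svd)
```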
Thanks, I will mark this PR as an op lowering task and assign an owner accordingly. Maybe Meghan can take this task.
It just tests the vector 1-norm on a 3D tensor. The test that actually tests the nuclear norm is called `TestNuclearNorm`, and it's commented out.

Now, all these operations are implemented in PyTorch core in terms of https://pytorch.org/docs/stable/generated/torch.linalg.matrix_norm.html, which is itself a composite function. This was added in https://github.com/pytorch/pytorch/pull/84624.
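Concretely, the legacy entry points reduce to the `torch.linalg` composites; these are all standard, documented calls:

```python
import torch

A = torch.randn(3, 3)
v = torch.randn(2, 3, 4)  # the enabled test exercises a 3D vector 1-norm

# Frobenius and nuclear norms route through torch.linalg.matrix_norm.
torch.testing.assert_close(torch.norm(A), torch.linalg.matrix_norm(A, ord="fro"))
torch.testing.assert_close(torch.norm(A, p="nuc"), torch.linalg.matrix_norm(A, ord="nuc"))

# The vector 1-norm over all elements routes through torch.linalg.vector_norm.
torch.testing.assert_close(torch.norm(v, p=1), torch.linalg.vector_norm(v, ord=1))
```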
The ops `aten::norm`, `aten::frobenius_norm`, and `aten::nuclear_norm` are kept around for BC with JIT models from previous versions, but are not currently accessible from the Python API (at least not through documented functions).
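For completeness, those BC ops can still be reached through the dispatcher rather than a documented Python function. A hedged sketch, assuming these schemas remain registered in your build; the exact overloads available may vary across PyTorch versions:

```python
import torch

A = torch.randn(3, 3)

# Undocumented, kept only for JIT backward compatibility.
n1 = torch.ops.aten.nuclear_norm(A)
n2 = torch.ops.aten.frobenius_norm(A, [0, 1], False)  # the .dim overload
```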