zou3519 opened 3 years ago
From discussion with Horace:

- `internal_new_from_data`: allowing `data_ptr` access for a TensorWrapper that doesn't wrap a BatchedTensor sounds fine
- What does XLA do? Do they not support this?
- TODO: bring this up with the composability group somehow
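A minimal sketch of the proposed policy, in plain Python. The classes below (`BatchedTensor`, `TensorWrapper`) and the fake pointer are toy stand-ins for functorch's C++ types, not the real implementation; they only model the rule "allow `data_ptr` access unless the wrapper wraps a BatchedTensor":

```python
class BatchedTensor:
    """Toy stand-in for a vmap-batched tensor: no single data pointer exists."""
    def __init__(self, value):
        self.value = value


class TensorWrapper:
    """Toy stand-in for functorch's grad-transform wrapper."""
    def __init__(self, value):
        self.value = value

    def data_ptr(self):
        # Proposed rule: data_ptr access is fine as long as the wrapper
        # does not wrap a BatchedTensor.
        if isinstance(self.value, BatchedTensor):
            raise RuntimeError(
                "cannot access the data pointer of a wrapped BatchedTensor"
            )
        return id(self.value)  # fake pointer, for the sketch only


plain = TensorWrapper([1.0, 2.0])
assert isinstance(plain.data_ptr(), int)  # allowed: nothing batched inside

batched = TensorWrapper(BatchedTensor([1.0, 2.0]))
try:
    batched.data_ptr()
except RuntimeError as e:
    print(e)  # rejected: data access under vmap is not meaningful
```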
Currently fails for:
- Calls `internal_new_from_data`:
  - `__getitem__` (#65)
  - `__rpow__` (straight up calls `torch.tensor`)
- Data pointer accessed by a helper function (#65)
- The norm problem (#14); AKA: a CompositeImplicitAutograd op calls an "out= variant" that calls raw `native::resize_` on tensors.
- Requires an integer tensor for the "splits" argument...
- Test by uncommenting https://github.com/zou3519/functorch/blob/ae97def8eb8508418053a1a7c81371b9b44dcc3d/test/test_grad.py#L49. I haven't investigated the problems yet.
- Miscellaneous non-OpInfo problems (test_torch.py)
- Miscellaneous non-OpInfo problems (test_nn.py)
- Miscellaneous non-OpInfo problems (test_linalg.py)
- Miscellaneous non-OpInfo problems (test_tensor_creation.py)
- Miscellaneous non-OpInfo problems (test_unary_ufuncs.py)
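To make the `__rpow__` item above concrete: when `base ** tensor` is written with a plain Python number on the left, Python dispatches to `tensor.__rpow__(base)`, and an implementation that materializes the scalar via `torch.tensor` needs raw data access. The sketch below is a toy model (`FakeTensor` and the `data_accessible` flag are invented for illustration, not PyTorch APIs):

```python
class FakeTensor:
    """Toy tensor; data_accessible=False models a functorch-wrapped tensor."""
    def __init__(self, values, data_accessible=True):
        self.values = values
        self.data_accessible = data_accessible

    def _from_data(self, scalar):
        # Models torch.tensor(scalar) inside __rpow__: it requires raw
        # data access, which a wrapped tensor may not permit.
        if not self.data_accessible:
            raise RuntimeError("data access forbidden under this transform")
        return FakeTensor([scalar] * len(self.values))

    def __rpow__(self, base):
        # Python evaluates `base ** self` via this hook when `base` is a
        # plain number; the scalar is first materialized as a tensor.
        base_t = self._from_data(base)
        return FakeTensor([b ** v for b, v in zip(base_t.values, self.values)])


ok = 2 ** FakeTensor([1.0, 2.0, 3.0])
print(ok.values)  # [2.0, 4.0, 8.0]

try:
    2 ** FakeTensor([1.0, 2.0], data_accessible=False)
except RuntimeError as e:
    print(e)  # the torch.tensor-style construction is what blows up
```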
https://docs.google.com/spreadsheets/d/18sv-cKBqMGVCNdclFk5jB9LmQJGzb_eNAE9O2-oep3Q/edit?usp=sharing
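The norm problem (#14) can be sketched the same way: a composite op allocates an output and delegates to an out= variant that unconditionally does a raw in-place resize, which is exactly the operation a transform wrapper cannot tolerate. Everything below (`PlainTensor`, `WrappedTensor`, `norm`, `norm_out`) is a hypothetical toy model, not PyTorch internals:

```python
class PlainTensor:
    def __init__(self, shape):
        self.shape = shape

    def resize_(self, shape):
        self.shape = shape  # raw, in-place resize


class WrappedTensor(PlainTensor):
    """Toy TensorWrapper: a raw resize would desynchronize the wrapper
    from the tensor it wraps, so it must be rejected."""
    def resize_(self, shape):
        raise RuntimeError("cannot raw-resize a wrapped tensor")


def norm_out(x, out):
    # out= variant: unconditionally resizes its output buffer.
    out.resize_(())
    return out


def norm(x):
    # CompositeImplicitAutograd-style implementation: allocate an output,
    # then delegate to the out= variant.
    out = type(x)(x.shape)
    return norm_out(x, out)


print(norm(PlainTensor((3, 4))).shape)  # () -- plain tensors are fine

try:
    norm(WrappedTensor((3, 4)))
except RuntimeError as e:
    print(e)  # the raw resize inside the out= variant is what fails
```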