IvanYashchuk closed this 1 year ago
Not trying to block this PR.
Think about a tensor like `torch.randn(5, 5).expand((-1, -1)).expand((5, 5, 5))`.
We still have an issue with explicitly broadcast dimensions that I don't think this PR handles yet. With the API on TensorOpRecord, we can't properly pass the correct contiguity information for an expanded dimension. I'm trying to keep the contiguity information comprehensible on the Python side (i.e., not skipping contiguity for broadcast dims).
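For context, a minimal standalone sketch (plain PyTorch, independent of the TensorOpRecord API) of why an expanded dimension needs special contiguity handling: the broadcast dimension gets stride 0, so the tensor is no longer contiguous in the usual sense.

```python
import torch

# Start with an ordinary contiguous 2D tensor.
t = torch.randn(5, 5)
assert t.is_contiguous()
assert t.stride() == (5, 1)

# Explicitly broadcast a new leading dimension via expand().
# No data is copied; the new dimension has stride 0.
e = t.expand(5, 5, 5)
assert e.shape == (5, 5, 5)
assert e.stride() == (0, 5, 1)

# The expanded view is not contiguous, even though its
# underlying storage is — this is the information that has
# to be conveyed per-dimension rather than skipped.
assert not e.is_contiguous()
```

The stride-0 dimension is what a frontend has to describe explicitly; collapsing it to a single "contiguous or not" flag loses the distinction between a materialized dimension and a broadcast one.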
Wondering if you want to patch that in this PR as well? Otherwise I'll start one once this merges. cc @IvanYashchuk
Closing in favor of https://github.com/csarofeen/pytorch/pull/2561
Fixes https://github.com/csarofeen/pytorch/issues/2549.