-
I am trying to perform the following tensor operation, [torch.cumsum()](https://pytorch.org/docs/stable/generated/torch.cumsum.html), and the quickest way I can think of is to convert the tensor to a …
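For reference, a minimal sketch of calling `torch.cumsum` directly; the input values here are illustrative, not taken from the original post:

```python
import torch

# cumsum computes an inclusive prefix sum along the given dimension,
# so no conversion to a list or NumPy array is needed.
x = torch.tensor([1, 2, 3, 4])
print(torch.cumsum(x, dim=0))  # inclusive prefix sums: 1, 3, 6, 10
```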
-
In cases where the shape of a linalg operation is smaller than or equal to the minimal tile size (which is 32), the operation is left untouched. That is a problem, as our GPU pipeline expects a…
-
Found while running torchbench's `hf_Reformer` with thunderFX path (see below for the larger graph that fails).
```python
import torch
import thunder
def forward(x, double_scalar_tensor):
…
```
-
**Describe the bug**
I am unable to reshape a tensor of shape [1,256,256] to [1,256,8,32] while keeping the tensor on device.
**To Reproduce**
Steps to reproduce the behavior:
1. Checkout to branch, …
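The branch and device from the report are not shown here; a plain eager-mode sketch of the intended reshape, which is size-preserving since 256 = 8 × 32:

```python
import torch

# 1 * 256 * 256 == 1 * 256 * 8 * 32, so the element count matches.
x = torch.randn(1, 256, 256)   # on CPU here; the report keeps it on device
y = x.reshape(1, 256, 8, 32)
print(y.shape)  # torch.Size([1, 256, 8, 32])
```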
-
### What happened?
For the attached IR, I am seeing the following error:
```
error: One or more operations with large vector sizes (8192 bytes) were found:
%255 = linalg.generic {indexing_maps = [#map16, #map17…
```
-
As we found out in tenstorrent/pytorch2.0_ttnn#198, several ops produce (1, N) tensors when (N,) tensors are expected.
Affected ops:
- `ceil`
- `floor`
- `gelu`
- `rsqrt`
- `sqrt`
Spared op…
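For context, eager PyTorch preserves the input rank for these elementwise ops; a sketch of the expected `(N,)` behavior that the backend reportedly deviates from:

```python
import torch

x = torch.ones(5)                  # shape (5,)
for op in (torch.ceil, torch.floor, torch.rsqrt, torch.sqrt):
    assert op(x).shape == (5,)     # rank is preserved, not promoted to (1, 5)
assert torch.nn.functional.gelu(x).shape == (5,)
```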
-
---
### Bug Report: In-Place Operation Causes Gradient Error in `conv1d_step` Function
**Issue Description:**
While training the model, I encountered a runtime error related to gradient compu…
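The exact `conv1d_step` code is truncated above; a generic, hypothetical minimal repro (not the author's code) of how an in-place update triggers this class of autograd error:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.sigmoid(x)   # sigmoid's backward needs its own output...
y += 1                 # ...so this in-place update invalidates the saved tensor
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)  # "... has been modified by an inplace operation ..."
```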
-
### Before
```python
import pytensor.tensor as pt
# Need to
a = pt.vector("a", shape=(2, ))
b = pt.vector("b", shape=(3, ))
# a + b fails due to broadcasting
# Transpose required
result…
```
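The snippet above is truncated; the usual workaround today is inserting explicit axes so broadcasting produces an outer sum (shown here with NumPy, whose broadcasting rules pytensor follows):

```python
import numpy as np

a = np.arange(2)                  # shape (2,)
b = np.arange(3)                  # shape (3,)
# a + b fails: shapes (2,) and (3,) do not broadcast.
result = a[:, None] + b[None, :]  # explicit axes give an outer sum
print(result.shape)  # (2, 3)
```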
-
### 🚀 The feature, motivation and pitch
I have a scenario where I'm currently calling `torch.linalg.ldl_factor` on a bunch of tensors and then repeatedly calling `torch.linalg.ldl_solve` as part of…
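A sketch of the factor-once / solve-many pattern described above, using a made-up symmetric positive-definite matrix (the real tensors from the scenario are not shown):

```python
import torch

A = torch.randn(4, 4)
A = A @ A.T + 4 * torch.eye(4)           # symmetric positive-definite example
LD, pivots = torch.linalg.ldl_factor(A)  # factor once...
for _ in range(3):                       # ...then solve repeatedly
    b = torch.randn(4, 1)
    x = torch.linalg.ldl_solve(LD, pivots, b)
```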
-
I am re-implementing the well-known NeRF (https://github.com/bmild/nerf) model in candle. While working with candle, I found that the following tensor operations are missing.
* `torch.linspace`
* `torch.c…
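The list above is truncated; for reference, the `torch.linspace` semantics being requested (evenly spaced values with both endpoints included):

```python
import torch

# linspace(start, end, steps) returns `steps` evenly spaced values,
# including both endpoints.
print(torch.linspace(0, 1, steps=5))  # 0.00, 0.25, 0.50, 0.75, 1.00
```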