-
## Issue description
I have a number of classes that derive directly from `nn.Sequential`. When I `torch.compile` models containing these classes and attempt to use them in conjunction with `checkp…
-
### 🐛 Describe the bug
error:
```
assert w1.eq(w2).all(), f"{rows=}, {cols=}, {w1=}, {w2=}"
AssertionError: rows=4, cols=4, w1=tensor([[ 1.5410, -0.2934, -2.1788, 0.5684],
[-1.0845, -1…
-
### Describe the bug
Hello!
I'm using WandB within PyTorch Lightning and am experiencing a crash after a number of hours. It's hard to tell from the logs what is causing the crash, but I saw a sim…
-
### 🐛 Describe the bug
# Issue Message
```bash
$python build_triton_wheel.py --device xpu
HEAD is now at c451d567 Update bfloat16 conversion property (#1945)
Traceback (most recent call la…
-
### 🐛 Describe the bug
Hello,
I'm not sure whether it is intended, but autocast does not seem to work on the embedding module.
Below is a link to a Colab notebook that reproduces the issue:
https://…
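A minimal sketch of the behaviour described (the module shapes and the bf16 dtype are my assumptions, not taken from the notebook):

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)
lin = nn.Linear(4, 4)
idx = torch.tensor([1, 2, 3])

# Under autocast, Linear runs in the lower-precision dtype, while
# Embedding is not on autocast's cast list, so its output keeps the
# float32 weight dtype -- the mismatch the report points at.
with torch.autocast("cpu", dtype=torch.bfloat16):
    e = emb(idx)
    y = lin(e)
```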
-
### 🐛 Describe the bug
AMP dynamic shape CPP wrapper
[Benchmark table; columns: suite, name, thread, batch_size_new, speed_up_new, inductor_new, eager_new, compilati…]
-
### 🐛 Describe the bug
Hello again. While debugging NaN issues with `torch.compile`, I found that simply inheriting from `nn.Sequential` and redefining `forward` throws an error.
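A hedged repro sketch of the pattern described (the subclass body and `backend="eager"` are my illustration; on affected versions this pattern reportedly errors under compilation):

```python
import torch
import torch.nn as nn

# Inherit from nn.Sequential but override forward with an extra step,
# instead of relying on the stock Sequential.forward.
class ScaledSeq(nn.Sequential):
    def forward(self, x):
        for module in self:
            x = module(x)
        return x * 2  # redefinition beyond plain sequential chaining

model = ScaledSeq(nn.Linear(4, 4), nn.ReLU())
compiled = torch.compile(model, backend="eager")
out = compiled(torch.randn(3, 4))
```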
### Error logs
…
-
### 🐛 Describe the bug
I ran into a problem while permuting the following tensor (to convert it into a complex dtype):
```python
>>> torch.view_as_complex(torch.empty(1,0,2,100,100).permute(0,1,3,…
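# A hedged sketch of the constraint involved (assumed context, not the
# reporter's exact tensors): view_as_complex requires the last dimension
# to have size 2 and stride 1, so permuted views typically need
# .contiguous() first.
import torch

real = torch.randn(4, 100, 2)
z = torch.view_as_complex(real)  # complex64, shape (4, 100)

perm = torch.randn(1, 3, 2, 5).permute(0, 1, 3, 2)  # last dim size 2, stride != 1
z2 = torch.view_as_complex(perm.contiguous())       # works once made contiguous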
-
### 🐛 Describe the bug
My code is the following:
```python
resnet50 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_resnet50', pretrained=True)
utils = torch.hub.load('NVIDIA/DeepLear…