-
## 🐛 Bug
Similarly to (but independently of) #13179, there's a speed regression in the rich progress bar between 1.5 and 1.6.
```
time: 2.059369374997914  # v1.5.10
time: 13.186531708983239 …
```
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_se…
-
So far [train_second.py](https://github.com/yl4579/StyleTTS2/blob/main/train_second.py) only works with DataParallel (DP) but not DistributedDataParallel (DDP). One major problem with this is if we si…
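For context, a hedged sketch (not taken from train_second.py) of the minimal changes a DP script needs for DDP: initialize a process group and wrap the model in `DistributedDataParallel` (a `DistributedSampler` for the `DataLoader` would also be needed, omitted here). The model and tensor shapes below are illustrative stand-ins.

```python
# Hedged sketch, not from train_second.py: the minimal pieces a DP script
# needs for DDP are a process group and a DistributedDataParallel wrapper
# (plus a DistributedSampler for the DataLoader, omitted here).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# torchrun normally supplies RANK/WORLD_SIZE; default to a single process
# so the sketch also runs standalone on CPU with the gloo backend.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))
dist.init_process_group("gloo", rank=rank, world_size=world_size)

model = DDP(torch.nn.Linear(4, 2))  # stand-in for the StyleTTS2 models
x = torch.randn(8, 4)
loss = model(x).sum()
loss.backward()  # DDP all-reduces gradients across ranks here

grad = model.module.weight.grad
dist.destroy_process_group()
```

Launched with `torchrun --nproc_per_node=N`, each process gets its own rank and DDP keeps gradients in sync, which is the behavior DP scripts implicitly rely on `replicate`/`gather` for.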
-
### Summary
ONNX Runtime raises `[ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Pad node. Name:'_0x57e2840_n19' Status Message: Cannot use 'reflect' mode to pad dimen…
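The `reflect` failure above is likely the ONNX Runtime counterpart of a constraint PyTorch enforces as well: each reflection-padding amount must be strictly smaller than the size of the dimension being padded. A minimal PyTorch-side reproduction of that constraint (the shapes are illustrative, not from the exported model):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 3)  # last dimension has size 3

# Fine: pad amount 2 < dim size 3.
y = F.pad(x, (2, 2), mode="reflect")
print(y.shape)  # torch.Size([1, 1, 7])

# Fails: reflect padding of 3 is not smaller than the dim size 3,
# the same condition ONNX Runtime's Pad node checks at runtime.
try:
    F.pad(x, (3, 3), mode="reflect")
except RuntimeError as e:
    print("reflect pad rejected:", e)
```

If the exported graph can receive inputs smaller than its pad amounts, switching that Pad to `constant` or `replicate` mode, or guarding the input size, avoids the error.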
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1…
-
I am trying to use the Discriminator in the following way:
```
device = torch.device('cuda')
network_pkl = 'https://s3.eu-central-1.amazonaws.com/avg-projects/stylegan_xl/models/imagenet256.pkl'
…
```
-
### Bug description
When I use `self.all_gather` in a LightningModule with `strategies.DDPStrategy(static_graph=True)` for multi-node inference, the returned values are partially corrupted.
### What …