-
### 🐛 Describe the bug
With #110413, a generator can now be passed as a parameter to fake mode dispatch. However, the state of the passed generator won't change after the dispatch, as in fake mode there i…
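A minimal sketch of the reported behavior (the op, shape, and assertion below are my own illustration, not from the issue): under `FakeTensorMode`, an op that takes a `torch.Generator` leaves its state untouched, whereas eager execution would advance it.
```
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

g = torch.Generator()
state_before = g.get_state().clone()

# Under fake mode, torch.randn produces a FakeTensor; the generator is
# accepted as a parameter, but its state is not advanced by the dispatch.
with FakeTensorMode():
    torch.randn(4, generator=g)

print(torch.equal(state_before, g.get_state()))  # True: state unchanged
```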
-
### Bug description
When restarting from an existing checkpoint with a trainer that has a `max_steps` value set, the trainer does a single validation step before actually restarting the training epoc…
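A minimal sketch of a setup like the one described (the module, loaders, and checkpoint path are placeholders of mine, not from the report):
```
import torch
import lightning.pytorch as pl

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def validation_step(self, batch, batch_idx):
        self.log("val_sum", self.layer(batch).sum())

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

def loader():
    return torch.utils.data.DataLoader(torch.randn(64, 8), batch_size=8)

# First run: train up to max_steps, producing a checkpoint.
trainer = pl.Trainer(max_steps=50)
trainer.fit(TinyModel(), loader(), loader())

# Resume with a higher max_steps; the extra validation step reportedly
# happens here, before training actually continues.
trainer = pl.Trainer(max_steps=100)
trainer.fit(TinyModel(), loader(), loader(), ckpt_path="path/to/last.ckpt")  # placeholder path
```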
-
Hi!
Amazing job on the example implementations of all of these cutting-edge training features! When trying to run the following configuration, though, I ran into issues:
```
Llama 3 8B
data_pa…
```
-
### 🐛 Describe the bug
float32, static shape, default wrapper:

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | co… |
| --- | --- | --- | --- | --- | --- | --- | --- |
-
### 🐛 Describe the bug
I'm testing SD 2.1 with batch sizes in [1, 2, 4, 8]. It's fine when batch_size=1, but it fails when the batch size switches to 2.
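A hedged sketch of the kind of sweep described (the model id, prompt, and compile target are my assumptions, not from the report):
```
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.unet = torch.compile(pipe.unet)  # assuming the run compiles the UNet

for bs in [1, 2, 4, 8]:
    # reportedly fine at bs=1, fails once the batch size switches to 2
    images = pipe(["a photo of an astronaut"] * bs).images
```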
After @BoyuanFeng's fix, there is a new error:
### Err…
-
### 🐛 Describe the bug
```
from triton.testing import do_bench
import torch

size = 1 * 1024 * 1024

def bench_sum(tensor):
    return tensor.sum(dim=0)

def bench_cumsum(tensor):
    re…
```
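Presumably the functions above are timed with `do_bench`; a sketch of that driver under assumed shape and device (the 1M-element CUDA tensor is my guess from `size`, not stated in the report):
```
import torch
from triton.testing import do_bench

size = 1 * 1024 * 1024
t = torch.randn(size, device="cuda")

# do_bench runs the callable repeatedly and returns a runtime in milliseconds
ms_sum = do_bench(lambda: t.sum(dim=0))
ms_cumsum = do_bench(lambda: t.cumsum(dim=0))
print(f"sum: {ms_sum:.3f} ms, cumsum: {ms_cumsum:.3f} ms")
```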
-
### 🐛 Describe the bug
Consider the following code:
```
train_data = imagenet_efficientnet_train(
    dataset_params={
        'root': config.TRAIN_DIR,
    },
    dataloader_params=co…
```
-
### 🐛 Describe the bug
```
import torch
from torch.func import grad

def f_dict(d):
    return torch.tensor([torch.square(v).sum() for v in d.values()]).sum()

d = {'a': torch.ones(4, requir…
```
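For reference, `torch.func.grad` does accept pytree (e.g. dict) inputs; a self-contained sketch of that usage (my own variant, not the reporter's code, and it avoids rebuilding a tensor from the per-entry results):
```
import torch
from torch.func import grad

def f_dict(d):
    # summing the per-entry reductions keeps the autograd graph intact
    return sum(torch.square(v).sum() for v in d.values())

d = {'a': torch.ones(4), 'b': torch.ones(3)}
grads = grad(f_dict)(d)
print(grads['a'])  # tensor([2., 2., 2., 2.])
```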
-
### 🐛 Describe the bug
The [documentation](https://pytorch.org/docs/stable/generated/torch.logsumexp.html#torch-logsumexp) of the `dim` argument of `torch.logsumexp` says:
> dim ([int](https://docs.…
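For reference, `torch.logsumexp` in practice accepts both a single int and a tuple of ints for `dim` (standard usage below; the exact doc wording being questioned is cut off above):
```
import torch

x = torch.randn(3, 4)
print(torch.logsumexp(x, dim=0).shape)       # torch.Size([4])
print(torch.logsumexp(x, dim=(0, 1)).shape)  # torch.Size([]): a tuple of dims also works
```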
-
It seems like torch is already at 2.0, which was the reason for the `--upgrade` flag mentioned in the description. Is the `--upgrade` flag still needed?
If I do this instead: `!pip3 install torch`
I get…
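One way to decide whether the flag matters is to check the installed version first (a generic sanity check, not from this thread):
```
import torch

# If this already reports 2.x, `pip3 install --upgrade torch` only changes
# anything when a newer release than the installed one exists.
print(torch.__version__)
```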