-
I wanted to install modin with the ray engine using pip: `pip install "modin[ray]"`, but this fails with the following error:
```
Collecting pandas==1.3.3 (from modin[ray])
Using cached pandas-…
-
### 🐛 Describe the bug
When trying to use `torchaudio.functional.lfilter` to generate training data, `lfilter` works as expected. However, when it is used in the loss computation, it returns a bunch of …
-
### 🐛 Describe the bug
I'm using a devcontainer to build on a MacBook with an Intel chip. I've had to run the build command a few times while lowering concurrency to get this far. I hope it's just a path…
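As a sketch of the "lower concurrency" workaround: assuming this is a PyTorch-style source build, the `MAX_JOBS` environment variable caps parallel compile jobs (the exact variable and value here are assumptions for illustration):

```shell
# Cap parallel compile jobs to reduce memory pressure during the build.
export MAX_JOBS=2
# Then re-run the build in the same shell, e.g.:
#   python setup.py develop
echo "building with MAX_JOBS=$MAX_JOBS"
```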
-
### 🐛 Describe the bug
The `set_model_state_dict` in `torch.distributed.checkpoint.state_dict` does not add the `module.` prefix to buffers when loading state_dicts into models wrapped with DDP.
Code
``…
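As context for why the prefix matters, here is a minimal single-process sketch (the gloo backend and localhost rendezvous are assumptions for illustration) showing that DDP prefixes buffer keys as well as parameter keys with `module.`:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process "distributed" setup so DDP can be constructed on CPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

# BatchNorm1d carries both parameters (weight, bias) and buffers
# (running_mean, running_var, num_batches_tracked).
model = torch.nn.BatchNorm1d(4)
ddp = DDP(model)

# Every key, buffer or parameter, comes back prefixed with "module.",
# so a loader that only prefixes parameters will miss the buffers.
keys = list(ddp.state_dict().keys())
print(keys)

dist.destroy_process_group()
```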
-
### 🐛 Describe the bug
```python
import torch
out = torch.empty(5).cuda()
b = torch.compile(torch.sin)(torch.zeros(5).cuda(), out=out)
```
results in the following triton code
```python
from…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
Migrated from [rt.perl.org#3306](https://rt-archive.perl.org/perl5/Ticket/Display.html?id=3306) (status was 'open')
Searchable as RT3306$
-
### 🐛 Describe the bug
If `torch.compile` with the `reduce-overhead` mode is used in combination with `DistributedDataParallel` for both training and inference, a random crash happens during inference after…
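A single-process stand-in for the setup described above (the real report wraps the model in `DistributedDataParallel` across ranks; the model and shapes here are illustrative assumptions):

```python
import torch

model = torch.nn.Linear(8, 8)
# "reduce-overhead" enables CUDA-graph capture on GPU runs.
compiled = torch.compile(model, mode="reduce-overhead")
# Compilation is lazy: the backend only runs on the first call, which is
# where graph capture (and the crash described above) would kick in.
```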
-
### 🐛 Describe the bug
Test code:
```python
import torch
self = torch.randn([1,1,1,1], dtype=torch.complex64)
other = torch.randn([1,1,1,1,2], dtype=torch.float64)
self.view_as(other)
```
…
-
### 🐛 Describe the bug
`torch.export.unflatten` unflattens the graph to preserve the original hierarchy of `nn.Module`s. However, the generated `call_module` node doesn't contain the meta['val'] information that th…