-
**Describe the bug**
When running a simple model including **torch.nn.LayerNorm** using DeepSpeed ZeRO-3 with torch.compile and [compiled_autograd](https://github.com/pytorch/tutorials/blob/main/interme…
-
We currently "fudge" autograd.Function by running through the forward as if it were the function itself and relying on the differentiation of that to work.
(This is not good when there is `.detach()` or some su…
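A minimal sketch (hypothetical, not the project's actual code) of why differentiating the forward body breaks down once `.detach()` is involved: the Function's declared backward supplies a gradient, but re-deriving the forward loses it entirely.

```python
import torch

# Hypothetical example: an autograd.Function whose backward is NOT
# recoverable by differentiating its forward, because forward() detaches.
class ScaleButDetached(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # detach() severs the autograd graph inside the forward body
        return x.detach() * 2.0

    @staticmethod
    def backward(ctx, grad_out):
        # the author's intended gradient: d(2x)/dx = 2
        return grad_out * 2.0

x = torch.ones(3, requires_grad=True)
ScaleButDetached.apply(x).sum().backward()
print(x.grad)  # the declared backward runs: tensor([2., 2., 2.])

# "Fudging" the Function by differentiating its forward body instead:
x2 = torch.ones(3, requires_grad=True)
y = x2.detach() * 2.0
print(y.requires_grad)  # False -- the gradient is silently lost
```

The two paths disagree exactly because `.detach()` is invisible to the declared backward but fatal to naive re-differentiation of the forward.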
t-vi updated 3 months ago
-
```bash
(Engine pid=531663) File "/home/shaoyuw/miniconda3/envs/cu122/lib/python3.12/site-packages/torch/utils/_device.py", line 79, in __torch_function__
(Engine pid=531663) return func(*args…
-
Hi, when I ran test.py after installing as follows:
```bash
cd models/dino/ops
python setup.py build install
# unit test (should see all checking is True)
python test.py
cd ../../..
```
…
-
I'm trying LoRA fine-tuning. I have decent results, but I see the following warnings.
```
/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/nn/modules/module.py:1877: U…
-
Running the llama backward pass generates a select op from the concat op in the forward pass. The concat op is then lowered to a select op in the autograd pass. Next, in the post-autograd stage, the select op is decomposed in sequ…
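As a minimal illustration (using stock PyTorch eager mode, not the compiler stack in question): a forward-pass `torch.cat` does route gradients back through per-input slicing in its backward, which is where the select-style ops come from.

```python
import torch

# Minimal repro of the forward-concat / backward-slice relationship:
a = torch.ones(2, requires_grad=True)
b = torch.ones(3, requires_grad=True)

out = torch.cat([a, b])  # concat in the forward pass
out.sum().backward()     # backward must split the incoming gradient
                         # back into slices for a and b

print(a.grad.shape)  # torch.Size([2])
print(b.grad.shape)  # torch.Size([3])
```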
-
Ever since originally adopting [`autograd`](https://github.com/HIPS/autograd), we've been concerned that most of the development energy from [`autograd`](https://github.com/HIPS/autograd) has moved to…
-
Hello author,
I hope you're doing well. I'm encountering an issue that seems to be related to KNN, but it's peculiar in that the error only occurs when I run the program in debug mode; it doesn't h…
-
### 🚀 The feature, motivation and pitch
Hi, I noticed that we already have https://github.com/pytorch/pytorch/pull/125946, which ported the fbgemm-related jagged tensor operators. Do we have a plan to reg…
-
### System Info
```Shell
- `Accelerate` version: 0.35.0.dev0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- `accelerate` bash location: /usr/local/bin/accelerate
- Python version: 3.10.12
-…