-
C. Elliott (2018): The simple essence of automatic differentiation. ICFP.
-
Hi PyEpo team,
There seems to be an error when I call `backward` on a loss produced by the `SPOPlus` module when the module was called with a batched input. Take for instance the test case as defined…
-
This is more of a question than an issue, but is there any plan or opinion on supporting Array API compatibility in autograd? Specifically, I'm wondering about the possibility of implementing m…
-
Autograd is really nice -- and right now there is no good way of calculating gradients for our single `linear` model. It would be great to start adding a generic framework to handle this.
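As a starting point, such a generic framework could be built around a tiny reverse-mode tape. A minimal sketch follows; the `Var` class and its method names are purely illustrative, not a proposed API:

```python
# Minimal reverse-mode autodiff sketch: each Var records the Vars it was
# built from together with the local derivative with respect to each.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the upstream gradient, then push it to the parents
        # scaled by each local derivative (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Gradients of a linear model y = w*x + b with respect to all inputs.
w, x, b = Var(3.0), Var(2.0), Var(1.0)
y = w * x + b
y.backward()
# w.grad == 2.0, x.grad == 3.0, b.grad == 1.0
```

This recursive formulation is exponential on shared subexpressions; a real implementation would topologically sort the tape first, but the local-derivative bookkeeping is the same.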
-
Hi! Thank you for your fantastic work. When calculating the gradient of the SDF flow (normal) at every timestamp, why didn't you use the `torch.autograd` function?
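For context, the spatial gradient (normal) of an SDF can be obtained with `torch.autograd.grad`. A minimal sketch, with a toy analytic sphere SDF standing in for the learned network:

```python
import torch

def sdf(p):
    # Toy SDF of a unit sphere; a stand-in for the actual SDF network.
    return p.norm(dim=-1) - 1.0

# Query points must require grad so autograd can differentiate w.r.t. them.
p = torch.tensor([[0.0, 0.0, 2.0]], requires_grad=True)
d = sdf(p)

# Normals are the spatial gradient of the SDF at the query points;
# create_graph=True keeps the result differentiable for later losses.
(normals,) = torch.autograd.grad(d.sum(), p, create_graph=True)
# For a sphere SDF the gradient at p is p / ||p||, here [[0, 0, 1]].
```

The alternative, hand-deriving the normal analytically, avoids the extra backward pass but has to be kept in sync with the network architecture.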
-
### 🐛 Describe the bug
torchbench_amp_bf16_training
xpu train torchrec_dlrm
ERROR:common:
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-op…
-
### 🐛 Describe the bug
In the following code we add a backend-specific autograd kernel (e.g. `AutogradCUDA`) for the aten builtin operator "tanh". Originally, it does not have a kernel for "AutogradCUDA",…
-
UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please us…
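The warning's recommended fix is to switch to full backward hooks. A minimal sketch using `register_full_backward_hook`, with an illustrative toy model:

```python
import torch
import torch.nn as nn

# Toy model whose forward spans multiple autograd Nodes.
model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 1))

grads = {}

def hook(module, grad_input, grad_output):
    # Full backward hooks receive complete grad_input/grad_output tuples
    # even when the module's forward creates several autograd Nodes.
    grads["out"] = grad_output[0]

handle = model[0].register_full_backward_hook(hook)
model(torch.randn(2, 3)).sum().backward()
handle.remove()
# grads["out"] now holds the gradient w.r.t. the first layer's output,
# shape (2, 4) for this batch.
```

Unlike the deprecated `register_backward_hook`, the full variant guarantees no entries of `grad_input` are silently dropped.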
-
This is a brief example that has been edited from the README.md file:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from flashfftconv import FlashDepthWiseConv1d
B=4
…
```
-
Running `tests/test_spherical_precompute.py::test_transform_inverse_healpix` on a MacOS (`arm64`) runner with Python 3.9 or above generates a segmentation fault [with output](https://github.com/astro-…