-
Hi, I've found a runtime error and made this minimized example:
```rust
#[test]
fn test_add_grad_decreasing_idx() {
    let mut cx = Graph::new();
    let a: GraphTensor = cx.tensor();
    let…
```
-
HIPS/autograd has entered unmaintained status, so we should plan to move away from it.
1. Fork it and maintain it ourselves.
2. Switch to a different library such as JAX or tinygrad (see the sketch below); any other options?
…
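For a sense of the migration cost, JAX's `grad` is close to a drop-in replacement for autograd's functional API (a minimal sketch; the function and values here are illustrative, not from our codebase):
```python
# HIPS/autograd style:
import autograd.numpy as anp
from autograd import grad as a_grad

# JAX near-equivalent: same functional API, jax.numpy in place of autograd.numpy.
import jax.numpy as jnp
from jax import grad as j_grad

def f_autograd(x):
    return anp.tanh(x)

def f_jax(x):
    return jnp.tanh(x)

print(a_grad(f_autograd)(1.0))  # derivative of tanh at 1.0
print(j_grad(f_jax)(1.0))       # same value, computed by JAX
```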
-
I have a question regarding the functionality of FTorch. Does FTorch currently support PyTorch's autograd feature for calculating first-order derivatives (Jacobians)? If not, are there any plans to ad…
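(Not speaking for the FTorch roadmap, but for reference, on the Python side PyTorch already exposes first-order Jacobians directly; a minimal sketch with an illustrative function:)
```python
import torch

def f(x):
    # Any differentiable R^n -> R^n function; this one is just for illustration.
    return x ** 2 + x.sum()

x = torch.randn(4)
J = torch.autograd.functional.jacobian(f, x)
print(J.shape)  # torch.Size([4, 4])
```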
-
Ever since originally adopting [`autograd`](https://github.com/HIPS/autograd), we've been concerned that most of the development energy from `autograd` has moved to…
-
Wondering if anyone on the Bend team or in the community has started thinking about a neural network library that runs on Bend or, more generally, on the HVM. Is the Bend API mature enough for that righ…
-
Thank you very much for providing such a good tool!
My problem is that when the input `A` is a 'real' sparse matrix, not one converted from a dense matrix, the `torch.autograd.gradche…
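A minimal sketch of the kind of setup described, assuming a COO matrix built directly from indices and values rather than via `.to_sparse()` (names and shapes are illustrative):
```python
import torch

# A "real" sparse matrix: constructed directly in COO format,
# not converted from a dense tensor with .to_sparse().
idx = torch.tensor([[0, 1, 2], [0, 1, 2]])
val = torch.randn(3, dtype=torch.double)
A = torch.sparse_coo_tensor(idx, val, (3, 3))

x = torch.randn(3, 1, dtype=torch.double, requires_grad=True)

def f(x):
    return torch.sparse.mm(A, x)

# gradcheck compares analytic and numeric gradients; it needs double precision.
print(torch.autograd.gradcheck(f, (x,)))
```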
-
### Description
I use autograd to calculate partial derivatives of functions of two variables (x, y). Since autograd is no longer supported, I'm trying to get the same results with JAX.
These …
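For reference, the JAX idiom for partial derivatives of a two-variable function is `jax.grad` with `argnums` (the function here is just an example, not the one from the issue):
```python
import jax
import jax.numpy as jnp

def f(x, y):
    return jnp.sin(x) * jnp.cos(y)  # illustrative f(x, y)

df_dx = jax.grad(f, argnums=0)  # ∂f/∂x
df_dy = jax.grad(f, argnums=1)  # ∂f/∂y

print(df_dx(1.0, 2.0))  # matches cos(1.0) * cos(2.0)
print(df_dy(1.0, 2.0))  # matches -sin(1.0) * sin(2.0)
```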
-
## 🚀 Memory clearing mechanism with torch.autograd.Function integration
Today we pass "saved for backward" tensors to the generated backward function inside a Python list and within the generated f…
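For context, this is the standard saved-for-backward mechanism in a hand-written `torch.autograd.Function` (a minimal sketch of the existing API, not the generated AOT backward discussed above):
```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Tensors stashed here stay alive until backward consumes them.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(3, requires_grad=True)
Square.apply(x).sum().backward()
print(x.grad)  # equals 2 * x
```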
-
Could autograd be implemented so that the Jacobian of the nodes can be computed? Thanks!
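If this refers to a Python autodiff stack, HIPS autograd already ships a `jacobian` operator that may cover this (a minimal sketch; the function is illustrative and may not match the asker's setting):
```python
import autograd.numpy as np
from autograd import jacobian

def f(x):
    # Elementwise R^2 -> R^2 map, so the Jacobian is a 2x2 diagonal matrix.
    return x ** 2 + np.sin(x)

J = jacobian(f)
print(J(np.array([1.0, 2.0])))  # diag(2*x + cos(x)) evaluated at [1, 2]
```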
-
### 🐛 Describe the bug
The test case fails when Dynamo inlines the builtin nn modules: it passes with the "eager" backend but fails with the "aot_eager" backend.
~~~
import torch
import copy
import oper…
~~~