-
Hi, is it possible to allow noop expands?
Example:
```rust
#[test]
fn noop_expand() {
    let cx = Graph::new();
    let a = cx.tensor::();
    let mut b = a.clone();
    b = a.expand::();
…
-
Thank you for your excellent work!
When I try to compile your code, I run into some trouble. Could you please check the error below? If you have encountered it before, please tell me the solution. Thank you!
CMake …
-
Hi, I noticed that before feeding the tensor into the network, you don't use torch.autograd.Variable to convert it to a Variable. How does this work?
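(Aside, not part of the original question: since PyTorch 0.4, `torch.autograd.Variable` has been merged into `Tensor`, so a plain tensor with `requires_grad=True` participates in autograd directly, with no wrapping step. A minimal sketch:)

```python
import torch

# Since PyTorch 0.4, Variable and Tensor are merged: no explicit
# Variable(...) wrapping is needed before the forward pass.
x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()
y.backward()
print(x.grad)  # tensor([2., 2., 2.])
```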
-
Running the run_telechat_lora.sh script, with no other files modified.
Data preprocessing completes without issues, but once fine-tuning starts, this error is raised:
UnboundLocalError: cannot access local variable 'dim' where it is not associated with a value
***** Running training *****
Beginning of Epoc…
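(Aside, not from the original report: an `UnboundLocalError` of this kind usually means the variable is assigned on only some branches of a function. A minimal Python sketch with a hypothetical `pick_dim` helper:)

```python
def pick_dim(shape):
    # 'dim' is assigned only for multi-axis shapes, so a 1-D shape
    # reaches the return statement with 'dim' still unbound.
    if len(shape) > 1:
        dim = 1
    return dim

print(pick_dim((4, 5)))  # 1
try:
    pick_dim((4,))
except UnboundLocalError as e:
    print(type(e).__name__)  # UnboundLocalError
```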
-
### 🐛 Describe the bug
Hi !
I'm trying to backpropagate through my model along multiple directions, so I'm using `torch.autograd.grad` with `is_grads_batched=True`. I had no problem using it on an MLP, but wh…
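(For reference, a minimal working sketch of `is_grads_batched=True`; the layer shape and direction vectors are illustrative assumptions, not taken from the issue:)

```python
import torch

model = torch.nn.Linear(4, 3)
x = torch.randn(4)
out = model(x)

# Three direction vectors; with is_grads_batched=True, each row of
# grad_outputs is treated as a separate backward direction (vmapped
# internally), so no Python loop over directions is needed.
dirs = torch.eye(3)
(jac,) = torch.autograd.grad(out, model.weight, grad_outputs=dirs,
                             is_grads_batched=True)
print(jac.shape)  # torch.Size([3, 3, 4]): one weight gradient per direction
```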
-
In order of importance:
- setup_context
- vmap (since we support vmap in torch.compile now!)
- jvp (we don't support jvp in torch.compile, so this is not going to work anytime soon.)
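(For context, a minimal sketch of the `setup_context` style mentioned above, using a hypothetical `Square` Function; assumes PyTorch >= 2.0, where `forward` takes no `ctx` and state-saving moves into `setup_context`:)

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(x):
        # PyTorch 2.0 style: forward receives only the inputs, no ctx.
        return x * x

    @staticmethod
    def setup_context(ctx, inputs, output):
        # State for backward is saved here instead of in forward.
        (x,) = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.tensor([3.0], requires_grad=True)
Square.apply(x).backward()
print(x.grad)  # tensor([6.])
```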
Repro:
```…
-
### 🚀 The feature, motivation and pitch
`functorch` is deprecated in favor of `torch.func` in PyTorch 2.0. The API [`functorch.make_functional`](https://pytorch.org/functorch/1.13/generated/functor…
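(For reference, the usual migration target is `torch.func.functional_call`; a minimal sketch with an illustrative linear layer, assuming PyTorch >= 2.0:)

```python
import torch
from torch.func import functional_call

model = torch.nn.Linear(3, 2)
# Instead of functorch.make_functional(model), pass the parameters
# explicitly as a dict on each call.
params = dict(model.named_parameters())
x = torch.randn(1, 3)
out = functional_call(model, params, (x,))
print(out.shape)  # torch.Size([1, 2])
```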
-
### 🐛 Describe the bug
For common use cases, I think users would apply the optimizer each run which would clear the grads, and not run into this problem.
In polyfill, we decompose accumulate gr…
-
### 🐛 Describe the bug
```python
lib = torch.library.Library("bar", "FRAGMENT")
lib.define("foo(Tensor x) -> Tensor")
def foo_impl(a):
    return a.clone()
lib.impl("foo", foo_impl, "CPU…
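# (Aside, not from the original report: a complete, runnable version of the
# same registration pattern under a different namespace "baz", so it does
# not collide with the truncated snippet above. Assumes PyTorch >= 2.0.)
import torch

lib2 = torch.library.Library("baz", "FRAGMENT")
lib2.define("foo(Tensor x) -> Tensor")

def foo_impl2(a):
    return a.clone()

lib2.impl("foo", foo_impl2, "CPU")

t = torch.randn(2)
print(torch.equal(torch.ops.baz.foo(t), t))  # True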
-
Hi, I'm modifying the torch-ngp code to create a model that updates the pose. I created a new custom autograd function with forward and backward. However, I wonder why the backward I defined is not execu…
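(Aside, not from the original question: a common cause is that no input to `apply()` has `requires_grad=True`, in which case autograd never reaches the custom backward. A minimal sketch with a hypothetical `Scale` Function whose backward does run:)

```python
import torch

class Scale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        # Only invoked when some input to apply() requires grad and
        # backward() is actually reached from the loss.
        return grad_out * 2

x = torch.randn(3, requires_grad=True)
Scale.apply(x).sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```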