-
![微信图片_20220124013615](https://user-images.githubusercontent.com/98278412/150690728-e591038a-f3ae-4864-856c-f3a08ca9e1b6.png)
While running this file, the line `denom = torch.sparse.sum(adj[batch, i], dim=1).to_dense()` …
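For context, `torch.sparse.sum(x, dim=1).to_dense()` reduces a sparse matrix along its columns and materializes the result densely. A minimal pure-Python sketch of the same row-sum semantics over COO-style (index, value) pairs — an illustration of the semantics, not the PyTorch implementation:

```python
# Pure-Python sketch of a row-sum over a 2-D sparse matrix stored in
# COO form: a list of (row, col) index tuples plus a parallel list of values.
def sparse_row_sum(indices, values, num_rows):
    """Sum each row of a sparse matrix and return a dense 1-D list."""
    dense = [0.0] * num_rows
    for (row, _col), val in zip(indices, values):
        dense[row] += val  # accumulate every stored nonzero into its row
    return dense

# 3x3 matrix with nonzeros at (0,1)=2.0, (0,2)=3.0, (2,0)=4.0
indices = [(0, 1), (0, 2), (2, 0)]
values = [2.0, 3.0, 4.0]
print(sparse_row_sum(indices, values, 3))  # → [5.0, 0.0, 4.0]
```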
-
### 🚀 The feature, motivation and pitch
The operator has been implemented in torch-xpu-ops. We need to reevaluate the skipped cases (in run_test_with_skip.py).
https://github.com/intel/torch-xpu-ops/pull/…
-
Not sure what the original script did since it had a default alpha value of 0.
```py
parser.add_argument("--str", type=float, help="Strength of the rehydration (-0.05..0.05)", default=0, required=…
```
-
This note tries to summarize the current state of sparse tensors in PyTorch. It describes important invariants and properties of sparse tensors, and various things that need to be fixed (e.g. empty sparse te…
-
Hello,
1. The current implementation of matrix multiplication uses the BRGEMM algorithm. Is there any implementation of a "Low Rank Approximation" approach to matrix multiplication in oneDNN? Is there a…
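For reference, the low-rank idea behind the question: if A (m×n) factors, exactly or approximately, as U (m×r) · V (r×n) with small r, then A·B can be computed as U·(V·B), replacing an O(m·n·p) product with two much cheaper ones. A minimal pure-Python sketch of that reassociation (not the oneDNN API):

```python
def matmul(a, b):
    """Naive dense matmul over lists of lists."""
    inner, cols = len(b), len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(inner))
             for j in range(cols)] for row in a]

def low_rank_matmul(u, v, b):
    """Compute (U @ V) @ B as U @ (V @ B), which is cheaper when rank r is small."""
    return matmul(u, matmul(v, b))

# A = U @ V is a rank-1 3x3 matrix; both orderings give the same product.
u = [[1.0], [2.0], [3.0]]                  # 3x1
v = [[1.0, 0.0, 1.0]]                      # 1x3
b = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3x2
assert low_rank_matmul(u, v, b) == matmul(matmul(u, v), b)
```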
-
Running the GlobalPoint model on a single machine with multiple GPUs produces the above error; running GlobalPoint with the multi-GPU code from other models does not raise the error.
-
### 🐛 Describe the bug
Backward fails with sparse gradients. Error: `RuntimeError: reshape is not implemented for sparse tensors`
Code to reproduce:
```python
import torch
from typing import …
```
-
## 🐛 Bug
When we call `tensor.detach()`, the set of operations supported on the detached tensor differs between the sparse and dense cases.
## To Reproduce
```python
import torch
t = torch.rand(3,3, requires_grad=T…
```
-
### 🚀 The feature, motivation and pitch
Hi,
I want to perform a sparse-dense BMM and compute gradients for the sparse matrix. Is there an operation in torch that does this efficiently? According to…
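To make the requested operation concrete, here is a pure-Python sketch of a batched sparse × dense product, with each batch element stored as COO triplets. This only illustrates the semantics being asked for; it is not an efficient or autograd-aware implementation:

```python
def sparse_dense_bmm(batch_indices, batch_values, batch_dense, n_rows):
    """Batched sparse @ dense: each batch element is a COO sparse matrix
    (a list of (row, col) tuples plus a parallel list of values)
    multiplied by a dense matrix given as a list of rows."""
    out = []
    for indices, values, d in zip(batch_indices, batch_values, batch_dense):
        n_cols = len(d[0])
        result = [[0.0] * n_cols for _ in range(n_rows)]
        for (r, c), v in zip(indices, values):
            for j in range(n_cols):
                result[r][j] += v * d[c][j]  # scatter each nonzero's contribution
        out.append(result)
    return out

# One batch element: 2x2 sparse A with A[0][1] = 2.0, dense B = identity.
out = sparse_dense_bmm([[(0, 1)]], [[2.0]], [[[1.0, 0.0], [0.0, 1.0]]], 2)
print(out)  # → [[[0.0, 2.0], [0.0, 0.0]]]
```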
-
### 🚀 The feature, motivation and pitch
I am working on a problem that requires looking up sparse tensor values based on a batch of indices. The problem can be abstracted by the following example:
…
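Since the original example is truncated, here is one possible abstraction of such a batched lookup, as a pure-Python sketch over COO storage. The helper name and signature are hypothetical, not a torch API:

```python
def sparse_lookup(indices, values, queries, default=0.0):
    """Look up a batch of index tuples in a sparse tensor stored as a list
    of index tuples and a parallel list of values; absent entries fall
    back to the default (the implicit zero of a sparse tensor)."""
    table = dict(zip(indices, values))  # hash map from index tuple to value
    return [table.get(q, default) for q in queries]

indices = [(0, 1), (2, 2)]
values = [5.0, 7.0]
print(sparse_lookup(indices, values, [(0, 1), (1, 1), (2, 2)]))  # → [5.0, 0.0, 7.0]
```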