-
Dear contributors:
I am working on tensor contraction problems involving various formulas, such as:
$X(a,b)=\sum_{c,d}T(c,d)*H(a,b,c,d)$
$X(a,b)=\sum_{c,d}T(a,c)*T(b,d)*H(a,b,c,d)$
$X(a,b)=\sum_{c…
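For concreteness, the first two contractions above map directly onto NumPy's `einsum` (the dimension sizes and random tensors below are arbitrary demo values, not from the post; the post reuses the name `T` for differently-shaped tensors, so separate arrays are used here):

```python
import numpy as np

# Arbitrary small dimension sizes for the demo
na, nb, nc, nd = 3, 4, 5, 6
H = np.random.rand(na, nb, nc, nd)

# X(a,b) = sum_{c,d} T(c,d) * H(a,b,c,d)
T_cd = np.random.rand(nc, nd)
X1 = np.einsum('cd,abcd->ab', T_cd, H)

# X(a,b) = sum_{c,d} T(a,c) * T(b,d) * H(a,b,c,d)
T_ac = np.random.rand(na, nc)
T_bd = np.random.rand(nb, nd)
X2 = np.einsum('ac,bd,abcd->ab', T_ac, T_bd, H)

print(X1.shape, X2.shape)  # (3, 4) (3, 4)
```

The subscript string mirrors the formula: indices that appear on the left of `->` but not on the right (`c`, `d`) are summed over.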
-
Hi @nullplay,
I wanted to start a discussion on the MLIR tensors API in Finch-MLIR. In https://github.com/pydata/sparse/tree/main/sparse/mlir_backend we have an initial Tensor class which provides constr…
-
This is the code:
```
import torch
from pytorch_block_sparse import BlockSparseLinear
x = torch.randn(32, 128).to('cuda')
y = torch.randn(32, 64).to('cuda')
model = torch.nn.Sequential(
…
```
-
Maybe I can take advantage of these two libraries:
- [cusparse](https://docs.nvidia.com/cuda/cusparse/index.html)
- [mir.sparse](http://docs.mir.dlang.io/latest/mir_sparse.html)
Maybe this issue t…
-
Hello, I created a test script, which I was testing on an AArch64 platform, for DistilBERT inference using the WANDA sparsifier:
```
import torch
from transformers import BertForSequenceClassificatio…
```
-
Torch has support for sparse tensors; I don't think it will be hard to surface it up to DiffSharp (once we successfully codegen the necessary TorchSharp API, which @moloneymb is working on).
Do w…
dsyme updated
4 years ago
-
Are block-sparse tensors already supported, or are there plans to support this?
Coming from quantum chemistry, we often deal with sparsity limited to a specific range rather than a randomly equal di…
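Sparsity confined to contiguous ranges like this can be expressed with a block-sparse format, for instance SciPy's block sparse row (BSR) matrix. A minimal sketch, with a made-up 2×2-block layout rather than any real quantum-chemistry structure:

```python
import numpy as np
from scipy.sparse import bsr_matrix

# 6x6 matrix whose nonzeros are confined to 2x2 blocks (toy layout)
indptr = np.array([0, 1, 2, 3])   # one stored block per block-row
indices = np.array([0, 2, 1])     # block-column index of each stored block
data = np.arange(12, dtype=float).reshape(3, 2, 2)
A = bsr_matrix((data, indices, indptr), shape=(6, 6))

x = np.ones(6)
y = A @ x  # the matvec only touches the stored blocks
```

The point of the format is that the zero blocks are never stored or multiplied, which matches sparsity limited to known ranges better than an element-wise random pattern.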
-
👋 Hello Neural Magic community developers,
I encountered an issue while calculating the perplexity of a locally converted Llama3-8B sparse model using the llm-compress library. I'm referring to the spars…
-
### 🚀 The feature, motivation and pitch
I am working with weight matrices that look as follows:
As you can tell, considerable parts of the tensor are 0 and do not influence the output. Typical…
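As a sketch of the idea (the zero pattern below is hypothetical, since the actual matrices aren't shown): storing such a weight matrix in a sparse format lets the zero regions drop out of the product entirely.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical weight matrix: only the top-left 4x4 block is nonzero
W = np.zeros((8, 8))
W[:4, :4] = np.random.rand(4, 4) + 0.1

x = np.random.rand(8)
Ws = csr_matrix(W)   # only 16 of the 64 entries are stored
y = Ws @ x           # matches the dense product W @ x
```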
-
### 🐛 Describe the bug
**Description:**
In PyTorch 2.4, calling `.to(device)` on an `nn.Module` that contains sparse CSR tensors does not move the internal components of the sparse tensors (e.g., `c…
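For context on what "internal components" means here, a sparse CSR tensor carries its row pointers, column indices, and values as separate tensors, and all three would need to follow a `.to(device)` call. A CPU-only sketch showing those components (toy data, not the reporter's model):

```python
import torch

dense = torch.tensor([[0., 1.], [2., 0.]])
csr = dense.to_sparse_csr()

# The three components a device move must carry along
crow = csr.crow_indices()  # row pointers: [0, 1, 2]
col = csr.col_indices()    # column indices of the nonzeros: [1, 0]
vals = csr.values()        # nonzero values: [1., 2.]

# After module.to(device), one would expect all three to report that device
print(crow.device, col.device, vals.device)
```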