-
As outlined in Meta Ticket #31991, a tensor can admit different types of decompositions / tensor networks. We propose to add visualizations to the tensors (depending on the decomposition used), possibly…
-
### 🐛 Describe the bug
```python
def forward(self, x, H, W):
    """Forward function.

    Args:
        x: Input feature, tensor size (B, H*W, C).
        H, W: Spatial res…
-
## 🚀 Feature
Currently you cannot use digits in names of tensors. Please allow digits, so that `torch.randn(64, 64, 4, 4, names=("height", "width", "channel_0", "channel_1"))` works rather than thr…
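As a quick sanity check (a minimal stdlib sketch, not PyTorch's own validation logic), the names in the example above are already valid Python identifiers, so allowing them would be consistent with identifier-style naming rules:

```python
# Hypothetical illustration: the requested dimension names are valid
# Python identifiers, the usual constraint on tensor dimension names.
proposed_names = ("height", "width", "channel_0", "channel_1")

# str.isidentifier() is the stdlib check for a valid Python identifier.
print(all(name.isidentifier() for name in proposed_names))
```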
-
I have developed some code for hierarchical networks in `quimb` where all the tensors except the root are isometries (think MERAs and hierarchical Tucker). The idea was to develop methods to truncate…
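For concreteness, the defining property being relied on here can be checked with a minimal plain-Python sketch (not `quimb`'s API): a tensor reshaped to a matrix `W` of shape `(n, k)` with `n >= k` is an isometry when `W^T W = I_k`, which is what MERA-style truncation preserves.

```python
# Minimal sketch: verify the isometry condition W^T W = I_k for a small
# matrix, using plain nested-list linear algebra.
def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_isometry(W, tol=1e-12):
    WtW = matmul(transpose(W), W)
    k = len(WtW)
    return all(abs(WtW[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(k) for j in range(k))

W = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]  # a 3x2 isometric embedding
print(is_isometry(W))
```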
-
### Expected behavior
With `default.qubit`, the code runs without any issues, so replacing `lightning.qubit` with `default.qubit` gives the expected behaviour.
### Actual behavior
The code cras…
-
## 🚀 Feature
I would like to request the addition of support for Recurrent Neural Networks (RNNs) in the KFACOptimizer class. Currently, the KFACOptimizer class works for linear and 2D convolution …
-
Hey dev team.
I have been working on PEPS algorithms for simulating 2D fermionic systems with the symmetric backend, and it turned out the symmetric backend is very slow for PEPS tensors. I did som…
-
We should build a method to allow users to find the top eigenvector of a linear operator defined by a tensor network.
My proposed API is as follows:
```
tensor = tn.eigensolution(in_edges=list_of_e…
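Independent of the final API shape, the underlying computation can be sketched with plain power iteration (a minimal stand-in, not the proposed `tn.eigensolution` implementation; a real version would apply the tensor-network contraction as the linear operator instead of an explicit matrix):

```python
# Minimal power-iteration sketch: find the top eigenpair of a linear
# operator given only as a matrix-vector product ("matvec") function.
def power_iteration(matvec, dim, iters=100):
    v = [1.0] * dim
    for _ in range(iters):
        w = matvec(v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient v . (A v) gives the eigenvalue estimate.
    eigval = sum(vi * wi for vi, wi in zip(v, matvec(v)))
    return eigval, v

# Toy operator: a 2x2 diagonal matrix with top eigenvalue 2.
A = [[2.0, 0.0], [0.0, 1.0]]
mv = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
val, vec = power_iteration(mv, 2)
print(round(val, 6))  # → 2.0
```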
-
A use case: storing a full backtracking pointer matrix can be acceptable for Needleman-Wunsch/CTC alignment (a 4x memory saving compared to a uint8 representation) if a 2-bit data type is used. Currently it's possible to…
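The packing scheme the request describes can be sketched in plain Python (a hypothetical illustration, not an existing PyTorch API): four 2-bit backtracking pointers (values 0-3) fit into one byte, giving the stated 4x saving over uint8.

```python
# Pack a list of 2-bit values (0..3) into bytes, four per byte.
def pack2bit(values):
    out = bytearray()
    for i in range(0, len(values), 4):
        byte = 0
        for j, v in enumerate(values[i:i + 4]):
            byte |= (v & 0b11) << (2 * j)
        out.append(byte)
    return bytes(out)

# Recover the first n values from the packed representation.
def unpack2bit(data, n):
    return [(data[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]

ptrs = [0, 1, 2, 3, 3, 2, 1, 0, 1]   # 9 pointers -> 3 bytes instead of 9
packed = pack2bit(ptrs)
print(len(packed), unpack2bit(packed, len(ptrs)) == ptrs)  # → 3 True
```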
-
**TL;DR:** Implementing block-sparse operations for faster matrix-multiplication.
Is this something worth adding to PyTorch?
Goals:
1. Faster matrix-multiplication by taking advantage of block-…
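The core idea can be sketched in plain Python (a minimal illustration of the technique, not a PyTorch implementation proposal): store only the nonzero blocks keyed by block coordinates, so multiplication skips absent (all-zero) blocks entirely.

```python
# Block-sparse matrix-vector multiply: the matrix is a dict mapping
# (block_row, block_col) -> dense square block of size bs x bs.
def block_sparse_matmul(blocks, x, n_block_rows, bs):
    y = [0.0] * (n_block_rows * bs)
    for (bi, bj), block in blocks.items():   # only stored blocks are visited
        for r in range(bs):
            acc = 0.0
            for c in range(bs):
                acc += block[r][c] * x[bj * bs + c]
            y[bi * bs + r] += acc
    return y

# 4x4 matrix with 2x2 blocks; only two of the four blocks are nonzero.
blocks = {
    (0, 0): [[1.0, 0.0], [0.0, 1.0]],   # identity block
    (1, 1): [[2.0, 0.0], [0.0, 2.0]],   # 2 * identity block
}
x = [1.0, 2.0, 3.0, 4.0]
print(block_sparse_matmul(blocks, x, n_block_rows=2, bs=2))  # → [1.0, 2.0, 6.0, 8.0]
```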