-
## 🐛 Bug
Calling a traced module in a for-loop with a constant number of iterations from a scripted module is slower than tracing, at least with CUDA.
## To Reproduce
```python
import time
im…
```
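The repro above is cut off; a minimal sketch of the setup it describes (module, input shape, and iteration count are placeholders) looks like this:

```python
import torch

class Inner(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x + 1.0)

# Trace the inner module once with an example input.
inner = torch.jit.trace(Inner(), torch.randn(4))

class Looper(torch.nn.Module):
    def __init__(self, inner):
        super().__init__()
        self.inner = inner

    def forward(self, x):
        # Constant trip count: tracing would unroll these calls,
        # while scripting keeps an actual loop in the TorchScript IR.
        for _ in range(8):
            x = self.inner(x)
        return x

scripted = torch.jit.script(Looper(inner))
out = scripted(torch.randn(4))
```

Timing `scripted` against a fully traced version of the same loop (with `torch.cuda.synchronize()` around the calls when on GPU) is what exposes the reported gap.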
-
I greatly appreciate your work, both for its simplicity of use and for your commitment. I may be wrong, but the library seems very slow to use compared to other packages that do the same job.
I c…
-
### 🐛 Describe the bug
The `logical_and`, `logical_or`, and `logical_xor` operations trigger an INTERNAL ASSERT FAIL when `input` is a complex tensor on CUDA and `other` is on the CPU.
```py
import torch
from tor…
```
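Since the snippet is truncated, the following is a hedged reconstruction of the setup being described (values are illustrative; the cross-device call is guarded because it needs a GPU):

```python
import torch

a = torch.tensor([1 + 1j, 0 + 0j])   # complex `input`
b = torch.tensor([True, True])       # `other` stays on CPU

# On a single device the ops work: a complex element counts as True iff nonzero.
cpu_result = torch.logical_and(a, b)

# The failure needs the cross-device combination: complex `input` on CUDA,
# `other` on CPU. It is reported to raise an INTERNAL ASSERT FAIL instead of
# a clean device-mismatch RuntimeError.
if torch.cuda.is_available():
    try:
        torch.logical_and(a.cuda(), b)
    except RuntimeError as e:
        print(type(e).__name__, e)
```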
-
Hello,
Recently, I have been working on a task that needs better support for SparseTensor in PyTorch, and pytorch-sparse has helped me a lot. It seems that it currently does not support operations between two SparseTensor…
-
I came across this library a while ago, and really loved the concept. I ended up taking some inspiration from it to add `DataFrame` support to Num.cr.
Mine is a bit less flexible since it uses `Na…
-
### 🐛 Describe the bug
When doing an in-place operation on a CPU tensor, such as subtracting a column from all columns of the tensor, the following behavior occurs:
```python
import torch
a = torch.Tensor([[1…
```
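The snippet is truncated, so here is an assumed reconstruction of the aliasing surprise being described: the column view shares storage with the tensor it is subtracted from, so an in-place op can read values it has already overwritten. Cloning the column first makes the result unambiguous:

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])

# a[:, 0:1] is a view into a's storage; clone it so the in-place
# subtraction does not read elements it has already updated.
col = a[:, 0:1].clone()
a.sub_(col)   # equivalent to the out-of-place a - col
# a is now [[0., 1.], [0., 1.]]
```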
-
If we want to add more backends, we need to decouple numpy from the tensor class. I am thinking of having an IR that then maps to an implementation in each backend. The tensor would then just call the IR…
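A minimal sketch of that shape (all names hypothetical): the tensor emits abstract IR ops by name, and each backend registers a concrete kernel per op.

```python
from typing import Callable, Dict

# op name -> kernel, per backend
BACKENDS: Dict[str, Dict[str, Callable]] = {}

def register(backend: str, op: str):
    def deco(fn):
        BACKENDS.setdefault(backend, {})[op] = fn
        return fn
    return deco

@register("python", "add")
def _py_add(a, b):
    # reference kernel on plain Python lists
    return [x + y for x, y in zip(a, b)]

class Tensor:
    def __init__(self, data, backend="python"):
        self.data, self.backend = data, backend

    def __add__(self, other):
        # the tensor only names the IR op; the backend supplies the kernel
        kernel = BACKENDS[self.backend]["add"]
        return Tensor(kernel(self.data, other.data), self.backend)

t = Tensor([1, 2]) + Tensor([3, 4])
# t.data == [4, 6]
```

Adding a numpy or GPU backend would then mean registering new kernels, with no change to `Tensor` itself.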
-
## 🚀 Feature
Let `torch.diag` support batched matrices, i.e. tensors with >= 3 dimensions. Currently, `torch.diag` only supports 1d or 2d tensors. See https://github.com/pytorch/pytorch/blob/53596cd…
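In the meantime, the batched cases can often be covered with the existing batched primitives `torch.diagonal` (extract) and `torch.diag_embed` (construct):

```python
import torch

x = torch.arange(24.).reshape(2, 3, 4)    # a batch of two 3x4 matrices

# Extract each matrix's main diagonal; the leading batch dim is kept.
d = torch.diagonal(x, dim1=-2, dim2=-1)   # shape (2, 3)

# Build a batch of square diagonal matrices from a batch of vectors.
m = torch.diag_embed(d)                   # shape (2, 3, 3)
```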
-
I found this repo through Twitter and found it to be really nicely written and performant, so I wanted to use it for 3D segmentation, using a method from a GitHub repo called sam3d. So I combined parts…
-
_This issue is a high-level summary of the Chrome team's feedback on WebNN, posting here for further discussion with WG members._
--
Google strongly supports the work of the Web ML WG to bring o…