-
### 🚀 The feature, motivation and pitch
It would be great to have a general parallel prefix sum (associative scan) operation in PyTorch, something like [associative_scan](https://jax.readthedocs.io…
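For illustration, a minimal reference sketch of the requested semantics, written as a plain loop over PyTorch tensors. The function name and the loop-based implementation are mine, not a proposed API; a real kernel would use a parallel (Blelloch-style) prefix scan.

```python
import torch

def associative_scan_reference(op, x, dim=0):
    """Inclusive scan along `dim` with an arbitrary associative binary op.

    Naive O(n) reference for clarity only; a real implementation
    would parallelize the prefix computation.
    """
    slices = list(torch.unbind(x, dim=dim))
    for i in range(1, len(slices)):
        slices[i] = op(slices[i - 1], slices[i])
    return torch.stack(slices, dim=dim)

x = torch.arange(1, 6, dtype=torch.float32)
print(associative_scan_reference(torch.add, x))  # matches torch.cumsum(x, 0)
print(associative_scan_reference(torch.mul, x))  # matches torch.cumprod(x, 0)
```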
-
> [!NOTE]
> Please do not edit the table manually.
Used to track the current support status.
Based on: 5fb978f052f9b9de6645302943d20f29beb29d6d
| math | dynamic-graph Tensor | legacy IR Var…
-
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
…
-
### Expected Behavior
Model loading should not take this much time.
### Actual Behavior
It shows that model loading will require more than 21 hours.
### Steps to Reproduce
[e_workflow.j…
-
Post-softmax BMM with batch 6 hangs with this setup for some reason:
```
pytest tests/python_api_testing/models/bert_large_performant/unit_tests/test_bert_large_matmuls_and_bmms_with_mixed_precision…
```
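For reference, a minimal sketch in plain PyTorch of the op pattern the test exercises: a softmax followed by a batched matmul. The batch size of 6 comes from the report; the other shapes are illustrative, not those used by the failing test, which runs a device-specific kernel.

```python
import torch

# Illustrative shapes only; batch size 6 is taken from the report.
batch, seq, head_dim = 6, 384, 64
scores = torch.randn(batch, seq, seq)
v = torch.randn(batch, seq, head_dim)

attn = torch.softmax(scores, dim=-1)  # post-softmax attention weights
out = torch.bmm(attn, v)              # batched matmul of the same shape as in the test
print(out.shape)                      # torch.Size([6, 384, 64])
```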
-
Design: #368
1. [ ] Sparse block lowering. (transformation)
1. [x] Sparse/Dense coordinate transformation. (@MasterJH5574 WIP)
1. The same sparse iterator viewed in different sparse…
-
### 🐛 Describe the bug
The following error is observed for the out variants of the topk, bmm, and max ops:
Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemen…
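For context, a minimal sketch of what the out variants of these ops look like in plain eager mode; the shapes are illustrative, and this omits the `__torch_dispatch__` subclass that actually triggers the failure.

```python
import torch

x = torch.randn(4, 8)
a, b = torch.randn(2, 3, 5), torch.randn(2, 5, 7)

# out variant of topk: writes into a (values, indices) tuple
values, indices = torch.empty(4, 3), torch.empty(4, 3, dtype=torch.long)
torch.topk(x, k=3, dim=1, out=(values, indices))

# out variant of bmm: writes into a preallocated result tensor
out = torch.empty(2, 3, 7)
torch.bmm(a, b, out=out)

# out variant of max along a dim: also a (values, indices) tuple
max_vals, max_idx = torch.empty(4), torch.empty(4, dtype=torch.long)
torch.max(x, dim=1, out=(max_vals, max_idx))
```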
-
https://projects.bcc.no/desk/tickets/10517494/messages
https://projects.bcc.no/desk/tickets/10511526/messages
- [x] test on saucelabs, lambdatest, browserstack
-
## 🐛 Bug
I found that `torch.einsum` is much slower with fp16 than with fp32.
When the input shapes are (a,b,c) and (a,c,d), `matmul` becomes much slower as well.
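A minimal timing sketch of the comparison described above; the shapes are illustrative and a CUDA device is assumed.

```python
import torch

def time_einsum(dtype, a=64, b=1024, c=1024, d=1024, iters=50):
    x = torch.randn(a, b, c, device="cuda", dtype=dtype)
    y = torch.randn(a, c, d, device="cuda", dtype=dtype)
    torch.einsum("abc,acd->abd", x, y)  # warm-up
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        torch.einsum("abc,acd->abd", x, y)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per call

print("fp32:", time_einsum(torch.float32), "ms")
print("fp16:", time_einsum(torch.float16), "ms")
```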
## To…
-
**Describe the bug**
While working on the revisions of the tutorial paper, I noticed that when we estimate the `mixture3p` using random effects for the `set_size` variable, there are still random e…