-
Hi, your work is amazing!
I have a question about formula 7 in the paper. Since the feature maps with dim _cp_ and feature maps with dim _(c-cp)_ are concatenated before the PWConv, the channel of…
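For intuition about the channel counts, here is a minimal sketch of that concatenation before the PWConv (shapes and variable names are assumed for illustration, not taken from the paper):

```python
import torch
import torch.nn as nn

# Hypothetical shapes: c total channels, of which cp are convolved and the
# remaining (c - cp) pass through untouched.
c, cp = 64, 16
feat_cp = torch.randn(1, cp, 8, 8)        # feature maps with dim cp
feat_rest = torch.randn(1, c - cp, 8, 8)  # feature maps with dim (c - cp)

# Concatenating along the channel axis restores the full channel count c,
# so the following PWConv (1x1 conv) sees c input channels.
x = torch.cat([feat_cp, feat_rest], dim=1)
pwconv = nn.Conv2d(in_channels=c, out_channels=c, kernel_size=1)
y = pwconv(x)
print(x.shape, y.shape)  # torch.Size([1, 64, 8, 8]) torch.Size([1, 64, 8, 8])
```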
-
## Bug Description
The MWE below will segfault if a material has `use_displaced_mesh = true` and the postprocessor does not. While the case below doesn't nee…

-
### 🐛 Describe the bug
`torch.compile` incorrectly reduces `nn.Parameter`, causing model execution to fail
```py
import torch
import torch.nn as nn
torch.manual_seed(420)
class Model2…
-
Repro:
```
from iree.compiler import compile_str
CODE = """
module {
func @main(%arg0: tensor, %arg1: tensor, %arg2: tensor) -> tensor {
%0 = "mhlo.scatter"(%arg0, %arg1, %arg2) ( {
…
-
The current linalg-to-TPP mapping is limited to specific `linalg.generic` ops. One of the requirements is that the generic contains only a single scalar operation within its body. This is due to the limit…
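For reference, a hypothetical example (shapes and function name assumed) of the form the mapping accepts: a `linalg.generic` whose body holds a single scalar operation, here `arith.addf`:

```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @add(%a: tensor<4x4xf32>, %b: tensor<4x4xf32>,
               %init: tensor<4x4xf32>) -> tensor<4x4xf32> {
  %0 = linalg.generic
      {indexing_maps = [#map, #map, #map],
       iterator_types = ["parallel", "parallel"]}
      ins(%a, %b : tensor<4x4xf32>, tensor<4x4xf32>)
      outs(%init : tensor<4x4xf32>) {
  ^bb0(%in0: f32, %in1: f32, %out: f32):
    // Single scalar op in the body; a generic with two or more scalar
    // ops before the yield would not satisfy the current requirement.
    %sum = arith.addf %in0, %in1 : f32
    linalg.yield %sum : f32
  } -> tensor<4x4xf32>
  return %0 : tensor<4x4xf32>
}
```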
-
### 🐛 Describe the bug
For lowering graphs using compile_fx_inner, we are hitting a cyclical error re: decompositions for 20 operators. These operators are all present in the torch decomp table, but …
-
The original issue: https://github.com/dipterix/lazyarray/issues/3
Is it possible that subsetting a lazyarray again yields a lazyarray?
I am a bit puzzled whether I use your package correctly, e…
-
### 🐛 Describe the bug
The sum operator, in its int32 variant with the output tensor initialized as empty, fails with **dtype argument and out dtype must match in reduction**.
Please use the code below to reproduce …
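The original repro is truncated; a minimal sketch of the reported failure, assuming an int32 reduction written into an `out` tensor created with `torch.empty` (whose default dtype, float32, does not match):

```python
import torch

# Hypothetical minimal repro (assumed from the description): requesting an
# int32 sum while `out` was initialized empty with the default float32 dtype.
x = torch.ones(4, dtype=torch.int32)
out = torch.empty(0)  # float32 by default
try:
    torch.sum(x, dim=0, dtype=torch.int32, out=out)
except RuntimeError as e:
    print(e)  # dtype argument and out dtype must match in reduction

# Initializing `out` with a matching dtype avoids the mismatch:
out_i32 = torch.empty(0, dtype=torch.int32)
torch.sum(x, dim=0, dtype=torch.int32, out=out_i32)
print(out_i32.item())  # 4
```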
-
# 🐛 Bug
Despite my model and all tensors in the script being on the GPU, `fit_gpytorch_model` complains about tensors existing on both cuda:0 and the CPU.
## To reproduce
This code works when I ju…
-
### 🐛 Describe the bug
When using `TorchRefsMode`, aot_function cannot handle `reshape`.
I am not sure that my usage is correct, but my goal is to decompose a graph to prim ops, in order to later …