-
We already have an e2e matmul working with the in-flight ObjectFifo backend.
The current in-flight branch, maintained by @jtuyls, is https://github.com/nod-ai/iree-amd-aie/tree/jornt_cpp_pipeline…
-
### Request description
We've been using StableHLO downstream in the [IREE project](https://github.com/iree-org/iree), specifically these Bazel targets:
```bzl
"@stablehlo//:chlo_ops", …
-
When I run the demo in `examples/bert.py` and invoke `iree_torch.compile_to_vmfb(linalg_on_tensors_mlir, args.iree_backend)`, it reports **expected mlir::RankedTensorType, but got: 'i32'**
Comple…
-
### Request description
See https://github.com/iree-org/iree/issues/6160. Is the current expectation that an MHLO consumer can arbitrarily decide rounding behavior? (And so backends/devices may diffe…
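To see why leaving the rounding mode to the consumer matters, here is a minimal pure-Python sketch (not MHLO code; the function names are illustrative) of two common conventions for rounding `.5` during float-to-int conversion. Two backends that each pick one of these will agree on most inputs but diverge on exactly the half-way values.

```python
import math

def round_half_to_even(x: float) -> int:
    # Python's built-in round() implements banker's rounding:
    # half-way values go to the nearest even integer.
    return round(x)

def round_half_away_from_zero(x: float) -> int:
    # The other common convention: 0.5 always rounds away from zero.
    if x >= 0:
        return int(math.floor(x + 0.5))
    return int(math.ceil(x - 0.5))

# The two conventions disagree on 0.5, 2.5, and -0.5,
# but agree on 1.5 (both give 2).
for x in (0.5, 1.5, 2.5, -0.5):
    print(x, round_half_to_even(x), round_half_away_from_zero(x))
```

A spec that pins down one of these modes removes the device-to-device divergence; a spec that does not makes such divergence conformant.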
-
Hi, I'm trying to get h2o.ai's h2oGPT model supported via SHARK. The issue is that the model takes six hours for IREE compilation, which makes it very hard to debug the numeric issues, which req…
-
Take this file as input:
```
func.func @conv2d_accumulate_2_32_32_32_times_3_3_64_dtype_i1_i1_i1(%lhs: tensor, %rhs: tensor, %acc: tensor) -> tensor {
%result = linalg.conv_2d_nchw_fchw {dilations =…
-
for the given IR
```mlir
module {
func.func @torch_jit(%arg0: !torch.vtensor, %arg2: !torch.vtensor, %arg3:!torch.vtensor ) -> !torch.vtensor attributes {torch.onnx_meta.ir_version = 7 : si64,…
-
Following the work at https://github.com/iree-org/iree/issues/17957 and https://github.com/iree-org/iree/issues/16203, it is just about time to migrate away from the GitHub Actions runners hosted on G…
-
## PyTorch v1.0 Eager Mode
Some background from PyTorch's site on eager mode vs. graph/JIT/FX mode:
>"PyTorch supports two execution modes [1]: eager mode and graph mode. In eager mode, operat…
-
### What happened?
```
latest.mlir:3:10: error: 'func.func' op unhandled function with multiple blocks
%0 = flow.dispatch.region -> (tensor) {
^
latest.mlir:2:3: note: called from
…