-
### What happened?
Compiling a phi-2 model for the vulkan-spirv backend with the target triple rdna2-unknown-linux gives the following error:
```
failed to translate executables
haldump/configured_state_u…
-
I think the whole package idea is a must-have for the Julia ecosystem, although it still feels experimental.
Some of us in the Julia community would like to use this for Llama2.jl GPU support, would be c…
-
The current version (bd9c18a4ce614b511216757d5962e934b56b2d09) also produces a large amount of output when the microphone is silent.
https://github.com/V-Sekai/godot-whisper/assets/153103332/13ce75ed-2c6f-4…
-
It seems like Intel Arc support is supposed to be present — it shows up as a Vulkan device, at least! But when trying to run this I get compile failures. Here's the detailed log from the command prompt on…
-
### What happened?
I hit an error building https://github.com/iree-org/iree/blob/main/samples/custom_dispatch/cuda/kernels/CMakeLists.txt on my Windows machine after a recent update to our LLVM submo…
-
1. Stride-2 conv2d:
```mlir
%8 = linalg.conv_2d_nhwc_hwcf_q {dilations = dense : vector, strides = dense : vector} ins(%3, %4, %c0_i32, %c0_i32 : tensor, tensor, i32, i32) outs(%7 : tensor) -> tensor
```
…
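For reference, the semantics of the quantized `linalg.conv_2d_nhwc_hwcf_q` op above can be sketched in plain NumPy. The concrete shapes and attribute values were stripped when the IR's `<...>` parts rendered, so the strides, dilations, and shapes below are assumptions (stride 2, per the item's title; both zero points are 0, matching the `%c0_i32` operands):

```python
import numpy as np

def conv2d_nhwc_hwcf_q(x, w, x_zp=0, w_zp=0, strides=(2, 2), dilations=(1, 1)):
    """Reference semantics of linalg.conv_2d_nhwc_hwcf_q.

    x: input, NHWC layout; w: filter, HWCF layout.
    Each product subtracts the respective zero point first, as the
    quantized named op does; accumulation is in int32.
    """
    n, h, wd, c = x.shape
    kh, kw, c2, f = w.shape
    assert c == c2, "channel dims of input and filter must match"
    sh, sw = strides
    dh, dw = dilations
    oh = (h - dh * (kh - 1) - 1) // sh + 1
    ow = (wd - dw * (kw - 1) - 1) // sw + 1
    out = np.zeros((n, oh, ow, f), dtype=np.int32)
    for b in range(n):
        for i in range(oh):
            for j in range(ow):
                for r in range(kh):
                    for s in range(kw):
                        for k in range(c):
                            xv = int(x[b, i * sh + r * dh, j * sw + s * dw, k]) - x_zp
                            out[b, i, j, :] += xv * (w[r, s, k, :].astype(np.int32) - w_zp)
    return out
```

With a 1x2x2x1 int8 input, a 1x1 filter of value 2, and stride 2, only the top-left input element is visited, so the single output element is `2 * x[0,0,0,0]`.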
-
Observed while running the downstream https://github.com/nod-ai/sharktank/blob/main/sharktank/tests/types/dataset_test.py
```
sharktank/tests/types/dataset_test.py::DatasetTest::testDatasetRoundtr…
-
Trying to run on the CPU. I have 16 GB of RAM.
```
shark_tank local cache is located at C:\Users\Roman\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devic…
-
I obtained model.onnx from MLPerf in the first step, and then:
1. `iree-import-onnx model.onnx -o model.mlir` (the conversion succeeded).
2. Then I ran `iree-compile --iree-input-type=onnx model.mlir --compile-…
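The two-step flow above can be sketched as follows; the original `iree-compile` invocation is truncated in the report, so everything after `--iree-input-type=onnx` below (the target backend and output flag) is an assumption, not the reporter's actual command:

```shell
# Step 1: import the ONNX model into MLIR.
iree-import-onnx model.onnx -o model.mlir

# Step 2: compile the MLIR. A minimal invocation might look like this
# (backend choice is illustrative; the reporter's flags are truncated):
iree-compile --iree-input-type=onnx model.mlir \
    --iree-hal-target-backends=llvm-cpu -o model.vmfb
```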
-
Hi 🙂,
I'm working on exporting `transformers` PyTorch-based models to MLIR with dynamic shapes.
Unfortunately, while the static-shape compilation from `torch_mlir` seems to work fine, when en…