-
When you execute transaction 0x6c51758aa1ae9506602fffb9194da427fe948314b74eb93cdc9570558d4a88d in starknet-replay, the process gets Killed.
This happens in starknet-replay on an x86 machine:
```
cargo run --release tx 0…
```
-
MLIR tblgen supports auto-generating attr/type/op/enum docs, and a dialect doc that contains them all. This is done in [OpDocGen.cpp](https://github.com/llvm/llvm-project/blob/main/mlir/tools/mlir-tblg…
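For reference, the generators in OpDocGen.cpp are typically driven from CMake via MLIR's `add_mlir_doc` helper. A minimal config sketch, assuming a hypothetical dialect named `MyDialect` (the dialect name and paths are placeholders, not from the original report):

```cmake
# add_mlir_doc(td_source output_doc_name output_subdir generator_flag)
# Hypothetical dialect files; -gen-op-doc and -gen-dialect-doc are the
# mlir-tblgen generators implemented in OpDocGen.cpp.
add_mlir_doc(MyDialectOps MyDialectOps Dialects/ -gen-op-doc)
add_mlir_doc(MyDialect MyDialect Dialects/ -gen-dialect-doc)
```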
-
Failing op in [Inception_v4_vaiq_int8.default.onnx.torch.elide.mlir](https://gist.github.com/AmosLewis/c9b0933d0d1d69176d89298e7b1ff8b5#file-inception_v4_vaiq_int8-default-onnx-torch-elide-mlir)
- ht…
-
Here is an idea that could shorten the code for the lean-mlir parser substantially.
The gist of this idea is to not do any typechecking ourselves, but to syntactically transform the llvm expression…
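A toy illustration of the idea (this is a hypothetical Python sketch, not the lean-mlir implementation): the parser rewrites the expression purely syntactically and never validates types itself, so any type error surfaces later in the host language's own typechecker.

```python
import re

def transform(expr: str) -> str:
    """Syntactically rewrite a binary LLVM-dialect expression into a
    target-style application, without any typechecking of our own."""
    m = re.fullmatch(r"llvm\.(\w+)\s+(%\w+),\s*(%\w+)\s*:\s*(\w+)", expr)
    if m is None:
        raise ValueError(f"unrecognized expression: {expr}")
    op, lhs, rhs, ty = m.groups()
    # No check that `ty` is legal for `op`: if it is not, the downstream
    # typechecker rejects the generated term, and the parser stays small.
    return f"{op} ({lhs.lstrip('%')} : {ty}) ({rhs.lstrip('%')} : {ty})"

print(transform("llvm.add %x, %y : i64"))  # add (x : i64) (y : i64)
```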
-
-
python: /project/lib/Dialect/TritonGPU/IR/Dialect.cpp:52: llvm::SmallVector mlir::triton::gpu::getElemsPerThread(mlir::Attribute, llvm::ArrayRef, mlir::Type): Assertion `0 && "getElemsPerThread not im…
-
`memoryview(memref).tolist()` does not print the values as expected for some dtypes, such as `bfloat16` and `float8`.
Also consider moving the `tolist()` and `pretty_print` APIs to MLIR-TRT's Python bindings.
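This is likely because `memoryview` format codes come from the `struct` module, which has no code for `bfloat16` or `float8`; the best a memoryview can do is reinterpret the raw bits. A minimal sketch (the byte buffers here are illustrative stand-ins for a memref's storage):

```python
import struct

# Four 32-bit floats packed into raw bytes, standing in for a memref buffer.
buf = struct.pack('4f', 1.0, 2.0, 3.0, 4.0)

# For dtypes with a struct format code (here 'f' for float32), casting works:
print(memoryview(buf).cast('f').tolist())  # [1.0, 2.0, 3.0, 4.0]

# bfloat16 has no struct/memoryview format code, so a memoryview can only
# reinterpret the bytes, e.g. as uint16 ('H'), yielding bit patterns:
bf16_bytes = struct.pack('2H', 0x3F80, 0x4000)  # bfloat16 encodings of 1.0, 2.0
print(memoryview(bf16_bytes).cast('H').tolist())  # [16256, 16384], not [1.0, 2.0]
```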
-
Thanks for adding support for Mamba2, but I ran into problems running a classification task with Mamba2 on V100 GPUs.
My env:
- torch 1.12.1+cu116
- triton 2.1.0
The config for Mamba2 is copied from the function …
-
https://github.com/ROCm/AMDMIGraphX/pull/3010/files#r1648254890
With #3010, MIGraphX can fuse pointwise inputs for the Dot/conv instruction for MLIR.
It is not handling Reshapes that happen on po…
-
#### Issue description
Using `grad` does not work when using dynamic one-shot.
* *Actual behavior:*
Crash happens in the following code:
```
@qml.qnode(dev, diff_method="best", mcm_method="one-s…