Cambricon / triton-linalg

Development repository for the Triton-Linalg conversion
Apache License 2.0
144 stars 12 forks

error: redefinition of symbol named 'transpose_memref' #4

Closed: xuel0707 closed this issue 4 months ago

xuel0707 commented 4 months ago
  1. Run the command: triton-linalg-opt --convert-triton-to-linalg test/Dialect/LinalgExt/ops.mlir
  2. It reports an error (screenshot attached in the original issue; per the title, a redefinition of the symbol named 'transpose_memref').
xuel0707 commented 4 months ago
  1. Run python3 ./triton-mlu-examples/add.py to generate add.ttir under $HOME/.triton/cache/eb94d36b18eabb943d62528aa76a9b05;
  2. Run triton-linalg-opt --convert-triton-to-linalg add.ttir, which reports an error (screenshot attached in the original issue).
xuel0707 commented 4 months ago

@sethbrin please have a look and fix it!

hesse-x commented 4 months ago

You need to add the -split-input-file option; this file contains more than one func with that name, so the option is required.
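
As an illustration of how -split-input-file works: mlir-opt-style tools such as triton-linalg-opt split the input at `// -----` separators and verify each chunk as an independent module, so the same symbol name can legally appear once per chunk. The following is a minimal sketch of such a test file; the function bodies are placeholders for illustration only, not the actual contents of test/Dialect/LinalgExt/ops.mlir.

```mlir
// RUN: triton-linalg-opt --convert-triton-to-linalg -split-input-file %s
// Parsed as one file, the second @transpose_memref would be a symbol
// redefinition; with -split-input-file each chunk is verified on its own.

func.func @transpose_memref(%arg0: memref<4x8xf32>) -> memref<4x8xf32> {
  return %arg0 : memref<4x8xf32>
}

// -----

// A second test case that deliberately reuses the same symbol name.
func.func @transpose_memref(%arg0: memref<8x4xf32>) -> memref<8x4xf32> {
  return %arg0 : memref<8x4xf32>
}
```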

xuel0707 commented 4 months ago

You need to add the -split-input-file option; this file contains more than one func with that name, so the option is required.

OK, thanks.

xuel0707 commented 4 months ago
  1. Run python3 ./triton-mlu-examples/add.py to generate add.ttir under $HOME/.triton/cache/eb94d36b18eabb943d62528aa76a9b05;
  2. Run triton-linalg-opt --convert-triton-to-linalg add.ttir, which reports an error (screenshot attached in the original issue).

@Artorias123 can you have a try? The content of add.ttir is as follows:

    module {
      tt.func public @add_0d1d2d3d(%arg0: !tt.ptr {tt.divisibility = 16 : i32}, %arg1: !tt.ptr {tt.divisibility = 16 : i32}, %arg2: !tt.ptr {tt.divisibility = 16 : i32}, %arg3: i32 {tt.divisibility = 16 : i32}) attributes {noinline = false} {
        %c1024_i32 = arith.constant 1024 : i32
        %0 = tt.get_program_id x : i32
        %1 = arith.muli %0, %c1024_i32 : i32
        %2 = tt.make_range {end = 1024 : i32, start = 0 : i32} : tensor<1024xi32>
        %3 = tt.splat %1 : (i32) -> tensor<1024xi32>
        %4 = arith.addi %3, %2 : tensor<1024xi32>
        %5 = tt.splat %arg3 : (i32) -> tensor<1024xi32>
        %6 = arith.cmpi slt, %4, %5 : tensor<1024xi32>
        %7 = tt.splat %arg0 : (!tt.ptr) -> tensor<1024x!tt.ptr>
        %8 = tt.addptr %7, %4 : tensor<1024x!tt.ptr>, tensor<1024xi32>
        %9 = tt.load %8, %6 {cache = 1 : i32, evict = 1 : i32, isVolatile = false} : tensor<1024xf32>
        %10 = tt.splat %arg1 : (!tt.ptr) -> tensor<1024x!tt.ptr>
        %11 = tt.addptr %10, %4 : tensor<1024x!tt.ptr>, tensor<1024xi32>
        %12 = tt.load %11, %6 {cache = 1 : i32, evict = 1 : i32, isVolatile = false} : tensor<1024xf32>
        %13 = arith.addf %9, %12 : tensor<1024xf32>
        %14 = tt.splat %arg2 : (!tt.ptr) -> tensor<1024x!tt.ptr>
        %15 = tt.addptr %14, %4 : tensor<1024x!tt.ptr>, tensor<1024xi32>
        tt.store %15, %13, %6 {cache = 1 : i32, evict = 1 : i32} : tensor<1024xf32>
        tt.return
      }
    }

hesse-x commented 4 months ago

Sorry, I can't see your error message. I copied this IR directly and tested it here; the failure I get is an IR format error. It seems this IR was not generated by the Triton version this project depends on? Triton's native IR has changed quite a bit across versions, so IR produced by a different version may not be recognized correctly.

xuel0707 commented 4 months ago

Sorry, I can't see your error message. I copied this IR directly and tested it here; the failure I get is an IR format error. It seems this IR was not generated by the Triton version this project depends on? Triton's native IR has changed quite a bit across versions, so IR produced by a different version may not be recognized correctly.

OK, will close the issue.