NVIDIA / TensorRT-Incubator

Issue with the `tensorrt.slice` mlir canonicalizer #366

Closed: matthewfl closed this issue 1 week ago

matthewfl commented 1 week ago

The following MLIR fails in the canonicalizer. If I disable the slice canonicalizer (link), I can work around the issue. The issue seems to be that the canonicalizer does not propagate the shape information correctly.

// RUN: tensorrt-opt %s -split-input-file -canonicalize | FileCheck %s

func.func @tensorrt_slice(%arg0: tensor<?x8x151x151xf32>) -> tensor<?x151x151xf32> {
    %cst_i32_159 = tensorrt.constant dense<[0, 7, 0, 0]> : tensor<4xi32>
    %cst_i32_167 = tensorrt.constant dense<1> : tensor<4xi32>
    %cst_i32_168 = tensorrt.constant dense<[1, 151, 151]> : tensor<3xi32>
    %cst_i32_312 = tensorrt.constant dense<0> : tensor<1xi32>

    %1131 = tensorrt.shape %arg0 : tensor<?x8x151x151xf32> -> tensor<4xi32>
    %1132 = tensorrt.gather {axis = 0 : i64} ins(%1131, %cst_i32_312 : tensor<4xi32>, tensor<1xi32>) -> tensor<1xi32>
    %1287 = tensorrt.concatenation {axis = 0 : i32} ins(%1132, %cst_i32_168 : tensor<1xi32>, tensor<3xi32>) -> tensor<4xi32>
    %1288 = tensorrt.slice %arg0[%cst_i32_159: tensor<4xi32>][%1287: tensor<4xi32>][%cst_i32_167: tensor<4xi32>] : tensor<?x8x151x151xf32> to tensor<?x1x151x151xf32>
    %1289 = tensorrt.collapse_rank %1288 : tensor<?x1x151x151xf32> to tensor<?x151x151xf32>
    return %1289 : tensor<?x151x151xf32>
}
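
For context, the guard I would expect in a pattern like this is to bail out whenever the result shape is not fully static, since the size operand (%1287 above) is computed from the runtime shape. Below is only a minimal C++ sketch of that idea; it is not the actual TensorRT-Incubator pattern, and the pattern class name is a placeholder.

// Minimal C++ sketch (hypothetical pattern, not the actual TensorRT-Incubator
// code) of the guard described above: refuse to rewrite when the result shape
// is not fully static, because the size operand then carries runtime shape
// information that a static-shape rewrite would lose.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

namespace {
// "tensorrt.slice" is matched by name to keep the sketch self-contained;
// the real pattern would use the generated op class instead.
struct SliceCanonicalizerSketch : public RewritePattern {
  SliceCanonicalizerSketch(MLIRContext *ctx)
      : RewritePattern("tensorrt.slice", /*benefit=*/1, ctx) {}

  LogicalResult matchAndRewrite(Operation *op,
                                PatternRewriter &rewriter) const override {
    if (op->getNumResults() != 1)
      return failure();
    auto resultType = dyn_cast<RankedTensorType>(op->getResult(0).getType());
    if (!resultType)
      return failure();

    // Guard: with a dynamic leading dimension (tensor<?x1x151x151xf32> in the
    // repro), the simplification cannot be proven from static shapes alone.
    if (!resultType.hasStaticShape())
      return rewriter.notifyMatchFailure(
          op, "result shape is dynamic; skipping simplification");

    // ... the actual rewrite would go here ...
    return failure();
  }
};
} // namespace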
christopherbate commented 1 week ago

I think this was recently fixed, likely as part of the last 2-3 "internal change integrations" we merged; let me confirm.

christopherbate commented 1 week ago

I confirmed that this was recently fixed by https://github.com/NVIDIA/TensorRT-Incubator/commit/16303566158ccb70ee44460d0d3bb082965692c8; see the discussion of the tensorrt.slice canonicalizer in the commit message body.