summelon opened this issue 4 months ago
I think TRT 9.x is a transition version, mainly for LLMs.
A kgen node is also part of the Myelin compile result.
Thanks for your reply! So 8.6.3 is the latest stable version for vision models before major version 10. Do you know which pull request addresses the GEMM error issue in version 10?
Try adding `--builderOptimizationLevel=5`.
@summelon, for both versions.
Hi @lix19937. I tried `polygraphy run decoder.onnx --trt --onnxrt --input-shapes image_embeddings:[1,256,64,64]`, with and without `--builder-optimization-level 5`. The difference did not change and is still significant on 10.0.1.
Description
I observed a significant difference in GEMM output between ONNX (opset 18 + ort 1.18.0 + CPU) and TensorRT (10.0.1) results.
This only happens when the batch size of image_embeddings == 1 and the TRT version is >= 10.
It does not happen on TRT 8.6.3 with any batch size, nor on TRT >= 10 with batch size > 1.
I found that the TensorRT 10.1.0 release notes mention a known issue: "There is a known accuracy issue when the network contains two consecutive GEMV operations (that is, MatrixMultiply with gemmM or gemmN == 1). To workaround this issue, try padding the MatrixMultiply input to have dimensions greater than 1." So I guess the fusion strategy differs among the situations above:
I used trex to visualize each converted engine. It seems the Myelin compiler applies different optimizations in the aforementioned situations: one case produces a single myelin node, another a kgen node, and another a kgen node for each GEMM.

My question is: what is the difference between a kgen node and a myelin node? Thanks in advance.
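The padding workaround quoted from the release notes can be illustrated with a small numpy sketch. The function name here is my own, not anything from TensorRT: the idea is simply to pad a MatMul input so that gemmM > 1 (turning the GEMV into a GEMM), then slice the real row back out.

```python
import numpy as np

def gemv_via_padded_gemm(x, w, pad_rows=1):
    """Compute x @ w where x is a (1, K) row vector, but route it through
    a padded (1 + pad_rows, K) x (K, N) GEMM instead of a GEMV.

    This mirrors the release-note suggestion: pad the MatrixMultiply
    input so its M dimension is greater than 1."""
    assert x.shape[0] == 1
    # Append zero rows so the M dimension exceeds 1.
    pad = np.zeros((pad_rows, x.shape[1]), dtype=x.dtype)
    x_padded = np.concatenate([x, pad], axis=0)
    y_padded = x_padded @ w      # now a (1 + pad_rows, N) GEMM
    return y_padded[:1]          # discard the padding rows

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)
w = rng.standard_normal((256, 64)).astype(np.float32)

# Row 0 is computed from the same dot products either way.
assert np.allclose(gemv_via_padded_gemm(x, w), x @ w)
```

In an ONNX graph the same trick would be a Concat (or Pad) before the MatMul and a Slice after it; numerically the real row is unaffected, but the builder no longer sees a gemmM == 1 MatrixMultiply.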
Environment
PyTorch docker image 24.05 from NGC
TensorRT Version: TensorRT 10.0.1.6
NVIDIA GPU: NVIDIA GeForce RTX 3090
NVIDIA Driver Version: 555.42.02
CUDA Version: 12.4.1
CUDNN Version: 9.1.0.70
Operating System: Ubuntu 22.04.4 LTS
Python Version: 3.10.12
Tensorflow Version: N/A
PyTorch Version: 2.4.0a0+07cecf4168.nv24.05
Baremetal or Container: nvcr.io/nvidia/pytorch:24.05-py3
Relevant Files
Model link: I think you can reproduce the issue based on any SAM decoder. The exported ONNX from here may work: SAM ONNX from AnyLabeling
Steps To Reproduce
Commands or scripts: `polygraphy run decoder.onnx --trt --onnxrt --input-shapes image_embeddings:[1,256,64,64]`
Have you tried the latest release?: No, as this is mentioned as a known issue in the latest release notes.
Can this model run on other frameworks? For example, run the ONNX model with ONNX Runtime (`polygraphy run <model.onnx> --onnxrt`): Yes
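When the two runners disagree, it can help to quantify the mismatch on the dumped outputs yourself. A minimal sketch of the kind of elementwise summary a comparison tool reports; the helper name and epsilon are my own, not polygraphy's:

```python
import numpy as np

def summarize_diff(a, b, eps=1e-7):
    """Return the max absolute and max relative difference between
    two output tensors (e.g. ONNX Runtime vs. TensorRT results)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    abs_diff = np.abs(a - b)
    rel_diff = abs_diff / (np.abs(b) + eps)  # eps guards against division by zero
    return {"max_abs": float(abs_diff.max()), "max_rel": float(rel_diff.max())}

# Identical tensors give zero error on both metrics.
a = np.ones((1, 4), dtype=np.float32)
print(summarize_diff(a, a))  # -> {'max_abs': 0.0, 'max_rel': 0.0}
```

Running this on the bs == 1 output versus row 0 of a bs > 1 run of the same engine would make the GEMV-path discrepancy concrete.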