microsoft / onnxruntime


Mismatch in results for TensorRT session and CUDA session #20986

Open · akmalmasud96 opened this issue 3 months ago

akmalmasud96 commented 3 months ago

Describe the issue

I am creating an ONNX Runtime session with the TensorRT execution provider. When I evaluate the model's outputs across the CPU, CUDA, and TensorRT sessions, the tightest tolerances (atol and rtol) at which the comparison passes are 1e-3.

However, when I test the same model with NVIDIA's polygraphy tool, the comparison passes with both atol and rtol set to 1e-5.

To reproduce

The following code is used for inference:

import numpy as np
import onnxruntime as ort

def main():
    # CUDA-only session
    providers_cuda = [
        ('CUDAExecutionProvider', {
            'device_id': 0,
            'cudnn_conv_algo_search': 'DEFAULT',
        }),
    ]
    # TensorRT session (falls back to CUDA for unsupported nodes)
    providers_trt = [
        ('TensorrtExecutionProvider', {
            'device_id': 0,                       # select GPU to execute on
            'trt_engine_cache_enable': True,
            'trt_engine_cache_path': 'trt_models/',
        }),
        ('CUDAExecutionProvider', {
            'device_id': 0,
            'cudnn_conv_algo_search': 'DEFAULT',
        }),
    ]

    model_path = "./onnx_test_model/voxceleb_resnet293_LM.onnx"
    sess_options = ort.SessionOptions()
    sess_options.inter_op_num_threads = 1
    sess_options.intra_op_num_threads = 1
    session_cuda = ort.InferenceSession(model_path, sess_options=sess_options, providers=providers_cuda)
    session_trt = ort.InferenceSession(model_path, sess_options=sess_options, providers=providers_trt)

    # Compare outputs over a range of input lengths with random data
    for shape in range(860, 900, 10):
        test = np.random.randn(1, shape, 80).astype(np.float32)
        embedding_cuda = session_cuda.run(output_names=["embs"], input_feed={"feats": test})[0][0]
        embedding_trt = session_trt.run(output_names=["embs"], input_feed={"feats": test})[0][0]
        comparison_result = np.allclose(embedding_cuda, embedding_trt, rtol=1e-05, atol=1e-05)
        print("comparison_result", comparison_result)

if __name__ == "__main__":
    main()
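
Note: np.allclose only reports a pass/fail boolean. A small sketch that prints the actual error magnitudes instead can make it easier to see how far the outputs diverge (it reuses embedding_cuda and embedding_trt from inside the loop above):

# Inside the loop: report error magnitudes instead of a single boolean.
abs_err = np.abs(embedding_cuda - embedding_trt)
rel_err = abs_err / (np.abs(embedding_trt) + 1e-12)  # small epsilon avoids division by zero
print(f"max abs err: {abs_err.max():.3e}, max rel err: {rel_err.max():.3e}")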

The model can be downloaded from: https://huggingface.co/Wespeaker/wespeaker-voxceleb-resnet293-LM/blob/main/voxceleb_resnet293_LM.onnx

The polygraphy command is as follows:

polygraphy run onnx_test_model/voxceleb_resnet293_LM.onnx --trt --onnxrt --atol 1e-5 --rtol 1e-5 --input-shapes feats:[1,800,80]

I am using the Docker image nvcr.io/nvidia/tensorrt:24.05-py3.

Urgency

I need to resolve this as soon as possible; my deadline is this Friday.

Platform

Linux

OS Version

Ubuntu 22.04.4 LTS

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18

ONNX Runtime API

Python

Architecture

X86

Execution Provider

Default CPU, CUDA, TensorRT

Execution Provider Library Version

cuda_12.4.r12.4, TensorRT-10.0.1.6

jywu-msft commented 2 months ago

I haven't used polygraphy before, but it looks to me like the comparisons aren't exactly apples to apples here. For ONNX Runtime, you're comparing TensorRT EP vs. CUDA EP over a range of shapes ([1, 860, 80], [1, 870, 80], ...) with random data. For polygraphy, you're comparing ONNX Runtime CPU EP vs. TensorRT with the fixed shape [1, 800, 80], also with random data. In theory, if you feed the exact same input data and shape (not independently generated random data) to both ONNX Runtime's TensorRT EP and polygraphy's TensorRT backend, they should return the same output (assuming they use the same TensorRT version). Can you confirm that is the case? One way to set up that check is sketched below. +@kevinch-nv for any advice he can provide on using polygraphy.
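
A minimal sketch of that apples-to-apples check: capture one fixed input, run it through the ORT TensorRT EP session, and feed the identical array to polygraphy. This assumes your polygraphy version supports --data-loader-script; the file names fixed_input.npy and data_loader.py are illustrative.

import numpy as np
import onnxruntime as ort

# Generate one fixed input and save it so every runner sees identical data.
rng = np.random.default_rng(seed=0)
feats = rng.standard_normal((1, 800, 80)).astype(np.float32)
np.save("fixed_input.npy", feats)

# Run it through the ORT TensorRT EP session and save the output for comparison.
session_trt = ort.InferenceSession(
    "./onnx_test_model/voxceleb_resnet293_LM.onnx",
    providers=[("TensorrtExecutionProvider", {"device_id": 0}),
               ("CUDAExecutionProvider", {"device_id": 0})],
)
ort_out = session_trt.run(["embs"], {"feats": np.load("fixed_input.npy")})[0]
np.save("ort_trt_output.npy", ort_out)

Then a data_loader.py makes polygraphy consume the same array:

# data_loader.py -- used via:
#   polygraphy run onnx_test_model/voxceleb_resnet293_LM.onnx --trt --onnxrt \
#       --data-loader-script data_loader.py --atol 1e-5 --rtol 1e-5
import numpy as np

def load_data():
    # Yield one feed dict containing the saved fixed input.
    yield {"feats": np.load("fixed_input.npy")}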

akmalmasud96 commented 2 months ago

@jywu-msft

Thanks for replying. Let me explain in a bit more detail. Polygraphy uses its own ONNX Runtime CPU session to compare outputs against TensorRT. The input shape we provide is used to generate the same random input for both runners, and that comparison passes at 1e-5 for any input shape we choose.

When I create the TensorRT session through ONNX Runtime, the comparison against the ONNX Runtime CPU session does not pass. The same holds for the ONNX Runtime CUDA session compared against the CPU session. I suspect some optimization applied when the model runs on the GPU is reducing accuracy.

I also tested this within Polygraphy itself: I switched its ONNX Runtime session from the CPU provider to the GPU provider and compared it against Polygraphy's TensorRT runner. That comparison failed, while it passed with the CPU provider.

Additionally, I provided a dynamic shape profile when building the engine through ONNX Runtime, but saw no benefit.

I also compared the outputs with Torch. The closest match was between the ONNX Runtime CPU session and Polygraphy's TensorRT runner. Attached are the benchmarks I ran.

[attached image: benchmark results]
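
One hypothesis worth ruling out (a suggestion, not something confirmed in this thread): on Ampere or newer GPUs, cuBLAS/cuDNN and TensorRT can use TF32 math for float32 matmuls and convolutions by default, which often shows up as roughly 1e-3 level differences against a CPU baseline. Recent ONNX Runtime releases expose a use_tf32 option on the CUDA EP, so a quick experiment is:

import onnxruntime as ort

# Hypothesis check: disable TF32 on the CUDA EP and re-run the comparison.
# 'use_tf32' is a CUDA EP option in recent ORT releases; if the mismatch
# against the CPU session shrinks toward 1e-5, reduced-precision math
# (rather than a graph optimization) is the likely cause.
providers = [
    ("CUDAExecutionProvider", {
        "device_id": 0,
        "use_tf32": 0,  # force full FP32 for cuBLAS/cuDNN ops
    }),
]
session = ort.InferenceSession(
    "./onnx_test_model/voxceleb_resnet293_LM.onnx", providers=providers
)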

jywu-msft commented 2 months ago

@jywu-msft

Thanks for replying. Let me explain this in a bit more detail. Polygraphy uses its own CPU ONNXRuntime session to compare the output with TensorRT. The shape we provide for the input will also be used to generate the same random input for the CPU ONNXRuntime, and this is passing the test. We can provide any input shape we want.

When I use ONNXRuntime for creating the TensorRT session, it did not pass the test when comparing the output with the ONNXRuntime CPU session. The same goes for using the ONNXRuntime CUDA session compared to the CPU ONNXRuntime session; the test did not pass. I think some optimization is happening while the ONNX model is running on the GPU, which drops the accuracy.

I also tested this with Polygraphy. I changed the ONNXRuntime session from the CPU provider to the GPU provider in Polygraphy and compared it with Polygraphy's version of TensorRT. The results failed while they passed when I used the CPU provider for comparison.

Additionally, I provided the Dynamic Shape Profiling while converting the model using ONNXRuntime but did not get any benefit.

I also compared the output with Torch. The most similar results were between the ONNXRuntime CPU session and Polygraphy's version of TensorRT. Attached are the benchmarks I did.

Untitled

Thanks for the explanation. I'm going to reopen this. It's probably worth taking a closer look to see where the difference is coming from, e.g. whether some optimization pass in ORT changes the graph. +@chilo-ms, can you take a look when you have time?
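
A quick way to probe the graph-optimization theory (a sketch, not a confirmed fix): disable ORT's graph optimizations for both sessions and re-run the comparison. If the mismatch disappears, an optimization pass is implicated.

import onnxruntime as ort

# Disable all ORT graph optimizations to test whether an optimization
# pass is responsible for the mismatch; apply these options to both
# sessions in the repro script and re-run the comparison.
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
session = ort.InferenceSession(
    "./onnx_test_model/voxceleb_resnet293_LM.onnx",
    sess_options=sess_options,
    providers=["CUDAExecutionProvider"],
)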

akmalmasud96 commented 2 months ago

Hi @jywu-msft, any update on this?