vietanhdev / anylabeling

Effortless AI-assisted data labeling with AI support from YOLO, Segment Anything, MobileSAM!!
https://anylabeling.nrl.ai
GNU General Public License v3.0

Encountering an onnxruntime crash when using TensorRT to run a quantized ONNX model #88

Open summelon opened 1 year ago

summelon commented 1 year ago

Hi, thanks for your great work.

I am trying to improve the performance of anylabeling when a GPU and the TensorRT backend are available.

I took the following steps:

  1. Download your ViT-B quantized ONNX model
  2. Run shape inference on the model with symbolic_shape_infer.py, as described in the official document
  3. Enable "trt_int8_enable" in the TensorRT execution provider options (see the sketch below)
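
For reference, here is a minimal sketch of steps 2 and 3. The model path and cache directory are placeholders for my local setup; the symbolic_shape_infer.py invocation and the TensorRT provider options follow the ONNX Runtime documentation.

```python
# Step 2 (shell): run symbolic shape inference first, e.g.
#   python symbolic_shape_infer.py --input encoder.quant.onnx \
#       --output encoder.quant.shaped.onnx --auto_merge
import onnxruntime as ort

model_path = "encoder.quant.shaped.onnx"  # placeholder path

trt_options = {
    "trt_int8_enable": True,          # step 3: enable INT8 mode
    "trt_engine_cache_enable": True,  # cache built engines between runs
    "trt_engine_cache_path": "./trt_cache",
}

session = ort.InferenceSession(
    model_path,
    providers=[
        ("TensorrtExecutionProvider", trt_options),
        "CUDAExecutionProvider",  # fallback for nodes TensorRT cannot take
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())
```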

But I hit the following error:

2023-05-26 08:27:12.631758183 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1210 GetCapability] [TensorRT EP] No graph will run on TensorRT execution provider
2023-05-26 08:27:13.136179864 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-05-26 08:27:13.136197010 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
(2048, 2048, 3)
2023-05-26 08:27:13.995464756 [E:onnxruntime:Default, cuda_call.cc:119 CudaCall] CUDA failure 1: invalid argument ; GPU=1 ; hostname=vision ; expr=cudaMemcpyAsync(output.MutableDataRaw(), input.DataRaw(), input.Shape().Size() * input.DataType()->Size(), cudaMemcpyDeviceToDevice, stream); 
2023-05-26 08:27:13.995665084 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Einsum node. Name:'/blocks.0/attn/Einsum' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/einsum_utils/einsum_auxiliary_ops.cc:298 std::unique_ptr<onnxruntime::Tensor> onnxruntime::EinsumOp::Transpose(const onnxruntime::Tensor&, const onnxruntime::TensorShape&, const gsl::span<const long unsigned int>&, onnxruntime::AllocatorPtr, void*, const Transpose&) 21Einsum op: Transpose failed: CUDA failure 1: invalid argument ; GPU=1 ; hostname=vision ; expr=cudaMemcpyAsync(output.MutableDataRaw(), input.DataRaw(), input.Shape().Size() * input.DataType()->Size(), cudaMemcpyDeviceToDevice, stream); 

terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
  what():  /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:124 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:117 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 700: an illegal memory access was encountered ; GPU=1 ; hostname=vision ; expr=cudaEventDestroy(event_); 

I saw that you manually filter out the TensorRT execution provider in anylabeling. Have you ever run into a similar issue? Thanks in advance.
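
For context, the filtering I am referring to looks roughly like this (a sketch of the pattern, not the exact anylabeling code):

```python
import onnxruntime as ort

# Drop the TensorRT execution provider before creating the session,
# so inference falls back to CUDA/CPU.
providers = [
    p for p in ort.get_available_providers()
    if p != "TensorrtExecutionProvider"
]
session = ort.InferenceSession("encoder.quant.onnx", providers=providers)
```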