xmba15 / onnx_runtime_cpp

small c++ library to quickly deploy models using onnxruntime
MIT License

ONNX Runtime inference time #37

Closed · yeluoo closed 1 year ago

yeluoo commented 1 year ago

I am running on GPU, and SuperGlue and SuperPoint take a long time to infer: 837 ms and 1234 ms respectively. What's the matter? The inference time is measured here:

```cpp
auto start_time1 = std::chrono::steady_clock::now();
auto outputTensors = m_session.Run(Ort::RunOptions{nullptr}, m_inputNodeNames.data(), inputTensors.data(),
                                   m_numInputs, m_outputNodeNames.data(), m_numOutputs);
auto end_time1 = std::chrono::steady_clock::now();
auto elapsed_time = std::chrono::duration_cast<std::chrono::microseconds>(end_time1 - start_time1).count();
std::cout << " Inference time: " << elapsed_time << " microseconds" << std::endl;
```
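
One caveat with timing a single `Run()` call: the first invocation often pays one-time costs (CUDA context setup, kernel selection, memory arena growth), so it can look much slower than steady-state inference. Below is a minimal sketch of a warm-up-plus-averaging measurement; `measureAverageInferenceMs` is a hypothetical helper, and the session and node-name arguments are assumed to match the members used in the snippet above.

```cpp
#include <chrono>
#include <iostream>
#include <onnxruntime_cxx_api.h>

// Hypothetical helper: run a few untimed warm-up inferences, then report the
// average latency (in milliseconds) over several timed runs.
double measureAverageInferenceMs(Ort::Session& session,
                                 const char* const* inputNames, Ort::Value* inputs, size_t numInputs,
                                 const char* const* outputNames, size_t numOutputs,
                                 int warmupRuns = 3, int timedRuns = 10)
{
    // Warm-up runs: excluded from timing so one-time initialization
    // costs do not skew the result.
    for (int i = 0; i < warmupRuns; ++i) {
        session.Run(Ort::RunOptions{nullptr}, inputNames, inputs, numInputs,
                    outputNames, numOutputs);
    }

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < timedRuns; ++i) {
        session.Run(Ort::RunOptions{nullptr}, inputNames, inputs, numInputs,
                    outputNames, numOutputs);
    }
    auto end = std::chrono::steady_clock::now();

    double totalMs = static_cast<double>(
        std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count());
    return totalMs / timedRuns;
}
```

If the steady-state average is still in the hundreds of milliseconds, the slowdown is not a measurement artifact and is worth investigating further.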