microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Performance] Can't get GPU speed-up when the exe is located in a path with Chinese characters #15678

Open zhanggd001 opened 1 year ago

zhanggd001 commented 1 year ago

Describe the issue

I can't get a GPU speed-up when the exe is located in a path containing Chinese characters, even though I set OrtSessionOptionsAppendExecutionProvider_CUDA.

When the path contains Chinese characters ("中文"), the inference time (with the GPU configuration) is about 140 ms, which is the same as the CPU inference time.

When I rename the folder ("zhongwen"), the exe gets the GPU speed-up and the inference time is about 17 ms.
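For reference, such a timing can be obtained with a wall-clock measurement around Session::Run (a minimal sketch; input_names, output_names, and input_tensor are placeholders, not from the original report):

```cpp
#include <chrono>
#include <iostream>

// Minimal wall-clock timing sketch around a single inference call.
// input_names / output_names / input_tensor are placeholder variables.
auto t0 = std::chrono::steady_clock::now();
auto outputs = session.Run(Ort::RunOptions{nullptr},
                           input_names, &input_tensor, 1,
                           output_names, 1);
auto t1 = std::chrono::steady_clock::now();
std::cout << std::chrono::duration<double, std::milli>(t1 - t0).count()
          << " ms" << std::endl;
```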

My GPU provider configuration is OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, m_param.deviceId), and all the parameters used are fixed in my program as:

```cpp
param.modelPath = "model.onnx";
param.deviceId = 0;
param.numWorkers = 2;
```

To reproduce

Place the exe in a path containing Chinese characters and create the session with OrtSessionOptionsAppendExecutionProvider_CUDA.

GPU/CUDA Environment: RTX 2060

Urgency

No response

Platform

Windows

OS Version

10

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.12.1-gpu

ONNX Runtime API

C++

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.6, GPU driver 511.23

Model File

No response

Is this a quantized model?

No

pranavsharma commented 1 year ago

I don't see onnxruntime_providers_cuda.dll in the folder. Without it, the CUDA EP won't be added to the session.
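One way to check whether the CUDA EP is visible to the runtime at all is to list the available providers (a minimal sketch, not from the original thread):

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>

// Print every execution provider this ONNX Runtime build can use;
// "CUDAExecutionProvider" should appear if the CUDA EP is available.
int main() {
    for (const std::string& provider : Ort::GetAvailableProviders()) {
        std::cout << provider << std::endl;
    }
    return 0;
}
```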

zhanggd001 commented 1 year ago

> I don't see onnxruntime_providers_cuda.dll in the folder. Without it, the CUDA EP won't be added to the session.

Thanks for your reply. Actually, the problem still exists with onnxruntime_providers_cuda.dll in the folder.

zhanggd001 commented 1 year ago

My initialization code is:

```cpp
// Load the model file into a memory buffer.
std::string modelPath = "mymodel.onnx";
FILE* pModelFile = fopen(modelPath.c_str(), "rb");
if (NULL == pModelFile) {
    std::cout << "Error, Opening Model from " << modelPath << " Failed!" << std::endl;
    return -1;
}

fseek(pModelFile, 0, SEEK_END);
long length = ftell(pModelFile);
fseek(pModelFile, 0, SEEK_SET);
if (length <= 0) {
    std::cout << "Error, Invalid Model. Model Size = 0!" << std::endl;
    fclose(pModelFile);
    return -1;
}
char* modelbuffer = new char[length];

fread(modelbuffer, 1, length, pModelFile);
fclose(pModelFile);

m_ortSession = nullptr;
m_ortEnv = Ort::Env(ORT_LOGGING_LEVEL_WARNING);

// Session options.
Ort::SessionOptions session_options;
session_options.SetIntraOpNumThreads(4);
session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);

// Set the running platform (CUDA execution provider, device 0).
OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0);

// Create the session from the in-memory model bytes.
try {
    m_ortSession = new Ort::Session(m_ortEnv, modelbuffer, length, session_options);
} catch (const std::exception& e) {
    std::cout << "Error, Session Init Failed! " << e.what() << std::endl;
    delete[] modelbuffer;
    return -1;
}

delete[] modelbuffer;
```
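(Aside: OrtSessionOptionsAppendExecutionProvider_CUDA returns an OrtStatus* that the code above discards, so a failure to load the CUDA provider would pass silently. A minimal sketch of checking it, not from the original post:)

```cpp
// Check the status returned by the CUDA EP registration instead of
// discarding it; a non-null status carries an error message.
OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0);
if (status != nullptr) {
    const OrtApi& api = Ort::GetApi();
    std::cout << "CUDA EP not added: " << api.GetErrorMessage(status) << std::endl;
    api.ReleaseStatus(status);
}
```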

RyanUnderhill commented 1 year ago

Can you try a later version of ONNX Runtime? We added support for Unicode paths on Windows at the very end of last year, so a newer version should work.
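For context, Windows builds take the model path as a wide (UTF-16) string, so on a version with the fix a non-ASCII directory can be passed directly when creating the session; a minimal sketch (the path is illustrative):

```cpp
// On a release with Unicode path support, Windows builds take a
// wide-character model path, so a directory with Chinese characters
// should work. The path below is illustrative only.
Ort::Env env(ORT_LOGGING_LEVEL_WARNING);
Ort::SessionOptions session_options;
OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0);
Ort::Session session(env, L"C:\\中文\\mymodel.onnx", session_options);
```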