Open SolveProb opened 1 month ago
Once you have created an onnxruntime.InferenceSession, there is a get_providers method that can be used.
Documentation: https://onnxruntime.ai/docs/api/python/api_summary.html#inferencesession
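For reference, a minimal Python sketch of that call (the model path model.onnx is just a placeholder):

```python
import onnxruntime as ort

# Request CUDA first; onnxruntime falls back to CPU if the CUDA provider
# cannot be registered when the session is created.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Lists the providers actually registered with this session, in priority order,
# e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] or just ['CPUExecutionProvider'].
print(sess.get_providers())
```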
Thank you for your kind reminder. I am using the C++ API and did not find the relevant function in the C++ documentation.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Hi, I am a newcomer to the community and have some questions to ask.
The phenomenon I encountered is that, for some reason, inference automatically switched from GPU to CPU. Specifically: from 0 to 60 minutes, GPU memory usage was about 500 MB, GPU utilization was about 50%, and CPU utilization was about 30%. After 60 minutes, GPU memory usage stayed at about 500 MB, GPU utilization dropped to 0%, and CPU utilization rose to 100%. Therefore, I now want to confirm which provider is used for each inference, monitor every run, and automatically switch back to the GPU provider if the last inference ran on the CPU.
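What I have in mind is roughly the following (a Python sketch only to make the goal concrete, since I could not find a C++ equivalent; the model path and helper are placeholders):

```python
import onnxruntime as ort

def make_session(model_path):
    # Placeholder helper: ask for CUDA first, with CPU as the fallback.
    return ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )

sess = make_session("model.onnx")

# Between inferences: check which providers this session has registered and
# rebuild it if the CUDA provider is missing from that list.
if "CUDAExecutionProvider" not in sess.get_providers():
    sess = make_session("model.onnx")
```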
After some searching, I found the GetAvailableProviders() function in the documentation, but the result it returns does not seem to accurately reflect the providers that can actually be used at run time.
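In other words (sketching with the Python API for brevity, since that is what I could test against the documentation), this call appears to report what the build supports rather than what the current machine can run right now:

```python
import onnxruntime as ort

# With the GPU build installed, this lists every provider compiled into the
# package (e.g. CUDAExecutionProvider), even on a machine where CUDA cannot
# actually be used at run time.
print(ort.get_available_providers())
```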
I checked issue 486 and the documentation, but still could not find the interface I am looking for.
I am currently using onnxruntime 1.12.1 with the C++ API.
Thanks very much.