Closed — maondra closed this issue 5 years ago
Can you attach the model for debugging?
It should be possible to reproduce this issue using the script from https://pypi.org/project/onnxruntime-gpu/
We should have separate examples in the onnxruntime-gpu and onnxruntime PyPI project descriptions. Currently, CUDA/GPU support covers NN models, not traditional ML models (e.g. scikit-learn). You can confirm that the GPU package does indeed work by using an NN model from the ONNX Model Zoo.
@maondra do you still require assistance on this issue? Here's an example of inferencing on GPU with TensorRT using Azure Machine Learning: https://github.com/microsoft/onnxruntime/blob/master/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb
If you just want CUDA, you can install the GPU version from PyPI.
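One quick sanity check after installing is confirming that Python can find the module at all. This is a hypothetical sketch, not an official diagnostic: it only tests importability, and since the CPU and GPU wheels both expose the same `onnxruntime` module name (so having both installed at once is a known source of confusion), it may be safest to uninstall the plain `onnxruntime` package before installing `onnxruntime-gpu`.

```python
# Sketch: verify that the "onnxruntime" module is importable after installing
# the onnxruntime-gpu wheel. Both the CPU and GPU wheels expose the same
# module name, so use `pip show onnxruntime-gpu` to tell the packages apart.
import importlib.util

def module_available(name: str) -> bool:
    """Return True if a module with this name can be found on sys.path."""
    return importlib.util.find_spec(name) is not None

print("onnxruntime importable:", module_available("onnxruntime"))
```

If this prints `False`, the problem is the environment (wrong interpreter or virtualenv) rather than ONNX Runtime itself.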
ONNXRuntime doesn't use GPU for inference
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
ONNX Runtime installed from (source or binary): Pypi
ONNX Runtime version: onnxruntime-gpu 0.4.0
Python version: 3.6
CUDA/cuDNN version: 9.1/7.1
GPU model and memory: NVIDIA Quadro P1000 and P2200
To Reproduce: run your example (https://pypi.org/project/onnxruntime-gpu/) on a notebook with a mobile NVIDIA Quadro P1000 or P2200 GPU.
Expected behavior
onnxruntime.get_device() returns GPU, but inference runs on the CPU. I expected ONNX Runtime to run on the GPU, not the CPU.
Is it possible that ONNX Runtime decides not to use the GPU?
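The mismatch I'm seeing can be narrowed down with a short check. This is a sketch under two assumptions: the onnxruntime Python package is importable, and `get_device()` (present in the 0.4.x Python API) reports which device the installed build supports, not where a particular session actually executed.

```python
# Sketch: report which device the installed ONNX Runtime build targets.
# "GPU" means a CUDA-enabled build is installed; it does not guarantee
# that a given InferenceSession actually executed on the GPU.
try:
    import onnxruntime as ort
    device = ort.get_device()
except ImportError:
    device = None  # onnxruntime is not installed in this environment

print("onnxruntime build device:", device)
```

In my case this reports GPU, yet CPU utilization during inference suggests the GPU is not being used.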