microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

How to run inference using only the CPU #1942

Closed WilliamZhaoz closed 4 years ago

WilliamZhaoz commented 4 years ago

I run inference with my model in onnxruntime on a device that has both a CPU and a GPU. I want it to use only the CPU so I can test CPU performance, but it always uses the GPU automatically. How can I make it use only the CPU?

snnn commented 4 years ago

Are you using the python API or C/C++ API?

WilliamZhaoz commented 4 years ago

I use the Python API.

snnn commented 4 years ago

Please build onnxruntime from source code without enabling CUDA.

https://github.com/microsoft/onnxruntime/blob/master/BUILD.md
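
(For what it's worth, CUDA is opt-in in the build script: it is only compiled in if you pass the --use_cuda flag, so a plain ./build.sh --config Release --build_wheel should give you a CPU-only build. Flag names as I recall them from BUILD.md.)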

WilliamZhaoz commented 4 years ago

If I build from source this way, can I still use the Python API, since that's what I'm using now? After I build from source, how do I install and use the Python API?

Thanks.

jywu-msft commented 4 years ago

You can build a Python wheel from source (use the --build_wheel option when invoking build.sh) and install it.

In master, there's a new Python API to force CPU execution even if the GPU build is enabled:

    import onnxruntime as ort

    sess = ort.InferenceSession('model.onnx')
    # if gpu is enabled, get_providers() should return
    # ['CUDAExecutionProvider', 'CPUExecutionProvider']
    sess.get_providers()
    # force cpu execution
    sess.set_providers(['CPUExecutionProvider'])
    sess.run(...)
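
Spelled out end to end, the flow looks something like this (a minimal sketch; the model path, input name, shape, and dtype are placeholders for whatever your model actually expects):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('model.onnx')
    print(sess.get_providers())  # on a GPU build, CUDA is listed first

    # force everything onto the CPU
    sess.set_providers(['CPUExecutionProvider'])

    # hypothetical input tensor; match your model's real input
    input_name = sess.get_inputs()[0].name
    x = np.zeros((1, 3, 224, 224), dtype=np.float32)
    outputs = sess.run(None, {input_name: x})  # None = return all outputs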

faxu commented 4 years ago

Any reason to not just use the CPU-only package? https://pypi.org/project/onnxruntime/
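
(The CPU-only package is installed with pip install onnxruntime; the CUDA-enabled build is published separately as onnxruntime-gpu.)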

jywu-msft commented 4 years ago

> Any reason to not just use the CPU-only package? https://pypi.org/project/onnxruntime/

There are multiple options; it depends on what the OP is trying to accomplish. If they just want to test CPU, then yes, installing the CPU-only package from PyPI is probably easiest.

If the OP wants to test CPU vs. GPU using a single package, they can use the runtime APIs described above to select between them. Another option is to install the CUDA and CPU packages from PyPI in separate conda environments and switch between them (see the note below).
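
(Concretely, with arbitrary environment names: create two conda environments, run pip install onnxruntime in one and pip install onnxruntime-gpu in the other, then conda activate whichever build you want to benchmark.)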

WilliamZhaoz commented 4 years ago

Thank you @George Wu and @Faith Xu,

I'll try your solution, George, thanks. BTW, I just installed from the PyPI link you shared, Faith, but when I run inference with my model on a CPU+GPU device, I can see it running on both the CPU and the GPU, and I don't know why. I only want to run inference on the CPU.

Thanks.

jywu-msft commented 4 years ago

That doesn't sound right: installing the CPU package from PyPI but seeing the model run on the GPU. Can you share the output of the following commands:

    import onnxruntime
    onnxruntime.__file__      # which installed copy is actually imported
    onnxruntime.get_device()  # 'CPU' for the CPU-only package, 'GPU' for onnxruntime-gpu

> I'll try your solution, George, thanks. BTW, I just installed from the PyPI link you shared, Faith, but when I run inference with my model on a CPU+GPU device, I can see it running on both the CPU and the GPU, and I don't know why. I only want to run inference on the CPU.

WilliamZhaoz commented 4 years ago

Hi George, I got the following: [image: Screen Shot 2019-10-04 at 1.08.43 AM.png] But when I run inference with my model in onnxruntime, I can see my GPU working. Thanks.

jywu-msft commented 4 years ago

I can't see your screenshot; can you paste it as text? Are you using nvidia-smi to monitor GPU usage?
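
(Worth noting: nvidia-smi's process table lists the PID and GPU memory of every process currently using the GPU, so you can check whether the activity actually belongs to the Python process running onnxruntime or to something else on the machine.)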

WilliamZhaoz commented 4 years ago

The output is:

    Python 3.7.4 (default, Aug 13 2019, 20:35:49)
    [GCC 7.3.0] :: Anaconda, Inc. on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import onnxruntime
    >>> onnxruntime.file
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: module 'onnxruntime' has no attribute 'file'
    >>> onnxruntime.get_device()
    'CPU'
    >>> onnxruntime.file()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: module 'onnxruntime' has no attribute 'file'

Yes, I'm using nvidia-smi. When I run inference with the model, I see the GPU working, and when the inference process is done, the GPU goes idle again.

jywu-msft commented 4 years ago

onnxruntime.__file__ not onnxruntime.file

jywu-msft commented 4 years ago

And nvidia-smi is showing GPU utilization from the same onnxruntime Python process, not something else?

You don't have multiple versions of onnxruntime installed?
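
A quick way to check both in one go (this just prints what the interpreter actually resolves):

    import onnxruntime
    print(onnxruntime.__file__)      # path of the copy actually imported
    print(onnxruntime.__version__)   # its version
    print(onnxruntime.get_device())  # 'CPU' for the CPU-only wheel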

WilliamZhaoz commented 4 years ago

Hi George Wu, sorry for the late reply due to the National Day vacation. The outputs are:

    >>> onnxruntime.get_device()
    'CPU'
    >>> onnxruntime.__file__
    '/home/zhiyuan/anaconda3/envs/psnet/lib/python3.7/site-packages/onnxruntime/__init__.py'

And yes, nvidia-smi shows my GPU working only when I run the onnxruntime Python process, not anything else.

WilliamZhaoz commented 4 years ago

Hi George Wu, I double-checked my code and found that it was a data preprocessing step that used the GPU. Sorry for the confusion, and thanks for your patience.

jywu-msft commented 4 years ago

Thanks for confirming there's no problem. Will close this issue now.