TexasInstruments / edgeai-tidl-tools

Edgeai TIDL Tools and Examples - This repository contains tools and examples developed for the Deep Learning Runtime (DLRT) offering provided by TI's edge AI solutions.

In custom-model-onnx.ipynb, Inferences Per Second of resnet18v2 is only 1.92 fps #9

Closed WanchaoYao closed 2 years ago

WanchaoYao commented 2 years ago

Hi, I want to know whether the 'Inferences Per Second' number is credible for TDA4VM or just a result of emulation, because I don't think the fps can be this low on the board...

Below is the output of the notebook:

[0] Indian elephant/Elephas maximus
[1] tusker
[2] African elephant/Loxodonta africana
[3] warthog
[4] water buffalo/water ox/Asiatic buffalo/Bubalus bubalis

Statistics :
Inferences Per Second    : 1.92 fps
Inference Time Per Image : 521.45 ms
DDR BW Per Image         : 0.00 M

kumardesappan commented 2 years ago

Hi, could you share more details on the environment used for this test? Is it a TDA4VM EVM or an X86_64 PC? The inference time for this model on the EVM is expected to be much shorter. If TDA4VM, is debug mode enabled for this execution by any chance?

paulacarrillo commented 2 years ago

Hi WanchaoYao, specifically, please check that debug_level is < 2 in compile_options. If that is the case and you are running on the EVM, please send me the steps you used so I can reproduce it. We can continue in the E2E thread if you would prefer: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1075326/tda4vm-emulation-frame-rate-issue/3980193#3980193
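
For reference, a minimal sketch of how debug_level typically appears in the notebook's compile options; apart from debug_level, the key names and values below are assumptions based on the repository's example notebooks:

```python
import os

# Compile options passed to the TIDL compilation step in the notebook.
# Only 'debug_level' is discussed in this thread; the other keys are assumed
# to follow the edgeai-tidl-tools example notebooks.
compile_options = {
    'tidl_tools_path': os.environ.get('TIDL_TOOLS_PATH', ''),  # assumed key
    'artifacts_folder': 'custom-artifacts/onnx/resnet18v2',     # assumed path
    'tensor_bits': 8,                                           # assumed key
    'debug_level': 0,  # keep this < 2; higher values add verbose tracing overhead
}
```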

WanchaoYao commented 2 years ago

Hi, could you share more details on the environment used for this test? Is it a TDA4VM EVM or an X86_64 PC? The inference time for this model on the EVM is expected to be much shorter. If TDA4VM, is debug mode enabled for this execution by any chance?

I was running on an X86_64 PC. The notebook is custom-model-onnx.ipynb. According to the README, custom model compilation and inference are supported only in PC emulation.
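
For context, this is roughly what the PC-emulation compile/infer flow in the notebook looks like; the TIDL provider name and option keys below are assumptions based on the repository's example notebooks (which require TI's onnxruntime build), not something confirmed in this thread:

```python
import onnxruntime as rt

# Sketch of compiling the custom model on an X86_64 PC via the TIDL
# compilation provider, falling back to CPU for unsupported layers.
compile_options = {'debug_level': 0}  # plus the other options shown above

so = rt.SessionOptions()
sess = rt.InferenceSession(
    'resnet18v2.onnx',                      # hypothetical model path
    providers=['TIDLCompilationProvider', 'CPUExecutionProvider'],
    provider_options=[compile_options, {}],
    sess_options=so,
)
# Calling sess.run(...) on this session executes in emulation on the PC, so
# the reported fps and inference time do not reflect TDA4VM performance.
```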

WanchaoYao commented 2 years ago

Hi WanchaoYao, specifically, please check that debug_level is < 2 in compile_options. If that is the case and you are running on the EVM, please send me the steps you used so I can reproduce it. We can continue in the E2E thread if you would prefer: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1075326/tda4vm-emulation-frame-rate-issue/3980193#3980193

Yes, in custom-model-onnx.ipynb the debug_level is indeed 3. But I changed it to 0 and reran, and it seemingly has no influence on the fps. So I wonder whether the fps is meaningful in X86_64 PC emulation mode.

paulacarrillo commented 2 years ago

WanchaoYao, if you want to check fps and other KPIs, you can use our TI EdgeAI cloud for a quick evaluation, profiling, and initial development. No time is needed to set up any HW or SW.

TI EdgeAI cloud is a hybrid cloud where you can run the same notebooks as in the GitHub edgeai-tidl-tools repository (or pretty close versions; edgeai-tidl-tools is sometimes a bit ahead, so they are not always in sync, but most of the time they are).

How it works, in a few words: compilation happens in a Docker container, and inference runs on an EVM assigned to you from an EVM farm.

https://dev.ti.com/edgeaisession/

WanchaoYao commented 2 years ago

WanchaoYao, if you want to check fps and other KPIs, you can use our TI EdgeAI cloud for a quick evaluation, profiling, and initial development. No time is needed to set up any HW or SW.

TI EdgeAI cloud is a hybrid cloud where you can run the same notebooks as in the GitHub edgeai-tidl-tools repository (or pretty close versions; edgeai-tidl-tools is sometimes a bit ahead, so they are not always in sync, but most of the time they are).

How it works, in a few words: compilation happens in a Docker container, and inference runs on an EVM assigned to you from an EVM farm.

https://dev.ti.com/edgeaisession/

Wow, thank you for your response. I tried it, and it works fine!