TexasInstruments / edgeai-tidl-tools

Edgeai TIDL Tools and Examples - This repository contains tools and examples developed for the Deep Learning Runtime (DLRT) offering provided by TI's edge AI solutions.

TIDLExecutionProvider Unavailable compiling YOLOv5 Ti Lite ONNX and PROTOTXT #30

Closed dpetersonVT23 closed 1 year ago

dpetersonVT23 commented 1 year ago

When I run onnxrt_ep.py in osrt_python I get the following error, how do I fix this? Why is TIDL not available? I am trying to compile my YOLOv5 Ti Lite ONNX and PROTOTXT for use on the Beaglebone AI.

Available execution providers : ['CPUExecutionProvider']

Running 1 Models - ['yolov5s6_640_ti_lite']

Running_Model : yolov5s6_640_ti_lite

/home/mm282681/miniconda3/envs/tidl-tools/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:56: UserWarning: Specified provider 'TIDLExecutionProvider' is not in available provider names. Available providers: 'CPUExecutionProvider'
  "Available providers: '{}'".format(name, ", ".join(available_provider_names)))
Process Process-1:
Traceback (most recent call last):
  File "/home/mm282681/miniconda3/envs/tidl-tools/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/mm282681/miniconda3/envs/tidl-tools/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "onnxrt_ep.py", line 190, in run_model
    sess = rt.InferenceSession(config['model_path'], providers=EP_list, provider_options=[delegate_options, {}], sess_options=so)
  File "/home/mm282681/miniconda3/envs/tidl-tools/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/mm282681/miniconda3/envs/tidl-tools/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 379, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: Unknown Provider Type: TIDLExecutionProvider
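One way to surface this earlier than the RuntimeError above is to check the registered providers before constructing the session. A minimal sketch (the helper below is hypothetical, not part of onnxrt_ep.py):

```python
# Hypothetical pre-flight check, not part of onnxrt_ep.py: verify that the
# TIDL providers are registered before creating an InferenceSession.
REQUIRED_TIDL_PROVIDERS = ["TIDLExecutionProvider", "TIDLCompilationProvider"]

def missing_providers(available, required=REQUIRED_TIDL_PROVIDERS):
    """Return the required providers absent from the runtime's list."""
    return [p for p in required if p not in available]

# Typical usage (requires the onnxruntime-tidl build of ONNX Runtime):
#   import onnxruntime as rt
#   missing = missing_providers(rt.get_available_providers())
#   if missing:
#       raise RuntimeError("TIDL build of onnxruntime not installed, "
#                          "missing providers: %s" % missing)
```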

kumardesappan commented 1 year ago

It looks like your installation is incomplete. After a successful installation, ONNX Runtime should report the following execution providers:

Available execution providers : ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']

But your log shows only Available execution providers : ['CPUExecutionProvider']

Please run the check below in your Python environment. If onnxruntime-tidl is not installed, I would recommend a fresh installation following the user guide.

pip3 show onnxruntime_tidl
Name: onnxruntime-tidl
Version: 1.7.0
Summary: ONNX Runtime is a runtime accelerator for Machine Learning models
Home-page: https://onnxruntime.ai
Author: Microsoft Corporation
Author-email: onnxruntime@microsoft.com
License: MIT License
Location: /home/a0393754/miniconda/envs/benchmark/lib/python3.6/site-packages
Requires: numpy, protobuf
Required-by:

dpetersonVT23 commented 1 year ago

I got the exact output shown when you ran pip show onnxruntime_tidl. Is there any other way I can confirm why I do not have the TIDL Execution/Compilation Providers?

I created a conda environment with Python 3.6. I am on a Linux PC with an Intel CPU, so I set the device to am62 and ran the setup script with no flags. Did I use the wrong device or miss a step in the setup? Could these be the cause of the issue I am seeing? Should I be using a different script to compile my YOLOv5 Ti Lite model for deployment from my Linux PC?

srpraman commented 1 year ago

The issue is that you have installed both onnxruntime-tidl (via setup.sh) and onnxruntime (via requirements.txt). Whenever you import onnxruntime, Python picks up the stock onnxruntime package by default, and that package exposes only one execution provider: ['CPUExecutionProvider']

(venv) hp]:app$ pip list | grep onnxruntime
onnxruntime             1.10.0
onnxruntime-gpu         1.10.0
onnxruntime-tidl        1.7.0

Earlier I had three variants of the onnxruntime package installed on my machine, and Python was importing the stock onnxruntime by default:

(venv) hp]:app$ python
Python 3.6.13 |Anaconda, Inc.| (default, Jun  4 2021, 14:25:59) 
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnxruntime as rt
>>> rt.get_available_providers()
['CPUExecutionProvider']
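The same check can be done from Python without shelling out to pip. A sketch using pkg_resources (bundled with setuptools, available on Python 3.6) to list installed onnxruntime distributions, mirroring the pip list output above:

```python
import pkg_resources  # ships with setuptools; available on Python 3.6

def installed_dists(prefix="onnxruntime"):
    """List installed distributions whose name starts with `prefix`,
    mirroring `pip list | grep onnxruntime`."""
    return sorted(d.project_name for d in pkg_resources.working_set
                  if d.project_name.lower().startswith(prefix.lower()))
```

With the conflicting packages present this would return all three names (onnxruntime, onnxruntime-gpu, onnxruntime-tidl); after the cleanup steps only onnxruntime-tidl should remain.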

Steps to overcome this:

  1. Uninstall every variant of onnxruntime present on the system: pip uninstall onnxruntime onnxruntime-gpu onnxruntime-tidl
  2. Run the setup.sh script; it creates the tidl tools folder and installs onnxruntime-tidl on your system.
  3. Once the above step completes, verify the installation with pip. You should now have all the required execution providers:
    (venv) hp]:app$ pip list | grep onnxruntime
    onnxruntime-tidl        1.7.0
    (venv) hp]:app$ python
    Python 3.6.13 |Anaconda, Inc.| (default, Jun  4 2021, 14:25:59) 
    [GCC 7.5.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import onnxruntime as rt
    >>> rt.get_available_providers()
    ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']
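With the providers in place, onnxrt_ep.py passes an EP_list when it builds the session. A hedged sketch of how such a list could be chosen, falling back to CPU-only when the TIDL build is absent (select_eps is a hypothetical helper, not the repository's code):

```python
def select_eps(available, compile_model=True):
    """Build a provider list for InferenceSession, preferring TIDL.

    compile_model=True selects the offline-compilation provider, False the
    inference provider; both fall back to CPU-only when the TIDL build of
    onnxruntime is not installed.
    """
    tidl = "TIDLCompilationProvider" if compile_model else "TIDLExecutionProvider"
    if tidl in available:
        return [tidl, "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# Typical usage (assumes the onnxruntime-tidl build is installed):
#   import onnxruntime as rt
#   EP_list = select_eps(rt.get_available_providers(), compile_model=True)
#   sess = rt.InferenceSession(model_path, providers=EP_list,
#                              provider_options=[delegate_options, {}])
```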

yurkovak commented 1 year ago

@srpraman setup.sh seems to install onnxruntime-tidl only on the PC. How do I get onnxruntime-tidl onto the chip itself?