TexasInstruments / edgeai-benchmark

This repository has moved. The new location is https://github.com/TexasInstruments/edgeai-tensorlab (see also https://github.com/TexasInstruments/edgeai).

YOLOv7 Custom Model Compilation Process #11

Open BJ-ZhaoXiaoyang opened 2 years ago

BJ-ZhaoXiaoyang commented 2 years ago

Hello,

The page linked below describes how to use the "run_custom_pc.sh" and "run_package_artifacts_evm.sh" scripts to compile a custom model, and how to modify benchmark_custom.py and settings_base.yaml: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/docs/custom_models.md

I have some questions about this process:

1. How do I set whether the model runs on the CPU, GPU, or DSP? Is this set in the settings_base.yaml file? How should it be set up?
2. How do I choose the model quantization, and will the quantized model automatically run on the DSP?
3. How should the "target_device" in settings_base.yaml be set? My TI board is the SK-TDA4VM.
4. I changed the pipeline_configs in benchmark_custom.py as shown below. I found that 'object_detection:meta_layers_names_list' in the session of every dict should be given a proto file, but my YOLOv7 ONNX model doesn't have a proto file. Is this a must?

'imagedet-7': dict(
            task_type='detection',
            calibration_dataset=imagedet_calib_dataset,
            input_dataset=imagedet_val_dataset,
            preprocess=preproc_transforms.get_transform_onnx(640, 640,  resize_with_pad=True, backend='cv2', pad_color=[114,114,114]),
            session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg, input_optimization=False, input_mean=(0.0, 0.0, 0.0), input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
                runtime_options=settings.runtime_options_onnx_np2(
                    det_options=True, ext_options={'object_detection:meta_arch_type': 6,
                     'advanced_options:output_feature_16bit_names_list':''
                     }),
                model_path=f'{settings.models_path}/self_model/yolov7-w6-pose.onnx'),
            postprocess=postproc_transforms.get_transform_detection_yolov5_onnx(squeeze_axis=None, normalized_detections=False, resize_with_pad=True, formatter=postprocess.DetectionBoxSL2BoxLS()), #TODO: check this
            metric=dict(label_offset_pred=datasets.coco_det_label_offset_80to90(label_offset=1)),
            model_info=dict(metric_reference={'accuracy_ap[.5:.95]%':37.4})
        ),

5. I modified benchmark_custom.py and settings_base.yaml and tried to run "run_custom_pc.sh". Then the error below occurred. Could you please help me analyze it?

find: ‘./work_dirs/modelartifacts/8bits/’: No such file or directory
TIDL_TOOLS_PATH=/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/tidl_tools
LD_LIBRARY_PATH=/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/tidl_tools
PYTHONPATH=:
===================================================================
/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/datasets/__init__.py:51: UserWarning: kitti_lidar_det could not be imported - No module named 'edgeai_benchmark.datasets.kitti_object_eval_python'
  warnings.warn(f'kitti_lidar_det could not be imported - {str(e)}')
work_dir = ./work_dirs/modelartifacts/8bits
packaged_dir = ./work_dirs/modelartifacts_package/8bits
loading annotations into memory...
Done (t=0.55s)
creating index...
index created!
loading annotations into memory...
Done (t=0.65s)
creating index...
index created!
configs to run: ['imagedet-7_onnxrt_models_self_model_yolov7-w6-pose_onnx']
number of configs: 1
TASKS                                                       |          |     0% 0/1| [< ]                                                   |   0%|          || 
INFO:20221111-182422: starting process on parallel_device - 0
INFO:20221111-182430: model_path - /home/zhaoxiaoyang/Documents/edgeai-modelzoo/models/self_model/yolov7-w6-pose.onnx
INFO:20221111-182430: model_file - /home/zhaoxiaoyang/Documents/edgeai-benchmark-master/work_dirs/modelartifacts/8bits/imagedet-7_onnxrt_models_self_model_yolov7-w6-pose_onnx/model/yolov7-w6-pose.onnx
Downloading 1/1: /home/zhaoxiaoyang/Documents/edgeai-modelzoo/models/self_model/yolov7-w6-pose.onnx
Download done for /home/zhaoxiaoyang/Documents/edgeai-modelzoo/models/self_model/yolov7-w6-pose.onnx

INFO:20221111-182435: running - imagedet-7_onnxrt_models_self_model_yolov7-w6-pose_onnx
INFO:20221111-182435: pipeline_config - {'task_type': 'detection', 'calibration_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7fbd66ec6d68>, 'input_dataset': <edgeai_benchmark.datasets.coco_det.COCODetection object at 0x7fbd56306128>, 'preprocess': <edgeai_benchmark.preprocess.PreProcessTransforms object at 0x7fbd6c64cf60>, 'session': <edgeai_benchmark.sessions.onnxrt_session.ONNXRTSession object at 0x7fbd50a923c8>, 'postprocess': <edgeai_benchmark.postprocess.PostProcessTransforms object at 0x7fbd50a92438>, 'metric': {'label_offset_pred': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16, 15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31, 27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43, 39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56, 51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72, 63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85, 75: 86, 76: 87, 77: 88, 78: 89, 79: 90, 80: 91}}, 'model_info': {'metric_reference': {'accuracy_ap[.5:.95]%': 37.4}}}
INFO:20221111-182435: import  - imagedet-7_onnxrt_models_self_model_yolov7-w6-pose_onnx
WARNING : 'meta_layers_names_list' is not provided - running OD post processing in ARM mode 

TIDL Meta PipeLine (Proto) File  :   

Number of OD backbone nodes = 0 
Size of odBackboneNodeIds = 0 

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_4

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_9

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_14

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_19

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_24

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_29

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_34

 Slice layer : Unsupported onnxOpSetVersion 12 -- Slice_39
Error : Unsupported Opset for Resize OP 
Resize layer delegated to ARM -- 'Resize_199' 
Error : Unsupported Opset for Resize OP 
Resize layer delegated to ARM -- 'Resize_230' 
Error : Unsupported Opset for Resize OP 
Resize layer delegated to ARM -- 'Resize_261' 

Preliminary subgraphs created = 13 
Final number of subgraphs created are : 13, - Offloaded Nodes - 473, Total Nodes - 503 
2022-11-11 18:24:37.909453792 [E:onnxruntime:, inference_session.cc:1311 operator()] Exception during initialization: basic_string::_M_create
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/zhaoxiaoyang/miniconda3/envs/python36/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/utils/parallel_run.py", line 132, in _worker
    return task()
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/pipeline_runner.py", line 134, in _run_pipeline
    accuracy_result = accuracy_pipeline(description)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 119, in __call__
    param_result = self._run(description=description)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 145, in _run
    self._import_model(description)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 199, in _import_model
    self._run_with_log(session.import_model, calib_data)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/accuracy_pipeline.py", line 299, in _run_with_log
    return func(*args, **kwargs)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/sessions/onnxrt_session.py", line 53, in import_model
    self.interpreter = self._create_interpreter(is_import=True)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/sessions/onnxrt_session.py", line 132, in _create_interpreter
    provider_options=[runtime_options, {}], sess_options=sess_options)
  File "/home/zhaoxiaoyang/miniconda3/envs/python36/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options)
  File "/home/zhaoxiaoyang/miniconda3/envs/python36/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 315, in _create_inference_session
    sess.initialize_session(providers, provider_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: basic_string::_M_create
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "./scripts/benchmark_custom.py", line 327, in <module>
    tools.run_accuracy(settings, work_dir, pipeline_configs)
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/tools/run_accuracy.py", line 80, in run_accuracy
    pipeline_runner.run()
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/pipeline_runner.py", line 79, in run
    return self._run_pipelines_parallel()
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/pipelines/pipeline_runner.py", line 113, in _run_pipelines_parallel
    results_list = parallel_exec.run()
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/utils/parallel_run.py", line 87, in run
    return self._run_parallel()
  File "/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/utils/parallel_run.py", line 107, in _run_parallel
    result = results_iterator.__next__(timeout=self.maxinterval)
  File "/home/zhaoxiaoyang/miniconda3/envs/python36/lib/python3.6/multiprocessing/pool.py", line 735, in next
    raise value
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: basic_string::_M_create
TASKS                                                       |   0%|          || 
-------------------------------------------------------------------
===================================================================
/home/zhaoxiaoyang/Documents/edgeai-benchmark-master/edgeai_benchmark/datasets/__init__.py:51: UserWarning: kitti_lidar_det could not be imported - No module named 'edgeai_benchmark.datasets.kitti_object_eval_python'
  warnings.warn(f'kitti_lidar_det could not be imported - {str(e)}')
settings: {'include_files': None, 'pipeline_type': 'accuracy', 'num_frames': 10000, 'calibration_frames': 50, 'calibration_iterations': 50, 'configs_path': './configs', 'models_path': '../edgeai-modelzoo/models', 'modelartifacts_path': './work_dirs/modelartifacts', 'datasets_path': './dependencies/datasets', 'target_device': None, 'target_machine': 'pc', 'run_suffix': None, 'parallel_devices': [0], 'tensor_bits': 8, 'runtime_options': None, 'run_import': True, 'run_inference': True, 'run_missing': True, 'detection_threshold': 0.3, 'detection_top_k': 200, 'detection_nms_threshold': None, 'detection_keep_top_k': None, 'save_output': False, 'model_selection': ['onnx'], 'model_exclusion': None, 'task_selection': 'detection', 'runtime_selection': ['onnxrt'], 'session_type_dict': {'onnx': 'onnxrt', 'tflite': 'tflitert', 'mxnet': 'tvmdlr'}, 'dataset_type_dict': {'imagenet': 'imagenetv2c'}, 'dataset_loading': ['coco'], 'config_range': None, 'enable_logging': True, 'verbose': False, 'experimental_models': False, 'rewrite_results': False, 'with_udp': False, 'flip_test': False, 'model_transformation_dict': None, 'report_perfsim': False, 'tidl_offload': True, 'input_optimization': False, 'run_dir_tree_depth': None, 'settings_file': 'settings_import_on_pc.yaml', 'basic_keys': ['include_files', 'pipeline_type', 'num_frames', 'calibration_frames', 'calibration_iterations', 'configs_path', 'models_path', 'modelartifacts_path', 'datasets_path', 'target_device', 'target_machine', 'run_suffix', 'parallel_devices', 'tensor_bits', 'runtime_options', 'run_import', 'run_inference', 'run_missing', 'detection_threshold', 'detection_top_k', 'detection_nms_threshold', 'detection_keep_top_k', 'save_output', 'model_selection', 'model_exclusion', 'task_selection', 'runtime_selection', 'session_type_dict', 'dataset_type_dict', 'dataset_loading', 'config_range', 'enable_logging', 'verbose', 'experimental_models', 'rewrite_results', 'with_udp', 'flip_test', 'model_transformation_dict', 'report_perfsim', 'tidl_offload', 'input_optimization', 'run_dir_tree_depth', 'settings_file'], 'dataset_cache': None}
no results found - no report to generate.
Report generated at ./work_dirs/modelartifacts
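One detail worth noting in the log above: TIDL rejects the Slice layers with "Unsupported onnxOpSetVersion 12" and delegates the Resize layers to ARM, after which ONNXRuntime fails during initialization. A common first step is to re-export the ONNX model at a different opset and retry; below is a minimal sketch using the standard PyTorch exporter, where the chosen opset is an assumption to verify against the opsets your TIDL release supports:

import torch
from models.experimental import attempt_load  # loader from edgeai-yolov5/yolov7

model = attempt_load('yolov7-w6-pose.pt', map_location='cpu')  # FP32 model
dummy = torch.randn(1, 3, 640, 640)  # matches the 640x640 pipeline config
torch.onnx.export(model, dummy, 'yolov7-w6-pose.onnx',
                  opset_version=11,  # assumption: pick an opset TIDL accepts
                  input_names=['images'], output_names=['output'])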

My settings_base.yaml:

pipeline_type : 'accuracy'

# target_device indicates the SoC for which the model compilation will take place
# see device_types for various devices in constants.TARGET_DEVICES_DICT
# currently this field is for information only
# the actual target device depends on the tidl_tools being used.
target_device : null

# important parameter. set this to 'pc' to do import and inference in pc
# set this to 'evm' to run inference in device. for inference on device run_import
# below should be switched off and it is assumed that the artifacts are already created.
# supported values: 'evm' 'pc'
target_machine : 'pc'

# quantization bit precision
# options are: 8 16 32
tensor_bits : 8

# run import of the model - only to be used in pc - set this to False for evm
# for pc this can be True or False
run_import : True

# run inference - for inference in evm, it is assumed that the artifacts folders are already available
run_inference : True

# for parallel execution on pc only (cpu or gpu).
# specify either a list of integers for parallel execution or null for sequential execution
# if you are not using cuda compiled tidl on pc, the actual numbers in the list don't matter,
# but the size of the list determines the number of parallel processes
# if you have cuda compiled tidl, these integers will be used for CUDA_VISIBLE_DEVICES. eg. [0,1,2,3,0,1,2,3]
# null will run the models sequentially.
parallel_devices : null #[0,1,2,3]

# number of frames for inference
num_frames : 10000 #50000

# number of frames to be used for post training quantization / calibration
calibration_frames : 50 #100

# number of iterations to be used for post training quantization / calibration
calibration_iterations : 50 #100

# runtime_options to be passed to the core session. default: null or a dict
# eg. (in next line and with preceding spaces to indicate this is a dict entry) accuracy_level : 0
# runtime_options :
#   accuracy_level: 1    #this is automatically set as 1 if you set tensor_bits as 8
#   advanced_options:output_feature_16bit_names_list: '363,561' #layers that you want to be treated as 16 bit

# folder where benchmark configs are defined. this should be python importable
# if this is None, the internally defined minimal set of configs will be used
configs_path : './configs'

# folder where models are available
models_path : '../edgeai-modelzoo/models'

# create your datasets under this folder
datasets_path : './dependencies/datasets'

# path where precompiled modelartifacts are placed
modelartifacts_path : './work_dirs/modelartifacts'

# session types to use for each model type
session_type_dict : {'onnx':'onnxrt', 'tflite':'tflitert', 'mxnet':'tvmdlr'}

# wild card list to match against model_path, model_id or model_type - if null, all models will be shortlisted
# only models matching these criteria will be considered - even for model_selection
# examples: ['onnx'] ['tflite'] ['mxnet'] ['onnx', 'tflite']
# examples: ['resnet18.onnx', 'resnet50_v1.tflite'] ['classification'] ['imagenet1k'] ['torchvision'] ['coco']
# examples: [cl-0000, od-2020, ss-2580, cl-3090, cl-3520, od-5120, ss-5710, cl-6360, od-8050, od-8220, od-8420, ss-8610, kd-7060]
model_selection : ['onnx']

# wild card list to match against the tasks. if null, all tasks will be run
# example: ['classification', 'detection', 'segmentation', 'depth_estimation', 'human_pose_estimation', 'detection_3d']
# example: 'classification'
# example: null (Note: null means no filter - run all the tasks)
#task_selection : null
task_selection : 'detection'

# wild card list to match against runtime name. if null, all runtimes will be considered
# example: ['onnxrt', 'tflitert', 'tvmdlr']
# example: ['onnxrt']
#runtime_selection : null
runtime_selection : ['onnxrt']

# wild card list to match against dataset type - if null, all datasets will be shortlisted
# example: ['coco']
# example: ['imagenet', 'cocoseg21', 'ade20k', 'cocokpts', 'kitti_lidar_det', 'ti-robokit_semseg_zed1hd']
#dataset_loading : null
dataset_loading : ['coco']

# use TIDL offload to speedup inference
tidl_offload : True

# input optimization to improve FPS: False or null
# null will cause the default value set in sessions.__init__ to be used.
input_optimization : null

# detection threshold
# recommend 0.3 for best fps, 0.05 for accuracy measurement
detection_threshold : 0.3

# detection  - top_k boxes that go into nms
# (this is an intermediate set, not the final number of boxes that are kept)
# recommend 200 for best fps, 500 for accuracy measurement
detection_top_k : 200

# verbose mode - print out more information
verbose : False

# save detection, segmentation, human pose estimation output
save_output : False

# defines whether to use udp postprocessing in human pose estimation.
# Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased
# Data Processing for Human Pose Estimation (CVPR 2020).
with_udp : True

# it will add horizontally flipped images in info_dict and run inference over the flipped image also
flip_test : False

# enable use of experimental models - these model files may not be available in modelzoo in some cases
experimental_models : False

# dataset type to use if there are multiple variants for each dataset
# imagenetv2c is available for quick download - so use it in the release branch
dataset_type_dict:
  'imagenet': 'imagenetv2c'

Thanks! :)

BR xiaoyang

mathmanu commented 2 years ago
  1. You are using ONNXRTSession - model artifacts compiled on PC will run on the EVM (with offloading to the DSP) as well. (Not sure what you meant by GPU; we are only talking about PC and EVM here.)

  2. tensor_bits is set to 8 here: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/settings_base.yaml#L18. You can set it to 32 to do a float-simulation run on PC, but only 8 and 16 will work on the EVM.

  3. Currently the parameter target_device is not used. It is reserved for when several devices are supported, so do not worry about it for now.

  4. A proto file is needed - see the sketch after this list for how it plugs into the session config. My colleague informed me that the yolov7 proto format is similar to that of yolov5 (we have added support for exporting a protofile while training yolov5: https://github.com/TexasInstruments/edgeai-yolov5, https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/detection/coco/edgeai-yolov5). But he has not checked whether the yolov7 model produces correct output with TIDL.
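For illustration, a minimal sketch of how such a proto file would plug into the session config - it mirrors the 'imagedet-5' entry of benchmark_custom.py (quoted later in this thread); the yolov7-w6-pose.prototxt path is a hypothetical placeholder for a file exported alongside the ONNX model:

session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg, input_optimization=False,
        input_mean=(0.0, 0.0, 0.0), input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
    runtime_options=settings.runtime_options_onnx_np2(
        det_options=True, ext_options={'object_detection:meta_arch_type': 6,
         # hypothetical path - the prototxt exported together with the ONNX model
         'object_detection:meta_layers_names_list': f'{settings.models_path}/self_model/yolov7-w6-pose.prototxt'
         }),
    model_path=f'{settings.models_path}/self_model/yolov7-w6-pose.onnx'),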

BJ-ZhaoXiaoyang commented 2 years ago

Thanks for your quick response!! I also have a question about the "session" entry of the pipeline_config in benchmark_custom.py.

session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg, input_optimization=False, input_mean=(0.0, 0.0, 0.0), input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
                runtime_options=settings.runtime_options_onnx_np2(
                    det_options=True, ext_options={'object_detection:meta_arch_type': 6,
                     'advanced_options:output_feature_16bit_names_list':''
                     }),
                model_path=f'{settings.models_path}/self_model/yolov7-w6-pose.onnx'),

1. What do "input_optimization", "input_mean", and "input_scale" mean? How should these arguments be set?
2. What does "runtime_options_onnx_np2" mean? Why is this method selected?
3. What do "det_options", "ext_options", and "advanced_options:output_feature_16bit_names_list" mean? How should these arguments be set?

Thanks!
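On the first of these, a note on the conventional meaning while this awaits an answer: input_mean and input_scale typically describe per-channel input normalization, (x - mean) * scale; the sketch below only illustrates that arithmetic, not the actual ONNXRTSession internals. Note that 0.003921568627 is 1/255, so these particular values map uint8 pixels in [0, 255] to floats in [0.0, 1.0]:

import numpy as np

# illustration only - not the edgeai-benchmark implementation
x = np.array([0.0, 114.0, 255.0], dtype=np.float32)  # sample pixel values
mean, scale = 0.0, 0.003921568627                    # scale = 1/255
print((x - mean) * scale)                            # -> [0.0, ~0.447, 1.0]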

BJ-ZhaoXiaoyang commented 2 years ago

Hello, I tried to use the "export.py" script from https://github.com/TexasInstruments/edgeai-yolov5 to export the ONNX model and prototxt file for my YOLOv7.pt model, but unfortunately it failed with this error:

Traceback (most recent call last):
  File "export.py", line 253, in <module>
    main(opt)
  File "export.py", line 248, in main
    run(**vars(opt))
  File "export.py", line 171, in run
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/Users/s78rknd/Work/edgeai-yolov5-master/models/experimental.py", line 119, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
  File "/opt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/serialization.py", line 607, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/opt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/serialization.py", line 882, in _load
    result = unpickler.load()
  File "/opt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/serialization.py", line 875, in find_class
    return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'ReOrg' on <module 'models.common' from '/Users/s78rknd/Work/edgeai-yolov5-master/models/common.py'>
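For context on this traceback: torch.load unpickles the checkpoint, so every layer class the model references must be importable from the module path it was pickled under (models.common here), and edgeai-yolov5's models/common.py does not define ReOrg, a layer YOLOv7 checkpoints use. A sketch of the kind of definition that is missing - the body below is the space-to-depth ReOrg from the YOLOv7 sources, reproduced here as an assumption to verify against the original:

import torch
import torch.nn as nn

class ReOrg(nn.Module):
    # space-to-depth: splits each 2x2 pixel block across the channel dim,
    # halving H and W and quadrupling C
    def forward(self, x):
        return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                          x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)

Adding such a class to models/common.py (matching the original YOLOv7 definition exactly) is one way unpickling can find the attribute again.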

I also have another question, about the "preprocess" and "postprocess" entries of the pipeline_config in benchmark_custom.py, shown below.

'imagedet-7': dict(
            task_type='detection',
            calibration_dataset=imagedet_calib_dataset,
            input_dataset=imagedet_val_dataset,
            preprocess=preproc_transforms.get_transform_onnx(640, 640,  resize_with_pad=True, backend='cv2', pad_color=[114,114,114]),
            session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg, input_optimization=False, input_mean=(0.0, 0.0, 0.0), input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
                runtime_options=settings.runtime_options_onnx_np2(
                    det_options=True, ext_options={'object_detection:meta_arch_type': 6,
                     'advanced_options:output_feature_16bit_names_list':''
                     }),
                model_path=f'{settings.models_path}/self_model/yolov7-w6-pose.onnx'),
            postprocess=postproc_transforms.get_transform_detection_yolov5_onnx(squeeze_axis=None, normalized_detections=False, resize_with_pad=True, formatter=postprocess.DetectionBoxSL2BoxLS()), #TODO: check this
            metric=dict(label_offset_pred=datasets.coco_det_label_offset_80to90(label_offset=1)),
            model_info=dict(metric_reference={'accuracy_ap[.5:.95]%':37.4})
        ),

I looked at the implementation of the two functions "preproc_transforms.get_transform_onnx" and "postproc_transforms.get_transform_detection_yolov5_onnx", and they differ from the pre- and post-processing my model needs in ways that cannot be adjusted through the parameters of these two functions. Can I modify these two functions to match my model's pre- and post-processing? If I do, can the generated artifact files still be used to run the model on the EVM?

I am looking forward to your response :)

BJ-ZhaoXiaoyang commented 2 years ago

Hello, I have a new question about the "artifacts" folder. After running the benchmark_custom.py script, not only the param.yaml file but also the artifacts files are generated. I want to know what the artifacts files are and what they contain. If I want my model to run on the EVM, are the artifacts files necessary? Thanks!

mathmanu commented 2 years ago

TDA4VM has the TIDL library, which offloads neural network computations to the DSP. It works behind OnnxRuntime, TFLite, etc. The artifacts files contain information that is generated by TIDL and consumed by TIDL.
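To make that concrete, here is a rough sketch of how TIDL sits behind OnnxRuntime: model import runs with a compilation provider that writes the artifacts folder, and inference loads the same folder through an execution provider, offloading supported subgraphs to the DSP and leaving the rest on ARM. The provider names and option keys below follow TI's edgeai-tidl-tools examples and are assumptions here, not the exact edgeai-benchmark internals:

import onnxruntime as rt

# option keys per TI's edgeai-tidl-tools examples - verify against your SDK
delegate_options = {
    'tidl_tools_path': '/path/to/tidl_tools',         # placeholder path
    'artifacts_folder': '/path/to/artifacts_folder',  # placeholder path
}

# import/compilation (PC only): generates the artifacts files
sess = rt.InferenceSession(
    'yolov7-w6-pose.onnx',
    providers=['TIDLCompilationProvider', 'CPUExecutionProvider'],
    provider_options=[delegate_options, {}],
    sess_options=rt.SessionOptions())

# inference (PC simulation or EVM): consumes the same artifacts
sess = rt.InferenceSession(
    'yolov7-w6-pose.onnx',
    providers=['TIDLExecutionProvider', 'CPUExecutionProvider'],
    provider_options=[delegate_options, {}],
    sess_options=rt.SessionOptions())

This is also why the artifacts folder is necessary for running on the EVM: the execution provider there loads it to know which subgraphs go to the DSP.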

BJ-ZhaoXiaoyang commented 2 years ago

https://github.com/TexasInstruments/edgeai-benchmark/issues/11#issuecomment-1311762626 https://github.com/TexasInstruments/edgeai-benchmark/issues/11#issuecomment-1315317527 Could you please help respond to the two questions I asked above? Thank you very, very much!!

BJ-ZhaoXiaoyang commented 2 years ago

I also have a question about prototxt files. I found that in the pipeline_configs of benchmark_custom.py, some models do not require prototxt files, as shown below:

'imageseg-3': dict(
            task_type='segmentation',
            calibration_dataset=imageseg_calib_dataset,
            input_dataset=imageseg_val_dataset,
            preprocess=preproc_transforms.get_transform_jai((512,512), (512,512), backend='cv2', interpolation=cv2.INTER_LINEAR),
            session=sessions.ONNXRTSession(**jai_session_cfg,
                runtime_options=settings.runtime_options_onnx_np2(),
                model_path=f'{settings.models_path}/vision/segmentation/cocoseg21/edgeai-tv/deeplabv3lite_mobilenetv2_cocoseg21_512x512_20210405.onnx'),
            postprocess=postproc_transforms.get_transform_segmentation_onnx(),
            model_info=dict(metric_reference={'accuracy_mean_iou%':57.77})
        ),

while some models require prototxt files, as shown below:

'imagedet-5': dict(
            task_type='detection',
            calibration_dataset=imagedet_calib_dataset,
            input_dataset=imagedet_val_dataset,
            preprocess=preproc_transforms.get_transform_onnx((512, 512), (512, 512), backend='cv2', reverse_channels=True),
            session=sessions.ONNXRTSession(**onnx_bgr_session_cfg,
                runtime_options=settings.runtime_options_onnx_p2(
                    det_options=True, ext_options={'object_detection:meta_arch_type': 3,
'object_detection:meta_layers_names_list': f'{settings.models_path}/vision/detection/coco/edgeai-mmdet/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_model.prototxt'
                     }),
                model_path=f'{settings.models_path}/vision/detection/coco/edgeai-mmdet/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_model.onnx'),
            postprocess=postproc_transforms.get_transform_detection_mmdet_onnx(squeeze_axis=None,
                            normalized_detections=False, formatter=postprocess.DetectionBoxSL2BoxLS()),
            metric=dict(label_offset_pred=datasets.coco_det_label_offset_80to90(label_offset=1)),
            model_info=dict(metric_reference={'accuracy_ap[.5:.95]%': 32.8})
        ),

When are prototxt files needed, and when are they not? And how do I generate a prototxt file for a custom model?

I am looking forward to your response!

Thanks!

Onehundred0906 commented 1 year ago

Hello, has your YOLOv7 been deployed successfully? My current work is related to this, so perhaps we can communicate with each other. Expecting your reply soon! Thanks!

Onehundred0906 commented 1 year ago

@BJ-ZhaoXiaoyang @debapriyamaji @kumardesappan Please help explain the questions above if convenient; I would appreciate it very much!

zafeerali943 commented 9 months ago


@Onehundred0906 @BJ-ZhaoXiaoyang Have you deployed yolov7?