BJ-ZhaoXiaoyang opened 2 years ago
You are using ONNXRTSession - model artifacts compiled on PC will run on the EVM (with offloading to the DSP) as well. (Not sure what you meant by GPU; we are only talking about PC and EVM here.)
https://github.com/TexasInstruments/edgeai-benchmark/blob/master/settings_base.yaml#L18 It is set to 8 bits here. You can set it to 32 to do a float simulation run on PC, but only 8 and 16 will work on the EVM.
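For reference, the setting being pointed at is the bit-width entry in settings_base.yaml; a minimal fragment (the key name `tensor_bits` is taken from the linked file, values as described above):

```yaml
# settings_base.yaml (fragment)
# 8 or 16 run quantized on the EVM; 32 is a float simulation run on PC only
tensor_bits : 8
```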
Currently the parameter target_device is not used. It is reserved for when several devices are supported, so do not worry about it for now.
A proto file is needed. My colleague informed me that the yolov7 proto format is similar to that of yolov5 (we have added support for exporting the proto file while training yolov5: https://github.com/TexasInstruments/edgeai-yolov5, https://github.com/TexasInstruments/edgeai-yolov5/tree/master/pretrained_models/models/detection/coco/edgeai-yolov5). But he has not checked whether the yolov7 model produces correct output with TIDL.
Thanks for your quick response!! And I also have a question about the "pipeline_config"-"session" in benchmark_custom.py.
session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg,
        input_optimization=False, input_mean=(0.0, 0.0, 0.0),
        input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
    runtime_options=settings.runtime_options_onnx_np2(
        det_options=True, ext_options={'object_detection:meta_arch_type': 6,
            'advanced_options:output_feature_16bit_names_list': ''
        }),
    model_path=f'{settings.models_path}/self_model/yolov7-w6-pose.onnx'),
1. What do "input_optimization", "input_mean", and "input_scale" mean? How should these arguments be set?
2. What does "runtime_options_onnx_np2" mean? Why is this method selected?
3. What do "det_options", "ext_options", and "advanced_options:output_feature_16bit_names_list" mean? How should these arguments be set?
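(For context on my question 1: if I understand correctly, these values describe a per-channel normalization y = (x - mean) * scale applied to the input, and the values in the snippet amount to plain 0-255 to 0-1 scaling, since 0.003921568627 is 1/255. A quick sketch of my understanding; the function name is mine, not from edgeai-benchmark:)

```python
def normalize(pixel, mean=0.0, scale=0.003921568627):
    # (x - mean) * scale; with mean 0 and scale 1/255 this maps 0..255 to 0..1
    return (pixel - mean) * scale

normalize(255)  # approximately 1.0
```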
Thanks!
Hello, I tried to use the "export.py" script from "https://github.com/TexasInstruments/edgeai-yolov5" to export the ONNX model and prototxt file for my YOLOv7.pt model, but unfortunately it failed with this error:
Traceback (most recent call last):
File "export.py", line 253, in <module>
main(opt)
File "export.py", line 248, in main
run(**vars(opt))
File "export.py", line 171, in run
model = attempt_load(weights, map_location=device) # load FP32 model
File "/Users/s78rknd/Work/edgeai-yolov5-master/models/experimental.py", line 119, in attempt_load
ckpt = torch.load(attempt_download(w), map_location=map_location) # load
File "/opt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/opt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/opt/anaconda3/envs/python36/lib/python3.6/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
AttributeError: Can't get attribute 'ReOrg' on <module 'models.common' from '/Users/s78rknd/Work/edgeai-yolov5-master/models/common.py'>
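If I read the traceback correctly, the checkpoint pickles a layer class named `ReOrg` that does not exist in edgeai-yolov5's `models/common.py`, so `torch.load` cannot reconstruct the model. In the upstream yolov7 repository, `ReOrg` is a space-to-depth step that slices each 2x2 pixel block into four channel groups; a numpy sketch of that operation (NCHW layout assumed, names are mine):

```python
import numpy as np

def reorg(x):
    # Space-to-depth as in yolov7's ReOrg: take the four 2x2 sub-grids of the
    # spatial dims and stack them along the channel axis (NCHW layout).
    return np.concatenate(
        [x[:, :, ::2, ::2],    # even rows, even cols
         x[:, :, 1::2, ::2],   # odd rows, even cols
         x[:, :, ::2, 1::2],   # even rows, odd cols
         x[:, :, 1::2, 1::2]], # odd rows, odd cols
        axis=1)

reorg(np.zeros((1, 3, 640, 640))).shape  # (1, 12, 320, 320)
```

A plausible workaround may be to copy the `ReOrg` class definition from the yolov7 repository into `models/common.py` of edgeai-yolov5 before running export.py, though I have not verified this.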
And I have another question about the "preprocess" and "postprocess" entries of the pipeline_config in benchmark_custom.py, shown below.
'imagedet-7': dict(
    task_type='detection',
    calibration_dataset=imagedet_calib_dataset,
    input_dataset=imagedet_val_dataset,
    preprocess=preproc_transforms.get_transform_onnx(640, 640, resize_with_pad=True, backend='cv2', pad_color=[114,114,114]),
    session=sessions.ONNXRTSession(**utils.dict_update(onnx_session_cfg,
            input_optimization=False, input_mean=(0.0, 0.0, 0.0),
            input_scale=(0.003921568627, 0.003921568627, 0.003921568627)),
        runtime_options=settings.runtime_options_onnx_np2(
            det_options=True, ext_options={'object_detection:meta_arch_type': 6,
                'advanced_options:output_feature_16bit_names_list': ''
            }),
        model_path=f'{settings.models_path}/self_model/yolov7-w6-pose.onnx'),
    postprocess=postproc_transforms.get_transform_detection_yolov5_onnx(squeeze_axis=None, normalized_detections=False, resize_with_pad=True, formatter=postprocess.DetectionBoxSL2BoxLS()), #TODO: check this
    metric=dict(label_offset_pred=datasets.coco_det_label_offset_80to90(label_offset=1)),
    model_info=dict(metric_reference={'accuracy_ap[.5:.95]%': 37.4})
),
I looked at the implementations of "preproc_transforms.get_transform_onnx" and "postproc_transforms.get_transform_detection_yolov5_onnx", and they differ from the pre- and post-processing my model needs; the difference cannot be bridged just by choosing different parameters of these two functions. Can I modify these two functions to match my model's pre- and post-processing needs? If I modify them, can the generated artifact files still be used to run the model on the EVM?
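(For reference, my understanding of the resize-with-pad preprocessing that get_transform_onnx(640, 640, resize_with_pad=True, pad_color=[114,114,114]) describes, as a self-contained sketch. Nearest-neighbor resizing is assumed for simplicity, and the helper name `letterbox` is mine, not from edgeai-benchmark:)

```python
import numpy as np

def letterbox(img, size=640, pad_color=114):
    """Resize-with-pad sketch: scale so the longer side becomes `size`
    (nearest-neighbor), then center the result on a pad_color canvas."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize via index sampling
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    out = np.full((size, size, img.shape[2]), pad_color, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

letterbox(np.zeros((480, 640, 3), dtype=np.uint8)).shape  # (640, 640, 3)
```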
I am looking forward to your response :)
Hello, I have a new question about the "artifacts" folder. After running the benchmark_custom.py script, not only the param.yaml file but also the artifacts files are generated. I want to know what the artifacts files are and what they contain. If I want my model to run on the EVM, are the artifacts files necessary? Thanks!
TDA4VM has a TIDL library that offloads neural network computations onto the DSP. This works behind OnnxRuntime, TFLite etc. These artifacts files contain information that is generated by TIDL during compilation and consumed by TIDL at inference.
https://github.com/TexasInstruments/edgeai-benchmark/issues/11#issuecomment-1311762626 https://github.com/TexasInstruments/edgeai-benchmark/issues/11#issuecomment-1315317527 Could you please help respond to the above two questions I asked? Thank you very, very much!!
I also have a question about prototxt. I found that in the pipeline_config of benchmark_custom.py, some models do not require prototxt files, as shown below:
'imageseg-3': dict(
    task_type='segmentation',
    calibration_dataset=imageseg_calib_dataset,
    input_dataset=imageseg_val_dataset,
    preprocess=preproc_transforms.get_transform_jai((512,512), (512,512), backend='cv2', interpolation=cv2.INTER_LINEAR),
    session=sessions.ONNXRTSession(**jai_session_cfg,
        runtime_options=settings.runtime_options_onnx_np2(),
        model_path=f'{settings.models_path}/vision/segmentation/cocoseg21/edgeai-tv/deeplabv3lite_mobilenetv2_cocoseg21_512x512_20210405.onnx'),
    postprocess=postproc_transforms.get_transform_segmentation_onnx(),
    model_info=dict(metric_reference={'accuracy_mean_iou%': 57.77})
),
while some models require prototxt files, as shown below:
'imagedet-5': dict(
    task_type='detection',
    calibration_dataset=imagedet_calib_dataset,
    input_dataset=imagedet_val_dataset,
    preprocess=preproc_transforms.get_transform_onnx((512, 512), (512, 512), backend='cv2', reverse_channels=True),
    session=sessions.ONNXRTSession(**onnx_bgr_session_cfg,
        runtime_options=settings.runtime_options_onnx_p2(
            det_options=True, ext_options={'object_detection:meta_arch_type': 3,
                'object_detection:meta_layers_names_list': f'{settings.models_path}/vision/detection/coco/edgeai-mmdet/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_model.prototxt'
            }),
        model_path=f'{settings.models_path}/vision/detection/coco/edgeai-mmdet/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_model.onnx'),
    postprocess=postproc_transforms.get_transform_detection_mmdet_onnx(squeeze_axis=None,
        normalized_detections=False, formatter=postprocess.DetectionBoxSL2BoxLS()),
    metric=dict(label_offset_pred=datasets.coco_det_label_offset_80to90(label_offset=1)),
    model_info=dict(metric_reference={'accuracy_ap[.5:.95]%': 32.8})
),
When are prototxt files needed and when are they not? And how do I generate prototxt files for a custom model?
I am looking forward to your response!
Thanks!
Hello, have you deployed your YOLOv7 successfully? My current work is related to this, so perhaps we can communicate with each other. Expecting your reply soon! Thanks!
@BJ-ZhaoXiaoyang @debapriyamaji @kumardesappan Please help explain the questions above at your convenience; I will appreciate it very much!
@Onehundred0906 @BJ-ZhaoXiaoyang Have you deployed yolov7?
hello
The link below describes how to use the commands "run_custom_pc.sh" and "run_package_artifacts_evm.sh" to compile a custom model, and how to modify benchmark_custom.py and settings_base.yaml: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/docs/custom_models.md
I have some questions about this process:
1. How do I set whether the model runs on the CPU, GPU, or DSP? Is it set in the settings_base.yaml file? How should it be set up?
2. How do I choose model quantization, and will the quantized model automatically run on the DSP?
3. How should the "target_device" in settings_base.yaml be set? My TI board is SK-TDA4VM.
4. I changed the pipeline_config in benchmark_custom.py as shown below. I found that the 'object_detection:meta_layers_names_list' in the session of every dict should be filled with a proto file, but my yolov7 ONNX model doesn't have a proto file. Is this a must?
5. I modified benchmark_custom.py and settings_base.yaml and tried to run "run_custom_pc.sh". Then the error below occurred. Could you please help me analyse it?
my settings_base.yaml:
Thanks!:)
BR xiaoyang