I see that you are giving 'segmentation:meta_layers_names_list'. Is that a typo?
Have a look at some of the examples here: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/configs/detection.py#L85
No, it wasn't a typo.
As far as I know, 'object_detection:meta_layers_names_list' is used for detection models, but my model is a segmentation model, so I wrote it as 'segmentation:meta_layers_names_list'.
Anyway, I modified it like below:
```python
'a1-seg': dict(
    task_type='segmentation',
    calibration_dataset=imageseg_calib_dataset,
    input_dataset=imageseg_val_dataset,
    preprocess=preproc_transforms.get_transform_jai((512, 512), (512, 512), backend='cv2', interpolation=cv2.INTER_LINEAR),
    session=sessions.ONNXRTSession(**jai_session_cfg,
        runtime_options=settings.runtime_options_onnx_np2(
            det_options=True, ext_options={
                'object_detection:meta_arch_type': 3,
                'object_detection:meta_layers_names_list': '/opt/beagle/edgeai-benchmark/custom_data/230607_model.prototxt'}),
        model_path='/opt/beagle/edgeai-benchmark/custom_data/230607_model.onnx'),
    postprocess=postproc_transforms.get_transform_segmentation_onnx(),
    model_info=dict(metric_reference={'accuracy_mean_iou%': 57.77})
),
```
And the result was not understandable when I uploaded it to the Beagle board.
It runs, but it doesn't mask the object I trained on; in fact it doesn't mask anything at all, as in the picture below (I covered the camera). Just to be clear, my camera is working fine; it's the model that makes it look like it's malfunctioning.
Do I have to change 'object_detection:meta_arch_type' to a different number?
Or is my overall process wrong (training a segmentation model with edgeai-mmdetection and then compiling the model with edgeai-benchmark)? When compiling I used the calibration dataset config below; do I need to change it to the dataset I used for training?

```python
dataset_calib_cfg = dict(
    path=f'{settings.datasets_path}/coco-seg21-converted/val2017',
    split=f'{settings.datasets_path}/coco-seg21-converted/val2017.txt',
    num_classes=21,
    shuffle=True,
    num_frames=min(settings.calibration_frames, 5000),
    name='cocoseg21'
)
```

Or are segmentation models not supported yet? Or do I have to wait until edgeai-modelmaker supports segmentation models?
Segmentation models don't need any meta_arch_type. Also, you said you made the custom model using edgeai-mmdetection, but that repository is for object detection training.
A segmentation model is quite straightforward: it just ends with an ArgMax at the end. Have a look at the model here: https://github.com/TexasInstruments/edgeai-modelzoo/tree/master/models/vision/segmentation/cocoseg21/edgeai-tv
See the custom segmentation model compilation here: https://github.com/TexasInstruments/edgeai-benchmark/blob/master/scripts/benchmark_custom.py#L193
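For reference, here is a minimal sketch of what a segmentation compilation entry without the detection-specific options might look like. It reuses the names from the config quoted earlier (imageseg_calib_dataset, jai_session_cfg, the transform helpers) and assumes settings.runtime_options_onnx_np2() can be called without det_options/ext_options, so treat it as a starting point rather than a verified recipe:

```python
# Sketch of a segmentation pipeline config: no meta_arch_type and no
# meta_layers_names_list, since segmentation models don't need them.
# Dataset/session/transform names are taken from the earlier snippet and
# assumed to be defined the same way.
'a1-seg': dict(
    task_type='segmentation',
    calibration_dataset=imageseg_calib_dataset,
    input_dataset=imageseg_val_dataset,
    preprocess=preproc_transforms.get_transform_jai((512, 512), (512, 512), backend='cv2', interpolation=cv2.INTER_LINEAR),
    session=sessions.ONNXRTSession(**jai_session_cfg,
        runtime_options=settings.runtime_options_onnx_np2(),  # no object_detection:* entries
        model_path='/opt/beagle/edgeai-benchmark/custom_data/230607_model.onnx'),
    postprocess=postproc_transforms.get_transform_segmentation_onnx(),
    model_info=dict(metric_reference={'accuracy_mean_iou%': 57.77})
),
```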
Oh... I got edgeai-mmdetection wrong.
I thought it had all the features of https://github.com/open-mmlab/mmdetection.
Anyway, I had issues while compiling a custom model trained with https://github.com/open-mmlab/mmdetection (error below), so I used edgeai-mmdetection instead:
```
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (Reshape_680) Op (Reshape) [ShapeInferenceError] Invalid Target shape product of 0
```
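As a side note, a quick way to check whether the exported ONNX graph itself has this shape problem, independently of edgeai-benchmark, is to run the onnx checker and shape inference locally. A small sketch (the model path is just the one quoted in the config above):

```python
# Diagnostic sketch: load the exported model and run the ONNX checker and
# strict shape inference; inconsistent Reshape target shapes may surface here.
import onnx
from onnx import shape_inference

model_path = '/opt/beagle/edgeai-benchmark/custom_data/230607_model.onnx'  # path from the config above
model = onnx.load(model_path)
onnx.checker.check_model(model)                                   # structural validity
inferred = shape_inference.infer_shapes(model, strict_mode=True)  # raises on inconsistent shapes
print('shape inference completed for', model_path)
```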
Is there a way to train and compile a custom segmentation model using the repositories that TexasInstruments provides?
Sorry, I did not understand your last statement. Could you please elaborate? Were you able to run the recommended models from edgeai-mmdetection?
For segmentation, you have to use another repository, not mmdetection or edgeai-mmdetection.
For segmentation, have a look at some of the segmentation model training scripts that we have in edgeai-torchvision: https://github.com/TexasInstruments/edgeai-torchvision/blob/master/docs/pixel2pixel/Semantic_Segmentation.md
Thank you, you've understood my intentions correctly. I'll leave more comments if I face any issues I can't solve.
By the way, does edgeai-modelmaker support training a segmentation model with a custom dataset?
Yes, https://github.com/TexasInstruments/edgeai-modelmaker supports segmentation model training/compilation with a custom dataset. Try it and let me know.
Hi again.
There's an error while using /opt/edgeai-torchvision/references/edgeailite/engine/train_pixel2pixel.py.
It's not a fatal error, so I changed the call format as below (original first, then my modification):

```python
progress_bar.set_postfix(Epoch=epoch_str, LR=lr, DataTime=str(data_time), LossMult=multi_task_factors_print, Loss=avg_loss, Output=output_string)
progress_bar.set_postfix({'Epoch':epoch_str, 'LR':lr, 'DataTime':data_time, 'LossMult':multi_task_factors_print, 'Loss':avg_loss, 'Output':output_string})
```
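For context, a minimal, self-contained sketch of the tqdm call being changed here; set_postfix accepts either keyword arguments or a dict as its first positional argument, so the modified form above is valid tqdm usage (the values below are made up):

```python
# Minimal sketch of tqdm.set_postfix with a dict argument.
from tqdm import tqdm

progress_bar = tqdm(range(3))
for step in progress_bar:
    # keys keep their insertion order when passed as a dict
    progress_bar.set_postfix({'Epoch': f'{step + 1}/3', 'LR': 1e-3, 'Loss': 0.42})
```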
Now I want to use custom data to train my model. Do I need to make the custom dataset like the tiscape_QAT_segmentation dataset? That dataset looks like this:

```
tiscape_QAT_segmentation
|- dataset
   |- annotations
   |- images
   |- polygons
   |- train
   |- val
```
You just need the images and the instances.json in the annotations folder:

```
|- annotations
   |- instances.json
|- images
```
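For illustration, a minimal sketch of what an instances.json with segmentation polygons typically looks like, assuming the standard COCO instances format (the file name, category, and coordinates below are made-up placeholders):

```python
# Build and write a minimal COCO-style instances.json with one polygon
# annotation; only the overall structure matters, all values are placeholders.
import json

instances = {
    "images": [
        {"id": 1, "file_name": "frame_0001.jpg", "width": 512, "height": 512}
    ],
    "categories": [
        {"id": 1, "name": "my_object"}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "segmentation": [[100, 100, 200, 100, 200, 200, 100, 200]],  # x,y polygon pairs
            "area": 10000,
            "bbox": [100, 100, 100, 100],  # x, y, width, height
            "iscrowd": 0
        }
    ]
}

with open("annotations/instances.json", "w") as f:
    json.dump(instances, f, indent=2)
```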
Thank you, you helped a lot 👍
I'll close this issue.
Hi, I faced a problem while using benchmark_custom.py.
I made a custom model using https://github.com/TexasInstruments/edgeai-mmdetection, got a custom onnx model, and made a pipeline_configs entry like below.
However, I got this error.
I thought the reshape error occurred because of
so I changed the pipeline_configs like below.
But I still get the same error.
What should I do?
Is the onnx model from edgeai-mmdetection wrong in the first place, or do I need to modify some scripts in edgeai-benchmark?