TexasInstruments / edgeai-mmdetection

This repository has been moved. The new location is in https://github.com/TexasInstruments/edgeai-tensorlab
https://github.com/TexasInstruments/edgeai

Run mmdetection demo code and get incorrect results #10

Open Patrick-Woo opened 2 years ago

Patrick-Woo commented 2 years ago

I am new to mmdetection and TI edgeai-mmdetection.

After installing this repository, I tried to run some demo code from the original mmdetection repo to perform inference on a demo picture.

Unfortunately, the demo code below gets incorrect results on the demo picture. The bboxes are not in the right places and the classes are totally wrong.

Could you please be so kind as to tell me how to perform inference and get the correct bboxes with your edgeai-mmdetection repository?

The demo code is below:

```python
from mmdet.apis import init_detector, inference_detector
import mmcv

config_file = './configs/edgeailite/ssd/ssd_regnet_fpn_bgr_lite.py'
checkpoint_file = './checkpoints/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_checkpoint.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')
img = 'demo/demo.jpg'
result = inference_detector(model, img)
model.show_result(img, result)
model.show_result(img, result, out_file='result.jpg')
```

BTW, after running ./run_detection_test.sh according to the usage guide, I did get the right result, with output containing "mmdet - INFO - OrderedDict([('bbox_mAP', 0.328), ('bbox_mAP_50', 0.528).........."

Patrick-Woo commented 2 years ago

The result picture after running ssd_regnetx-800mf_fpn_bgr_lite inference is below: result.jpg

mathmanu commented 2 years ago

Does it look like too many boxes are being shown due to a low detection threshold? If so, you can increase the detection threshold (only for demo testing purposes, not for training).

A value of 0.3 is appropriate for visualization, as given in this example: https://github.com/TexasInstruments/edgeai-mmdetection/blob/master/tools/deployment/test.py#L42 https://github.com/TexasInstruments/edgeai-mmdetection/blob/master/tools/deployment/test.py#L122

One way to change that is to edit it directly in the config file: https://github.com/TexasInstruments/edgeai-mmdetection/blob/master/configs/edgeailite/_xbase_/hyper_params/ssd_config.py#L19
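For illustration, the edit would look roughly like this (a sketch only; every value except score_thr is a placeholder and is not copied from the actual ssd_config.py):

```python
# placeholder test_cfg block -- the only point here is raising score_thr
model = dict(
    test_cfg=dict(
        nms=dict(type='nms', iou_threshold=0.45),  # placeholder
        score_thr=0.3,      # raised so only confident detections are kept/drawn
        max_per_img=200))   # placeholder
```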

Or you can give the params to be changed in the options passed to init_detector: `init_detector(config_file, checkpoint_file, device='cuda:0', model=dict(test_cfg=dict(score_thr=0.3)))`
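Here is a minimal sketch of that override written against the upstream mmdet 2.x API, where config overrides are usually passed through the `cfg_options` argument of `init_detector`; whether this fork also accepts the override directly as a keyword, as shown above, depends on its version of `init_detector`. Paths are the ones used earlier in the thread:

```python
from mmdet.apis import init_detector, inference_detector

config_file = './configs/edgeailite/ssd/ssd_regnet_fpn_bgr_lite.py'
checkpoint_file = './checkpoints/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_checkpoint.pth'

# cfg_options merges the given dict into the loaded config before the
# model is built, so test_cfg.score_thr is already 0.3 at inference time
model = init_detector(
    config_file, checkpoint_file, device='cuda:0',
    cfg_options=dict(model=dict(test_cfg=dict(score_thr=0.3))))

result = inference_detector(model, 'demo/demo.jpg')
model.show_result('demo/demo.jpg', result, out_file='result_thr03.jpg')
```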

Patrick-Woo commented 2 years ago

Hi @mathmanu , Thank you for your prompt reply.

  1. change score_thr to 0.8

Even after increasing score_thr to 0.8 in https://github.com/TexasInstruments/edgeai-mmdetection/blob/master/configs/edgeailite/_xbase_/hyper_params/ssd_config.py#L19, I still get the wrong bboxes and classes.

And when loading the TI model zoo model (ssd_regnetx-800mf_fpn), I get these warnings: "load checkpoint from local path: ../checkpoints/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_checkpoint.pth

The model and loaded state dict do not match exactly. unexpected key in source state_dict: neck.fpn_convs.0.conv.0.0.weight, neck.fpn_convs.0.conv.0.1.weight, ....."

The demo jupyter notebook with the warnings and incorrect result is below: inference_demo.ipynb

The result picture I get is as follows: result1.jpg

I would be very grateful if you could help me tackle this issue.

  2. Using the original mmdetection model (Faster R-CNN) and its config .py file in the same notebook gives the correct bboxes and classes, without any model loading warning.

original_mmdet_result.ipynb

  3. I have tested several models with the demo jupyter code and got the results below: model_test_result.png

  4. My final goal is to fine-tune (transfer learning) on a new dataset with your TI lite pre-trained models, then do QAT, export the QAT model to ONNX, and convert the ONNX model to artifacts that the TIDL SDK can understand.

     I have only found instructions for training from scratch in this repo, which does not cover this requirement.

     I would be very grateful if you could give me a guide on how to perform transfer learning with, say, TI's ssd_mobilenet_lite model, just like the mmdetection transfer learning guide.
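For reference, the standard upstream mmdetection 2.x way to fine-tune from a pre-trained checkpoint is to write a small config that inherits from the model's config, points `load_from` at the checkpoint, and overrides the head and dataset settings. This is only a generic sketch, not a TI-specific recipe; the class count, paths, and learning rate below are hypothetical placeholders:

```python
# my_finetune_config.py -- generic mmdetection 2.x fine-tuning sketch
_base_ = './configs/edgeailite/ssd/ssd_regnet_fpn_bgr_lite.py'  # adjust the relative path

# start from the pre-trained lite checkpoint instead of training from scratch
load_from = './checkpoints/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_checkpoint.pth'

# adapt the detection head to the new dataset (hypothetical class count)
model = dict(bbox_head=dict(num_classes=3))

# the dataset section of the base config (ann_file, img_prefix, classes)
# would also be overridden here to point at the new dataset

# a lower learning rate than the from-scratch schedule is typical for fine-tuning
optimizer = dict(lr=1e-3)
```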

lincaiming commented 2 years ago

I'm having the same problem! Because the model needs to be converted to a lite model, you can add this code:

```python
# apply the lite-model conversion when the config asks for it
if hasattr(config, 'convert_to_lite_model') and config.convert_to_lite_model not in (False, None):
    model = convert_to_lite_model(model, config)
```
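Since the unexpected checkpoint keys in the warning above (e.g. neck.fpn_convs.0.conv.0.0.weight) look like they come from the converted lite modules, the conversion presumably has to run after the model is built but before the checkpoint is loaded. Below is a sketch of a demo script arranged that way, assuming this fork follows the upstream mmdet 2.x init_detector flow; the import location of convert_to_lite_model is not shown in this thread, so it is left as a placeholder:

```python
import mmcv
from mmcv.runner import load_checkpoint
from mmdet.models import build_detector
from mmdet.apis import inference_detector
# from ??? import convert_to_lite_model  # import from wherever this fork defines it

config_file = './configs/edgeailite/ssd/ssd_regnet_fpn_bgr_lite.py'
checkpoint_file = './checkpoints/ssd_regnetx-800mf_fpn_bgr_lite_512x512_20200919_checkpoint.pth'

config = mmcv.Config.fromfile(config_file)
config.model.pretrained = None
config.model.train_cfg = None
model = build_detector(config.model, test_cfg=config.get('test_cfg'))

# apply the lite conversion BEFORE loading the weights so the state_dict keys match
if hasattr(config, 'convert_to_lite_model') and config.convert_to_lite_model not in (False, None):
    model = convert_to_lite_model(model, config)

checkpoint = load_checkpoint(model, checkpoint_file, map_location='cpu')
if 'CLASSES' in checkpoint.get('meta', {}):
    model.CLASSES = checkpoint['meta']['CLASSES']
model.cfg = config
model.to('cuda:0')
model.eval()

result = inference_detector(model, 'demo/demo.jpg')
model.show_result('demo/demo.jpg', result, out_file='result_lite.jpg')
```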