TexasInstruments / edgeai-tensorlab

Edge AI Model Development Tools
https://github.com/TexasInstruments/edgeai

[INFORMATION] mmyolo - Model Surgery using edgeai-modeloptimization - to create lite models #7

Open mathmanu opened 1 month ago

mathmanu commented 1 month ago

Introduction

mmyolo (https://github.com/open-mmlab/mmyolo) is a repository with several interesting Object Detection models, including YOLOv5, YOLOv7, YOLOX and YOLOv8.

Here we describe how to apply Model Surgery on mmyolo to create lite models that run faster on Embedded Systems.

Background - What actually happens in Model Surgery

The types of Operators/Layers used in popular models are increasing rapidly, and not all of them run efficiently on embedded devices. For example, a ReLU activation layer is much faster than a Swish activation layer, because the ReLU operator is simple enough to be implemented in Hardware at full speed. This is just one example; there are several others.

In many cases it is possible to replace inefficient layers with their efficient alternatives without modifying the model source code. This is done by modifying the Python model object after the model has been instantiated.
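As an illustration (this is only a hand-written sketch, not how edgeai-modeloptimization is implemented), replacing every SiLU/Swish activation in an already-instantiated PyTorch model with ReLU can look like this; the torchvision model used here is just a stand-in example:

import torch.nn as nn
import torchvision

model = torchvision.models.efficientnet_b0()  # example model that uses SiLU (Swish) activations

def replace_swish_with_relu(module):
    # recursively swap SiLU activations for ReLU in the instantiated model,
    # without touching the model's source code
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, nn.ReLU(inplace=True))
        else:
            replace_swish_with_relu(child)

replace_swish_with_relu(model)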

How to use edgeai-modeloptimization

edgeai-modeloptimization (https://github.com/TexasInstruments/edgeai-tensorlab/tree/main/edgeai-modeloptimization) is a package that can automate some of the Model Surgery aspects.

It provides edgeai_torchmodelopt, a Python package that helps to modify PyTorch models without manually editing the model code.

The exact location is here: https://github.com/TexasInstruments/edgeai-tensorlab/tree/main/edgeai-modeloptimization/torchmodelopt

It provides various types of model surgery options as described here: https://github.com/TexasInstruments/edgeai-tensorlab/blob/main/edgeai-modeloptimization/torchmodelopt/docs/surgery.md
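For reference, a minimal usage sketch is given below. The surgery.v1 / surgery.v2 entry points are the ones used in the patch further down; the torchvision model and the exact call shown here are only illustrative assumptions, so please refer to surgery.md above for the supported options:

import torchvision
from edgeai_torchmodelopt import xmodelopt

model = torchvision.models.mobilenet_v3_small()  # any instantiated PyTorch model

# surgery v1 conversion (as used with --model-surgery 1 in the patch below)
lite_model = xmodelopt.surgery.v1.convert_to_lite_model(model)

# surgery v2, torch.fx based conversion (as used with --model-surgery 2)
# lite_model = xmodelopt.surgery.v2.convert_to_lite_fx(model)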

Patch file

The commit id of mmyolo (https://github.com/open-mmlab/mmyolo) for this explanation is: 8c4d9dc503dc8e327bec8147e8dc97124052f693

This patch file applies the model surgery modification to train.py (detailed in the comment below) along with other modifications to val.py, prototxt export, etc.: 0001-2024-Aug-2-mmyolo.commit-8c4d9dc5.-model-surgery-with-edgeai-modeloptimization.txt

Patching mmyolo:

git clone https://github.com/open-mmlab/mmyolo.git
cd mmyolo
git checkout 8c4d9dc5
git am 0001-mmyolo.commit-8c4d9dc5.-model-surgery-with-edgeai-modeloptimization.txt

Run training:

python3 tools/train.py <configfile> --model-surgery 1

You can also use tools/dist_train.sh (just make sure the --model-surgery 1 argument is passed to train.py inside it).

Expected Accuracy

This table shows the expected accuracy of the Lite models after training.

| Dataset | Original Model | Lite Model | Input Size | Original AP[0.5:0.95]%, AP50% | Lite AP[0.5:0.95]%, AP50% | GigaMACS | Config File | Notes |
|---|---|---|---|---|---|---|---|---|
| YOLOv5 models | | | | | | | | |
| COCO | YOLOv5-nano | YOLOv5-nano-lite | 640x640 | 28.0, 45.9 | 25.2, 42.1 | 2.07 | configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py | |
| COCO | YOLOv5-small | YOLOv5-small-lite | 640x640 | 37.7, 57.1 | 35.5, 54.7 | 7.89 | configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py | |
| YOLOv7 models | | | | | | | | |
| COCO | YOLOv7-tiny | YOLOv7-tiny-lite | 640x640 | 37.5, 55.8 | 36.7, 55.0 | 6.87 | configs/yolov7/yolov7_tiny_syncbn_fast_8x16b-300e_coco.py | |
| COCO | YOLOv7-large | YOLOv7-large-lite | 640x640 | 51.0, 69.0 | 48.1, 66.4 | 52.95 | configs/yolov7/yolov7_l_syncbn_fast_8x16b-300e_coco.py | |
| YOLOv8 models | | | | | | | | |
| COCO | YOLOv8-nano | YOLOv8-nano-lite | 640x640 | 37.2, 52.7 | 34.5, 49.7 | - | configs/yolov8/yolov8_n_syncbn_fast_8xb16-500e_coco.py | |
| COCO | YOLOv8-small | YOLOv8-small-lite | 640x640 | 44.2, 61.0 | 42.4, 58.8 | 14.33 | configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py | |
| YOLOX models | | | | | | | | |
| COCO | YOLOX-tiny | YOLOX-tiny-lite | 416x416 | 32.7, 50.3 | 31.1, 48.4 | 3.25 | configs/yolox/yolox_tiny_fast_8xb8-300e_coco.py | |
| COCO | YOLOX-small | YOLOX-small-lite | 640x640 | 40.7, 59.6 | 38.7, 57.4 | 7.85 | configs/yolox/yolox_s_fast_8xb8-300e_coco.py | |

Notes

Additional information

Additional details about the modifications made by Model Surgery are available here: https://github.com/TexasInstruments/edgeai-yolov5

mathmanu commented 1 month ago

How Model Surgery is actually done

This is for information only - the above patch already includes these changes.

The patch adds the following code to tools/train.py in the mmyolo repository; tools/test.py is modified similarly to include model surgery.

from edgeai_torchmodelopt import xmodelopt

Add this argument in the parse_args function:

    # 0: no surgery, 1: use xmodelopt.surgery.v1, 2: use xmodelopt.surgery.v2 (torch.fx based)
    parser.add_argument('--model-surgery', type=int, default=0)

Add the following code to tools/train.py, just before the line runner.train():

# note: this snippet also uses is_model_wrapper and the YOLO head module classes
# (e.g. from mmengine.model and mmyolo.models.dense_heads), which must be imported at the top of train.py
if args.model_surgery:
    # select the surgery implementation: 1 -> surgery.v1, 2 -> surgery.v2 (torch.fx based)
    surgery_fn = xmodelopt.surgery.v1.convert_to_lite_model if args.model_surgery == 1 \
                 else (xmodelopt.surgery.v2.convert_to_lite_fx if args.model_surgery == 2 else None)

    runner._init_model_weights()
    # unwrap the model (e.g. from the distributed wrapper) so that its sub-modules can be replaced in place
    if is_model_wrapper(runner.model):
        runner.model = runner.model.module
    runner.model.backbone = surgery_fn(runner.model.backbone)
    runner.model.neck = surgery_fn(runner.model.neck)
    # Only the head_module of the head goes through model surgery, as it contains all the compute layers
    if not isinstance(runner.model.bbox_head.head_module, (YOLOv5HeadModule, YOLOv7HeadModule, YOLOv8HeadModule, YOLOv6HeadModule)):
        # save reg_max so it can be restored on the converted head_module
        if hasattr(runner.model.bbox_head.head_module, 'reg_max'):
            reg_max = runner.model.bbox_head.head_module.reg_max
        else:
            reg_max = None
        runner.model.bbox_head.head_module = \
            surgery_fn(runner.model.bbox_head.head_module)
        if reg_max is not None:
            runner.model.bbox_head.head_module.reg_max = reg_max
    elif isinstance(runner.model.bbox_head.head_module, (YOLOv8HeadModule, YOLOv6HeadModule)):
        # YOLOv8/YOLOv6 head modules are converted with the v1 surgery
        runner.model.bbox_head.head_module = xmodelopt.surgery.v1.convert_to_lite_model(runner.model.bbox_head.head_module)
    # re-wrap the model (e.g. for distributed training) after surgery
    runner.model = runner.wrap_model(runner.cfg.get('model_wrapper_cfg'), runner.model)
print("\n\nmodel summary:\n", runner.model)
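As a quick sanity check (not part of the patch), the effect of model surgery can be verified by counting the activation types in runner.model before and after the conversion, for example:

import collections
import torch.nn as nn

def count_activation_types(model):
    # tally the activation layer types present in the model
    return collections.Counter(type(m).__name__ for m in model.modules()
                               if isinstance(m, (nn.SiLU, nn.Hardswish, nn.LeakyReLU, nn.ReLU)))

print(count_activation_types(runner.model))  # after surgery, SiLU/Hardswish counts should be reduced or zero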