VeriSilicon / acuity-models

Acuity Model Zoo
https://verisilicon.github.io/acuity-models/

[ACUITY-TOOLKIT] Error while converting caffe SSD #2

Closed by d-lareg 4 years ago

d-lareg commented 4 years ago

Hello,

I use the acuity-toolkit provided by Khadas (for the Vim3) to convert a Caffe-based SSD network. To familiarize myself with the conversion/quantization mechanics, I tried to convert the COCO 300x300 SSD trained by weiliu. After upgrading the prototxt I was (more or less) able to convert the model by executing 0_import_model.sh. During the import I noticed that the detection layer was dropped:

....
D Load blobs of conv8_2_mbox_conf
D Load blobs of conv9_2_mbox_loc
D Load blobs of conv9_2_mbox_conf
D Load blobs of detection_out <==============================
I Load blobs complete.
I Start C2T Switcher... 
D Optimizing network with broadcast_op
D convert conv4_3_norm_52(l2normalizescale) l2n_dim [1] to [3]
D convert mbox_priorbox_97(concat) axis 2 to 1
....
D remove permute conv9_2_mbox_conf_perm_92_acuity_mark_perm_114
D remove permute conv9_2_mbox_conf_perm_92
I End C2T Switcher...
D Remove detection_out_101.<================================
D Optimizing network with force_1d_tensor, swapper, merge_layer, auto_fill_bn, 
auto_fill_l2normalizescale, resize_nearest_transformer, auto_fill_multiply, merge_avgpool_conv1x1, 
auto_fill_zero_bias, proposal_opt_import
I End importing caffe...
I Dump net to ssd.json
I Save net to ssd.data
W ----------------Warning(3)----------------

The detection_out layer itself is also missing from the resulting JSON file; only references to it remain:

    "mbox_conf_flatten_100": {
        "name": "mbox_conf_flatten",
        "op": "flatten",
        "parameters": {
            "axis": 1
        },
        "inputs": [
            "@mbox_conf_softmax_99:out0"
        ],
        "outputs": [
            "out0"
        ]
    },
    "output_102": {
        "name": "output",
        "op": "output",
        "inputs": [
            "@detection_out_101:out0"
        ],
        "outputs": [
            "out0"
        ]
    },
    "detection_out_101_acuity_mark_perm_115": {
        "name": "detection_out_101_acuity_mark_perm",
        "op": "permute",
        "parameters": {
            "perm": "0 3 1 2"
        },
        "inputs": [
            "@mbox_priorbox_97:out0"
        ],
        "outputs": [
            "out0"
        ]
    }

When I execute 1_quantize_model.sh, I get the following error message:

....
D Load layer mbox_loc_95 ...
D Load layer mbox_conf_96 ...
D Load layer mbox_priorbox_97 ...
D Load layer mbox_conf_reshape_98 ...
D Load layer mbox_conf_softmax_99 ...
D Load layer mbox_conf_flatten_100 ...
D Load layer output_102 ...
D Load layer detection_out_101_acuity_mark_perm_115 ...
E Unsuport input tensor type "None" of layer "output_102".
W ----------------Warning(1)----------------
Traceback (most recent call last):
  File "tensorzonex.py", line 446, in <module>
  File "tensorzonex.py", line 379, in main
  File "acuitylib/app/tensorzone/workspace.py", line 223, in load_net
  File "acuitylib/app/tensorzone/graph.py", line 26, in load_net
  File "acuitylib/acuitynet.py", line 441, in load
  File "acuitylib/acuitynet.py", line 474, in loads
  File "acuitylib/layer/acuitylayer.py", line 146, in add_input
  File "acuitylib/acuitylog.py", line 251, in e
ValueError: Unsuport input tensor type "None" of layer "output_102".    <=========
      [28306] Failed to execute script tensorzonex

Is this a bug, or are Caffe SSD models not supported?

thezha commented 4 years ago

The post-detection layer is not supported; such post-processing layers usually run on the CPU. In the specific case you are showing, there seem to be some issues with automatically removing the post-detection layer. Which version of the Acuity toolkit are you using?

Please try manually removing the post-detection layer from the prototxt file and re-running the import process to work around the problem:

layer {
  name: "detection_out"
  ...
}
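If the model has many layers, removing the block by hand is error-prone. As a minimal sketch (not part of the Acuity toolkit), a small brace-matching helper can strip a named layer from a prototxt; the layer name "detection_out" matches the snippet above:

```python
# Sketch: remove a named layer block from a Caffe prototxt by tracking
# brace depth. This helper is hypothetical, not an Acuity tool.
def remove_layer(prototxt: str, layer_name: str) -> str:
    lines = prototxt.splitlines(keepends=True)
    out, i = [], 0
    while i < len(lines):
        if lines[i].strip().startswith("layer") and "{" in lines[i]:
            # Collect the whole layer block, including nested braces.
            depth = lines[i].count("{") - lines[i].count("}")
            block = [lines[i]]
            j = i + 1
            while depth > 0 and j < len(lines):
                depth += lines[j].count("{") - lines[j].count("}")
                block.append(lines[j])
                j += 1
            if 'name: "%s"' % layer_name in "".join(block):
                i = j  # drop this block entirely
                continue
            out.extend(block)
            i = j
        else:
            out.append(lines[i])
            i += 1
    return "".join(out)
```

Running it over the deploy prototxt with `remove_layer(text, "detection_out")` and re-importing should leave the rest of the graph untouched.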

Thanks!

d-lareg commented 4 years ago

Thanks for your support! I use version 5.7.0.

When I remove the detection layer it works. I get three outputs: class confidences, locations, and the prior box tensor. What makes me wonder is that the prior box layer is not part of your SSD network graph. I guess it's a bad idea to quantize the priors? Is this the reason why these layers are missing?

thezha commented 4 years ago

For inference with a frozen graph, the prior boxes are fixed, so they can be hard-coded in the post-processing code. Acuity will generate a priorbox.bin file which can be used in the post-processing implementation.
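The CPU-side post-processing then decodes each location prediction against its prior. A minimal sketch of the standard Caffe SSD decode is below; the variances (0.1, 0.1, 0.2, 0.2) are the Caffe SSD defaults, and the exact layout of priorbox.bin is an assumption that should be checked against the Acuity output:

```python
import math

def decode_box(prior, loc, variances=(0.1, 0.1, 0.2, 0.2)):
    """Decode one SSD location prediction against its prior box.

    prior is (cx, cy, w, h) in relative coordinates; loc is the raw
    (tx, ty, tw, th) network output. Returns corner-form
    (xmin, ymin, xmax, ymax), following the standard Caffe SSD decode.
    """
    pcx, pcy, pw, ph = prior
    tx, ty, tw, th = loc
    # Offsets are scaled by the variances and the prior size.
    cx = pcx + tx * variances[0] * pw
    cy = pcy + ty * variances[1] * ph
    w = pw * math.exp(tw * variances[2])
    h = ph * math.exp(th * variances[3])
    # Convert center form to corner form.
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

With a zero location prediction, the decoded box is just the prior converted to corner form, which is a quick sanity check for a post-processing implementation.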