HornedSungem / SungemSDK

Horned Sungem
http://www.hornedsungem.org/
Apache License 2.0

model conversion issue #22

Closed · RamatovInomjon closed 5 years ago

RamatovInomjon commented 5 years ago

Fusing depthconv and conv in depthwise_conv2d_1 and conv2d_4
Traceback (most recent call last):
  File "mvNCCheck.py", line 152, in <module>
    quit_code = check_net(args.network, args.image, args.inputnode, args.outputnode, args.nshaves, args.inputsize, args.weights, args)
  File "mvNCCheck.py", line 127, in check_net
    net = parse_caffe(args, myriad_config, file_gen=True)
  File "/home/inomjon/Projects/Movidius/Sungem/SungemSDK-Python/tool/Controllers/CaffeParser.py", line 1387, in parse_caffe
    network.attach(node)
  File "/home/inomjon/Projects/Movidius/Sungem/SungemSDK-Python/tool/Models/Network.py", line 81, in attach
    stage.attach_several(appropriate_nodes)
  File "/home/inomjon/Projects/Movidius/Sungem/SungemSDK-Python/tool/Models/NetworkStage.py", line 689, in attach_several
    parents.attach(self)
  File "/home/inomjon/Projects/Movidius/Sungem/SungemSDK-Python/tool/Models/NetworkStage.py", line 412, in attach
    taps[c,c*multiplier+i,y,x] = self.taps[y,x,c,i]
IndexError: index 3 is out of bounds for axis 2 with size 3
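For reference, the failing assignment copies the depthwise taps into the fused weight tensor, and the IndexError means the fusion loop iterates over more channels than the stored taps array has on its axis 2. Below is a minimal numpy sketch that reproduces the index pattern; the shapes are hypothetical choices made only to trigger the same message, not the shapes the parser actually uses:

import numpy as np

# Hypothetical shapes chosen only to reproduce the error message.
multiplier = 1
self_taps = np.zeros((3, 3, 3, multiplier))  # (kh, kw, channels, mult); axis 2 has size 3
taps = np.zeros((8, 8 * multiplier, 3, 3))   # fused target, assuming 8 channels

for c in range(8):                           # fusion loop expects 8 channels
    for i in range(multiplier):
        for y in range(3):
            for x in range(3):
                # At c == 3 this exceeds self_taps' axis 2 (size 3) and raises
                # "IndexError: index 3 is out of bounds for axis 2 with size 3"
                taps[c, c * multiplier + i, y, x] = self_taps[y, x, c, i]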

shaomang commented 5 years ago

You can locate the layer that caused the problem by running: python3 mvNCCheck.py .... -on "your layer name"

Some layers may not be supported, or may need some modification in the prototxt.
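A quick way to apply this tip is to run the check once per candidate output node and stop at the first failure. The sketch below is only illustrative: the prototxt/caffemodel file names and the layer list are assumptions, and the -w/-on flags are used exactly as in the command above.

import subprocess

# Hypothetical file names; replace with your own model files.
PROTOTXT = "model.prototxt"
WEIGHTS = "model.caffemodel"

# Candidate output nodes, e.g. taken from the prototxt in order.
layers = ["conv2d_1", "conv2d_2", "depthwise_conv2d_1", "conv2d_3", "conv2d_4"]

for name in layers:
    # Check the network only up to this output node (-on), as suggested above.
    result = subprocess.run(
        ["python3", "mvNCCheck.py", PROTOTXT, "-w", WEIGHTS, "-on", name],
        capture_output=True, text=True,
    )
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{name}: {status}")
    if result.returncode != 0:
        break  # the first failing node points at the problem layer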

RamatovInomjon commented 5 years ago

> You can locate the layer that caused the problem by running: python3 mvNCCheck.py .... -on "your layer name"
>
> Some layers may not be supported, or may need some modification in the prototxt.

Hi @shaomang, thanks for your reply, but the conversion failed again with the same error at the conv2d_4 layer. I have no idea how to solve this.

shaomang commented 5 years ago

Could you copy-paste the relevant part of the prototxt?

RamatovInomjon commented 5 years ago

layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape { dim: 1 dim: 1 dim: 64 dim: 64 } }
}
layer {
  name: "conv2d_1"
  type: "Convolution"
  bottom: "data"
  top: "conv2d_1"
  convolution_param { num_output: 8 bias_term: false group: 1 stride: 1 pad_h: 0 pad_w: 0 kernel_h: 3 kernel_w: 3 }
}
layer {
  name: "batch_normalization_1"
  type: "BatchNorm"
  bottom: "conv2d_1"
  top: "batch_normalization_1"
  batch_norm_param { use_global_stats: true eps: 0.0010000000475 }
}
layer {
  name: "batch_normalization_1_scale"
  type: "Scale"
  bottom: "batch_normalization_1"
  top: "batch_normalization_1"
  scale_param { bias_term: true }
}
layer {
  name: "activation_1"
  type: "ReLU"
  bottom: "batch_normalization_1"
  top: "batch_normalization_1"
}
layer {
  name: "conv2d_2"
  type: "Convolution"
  bottom: "batch_normalization_1"
  top: "conv2d_2"
  convolution_param { num_output: 8 bias_term: false group: 1 stride: 1 pad_h: 0 pad_w: 0 kernel_h: 3 kernel_w: 3 }
}
layer {
  name: "batch_normalization_2"
  type: "BatchNorm"
  bottom: "conv2d_2"
  top: "batch_normalization_2"
  batch_norm_param { use_global_stats: true eps: 0.0010000000475 }
}
layer {
  name: "batch_normalization_2_scale"
  type: "Scale"
  bottom: "batch_normalization_2"
  top: "batch_normalization_2"
  scale_param { bias_term: true }
}
layer {
  name: "activation_2"
  type: "ReLU"
  bottom: "batch_normalization_2"
  top: "batch_normalization_2"
}
layer {
  name: "depthwise_conv2d_1"
  type: "Convolution"
  bottom: "batch_normalization_2"
  top: "depthwise_conv2d_1"
  convolution_param { num_output: 8 bias_term: false group: 8 stride: 1 pad_h: 1 pad_w: 1 kernel_h: 3 kernel_w: 3 }
}
layer {
  name: "conv2d_3"
  type: "Convolution"
  bottom: "batch_normalization_2"
  top: "conv2d_3"
  convolution_param { num_output: 16 bias_term: false group: 1 stride: 2 pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 }
}
layer {
  name: "conv2d_4"
  type: "Convolution"
  bottom: "depthwise_conv2d_1"
  top: "conv2d_4"
  convolution_param { num_output: 16 bias_term: false group: 1 stride: 1 pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 }
}

The error occurs at this last layer (conv2d_4), @shaomang.

RamatovInomjon commented 5 years ago

> Could you copy-paste the relevant part of the prototxt?

}
layer {
  name: "conv2d_14"
  type: "Convolution"
  bottom: "depthwise_conv2d_8"
  top: "conv2d_14"
  convolution_param { num_output: 128 bias_term: false group: 1 stride: 1 pad_h: 0 pad_w: 0 kernel_h: 1 kernel_w: 1 }
}
layer {
  name: "batch_normalization_14"
  type: "BatchNorm"
  bottom: "conv2d_14"
  top: "batch_normalization_14"
  batch_norm_param { use_global_stats: true eps: 0.0010000000475 }
}
layer {
  name: "batch_normalization_14_scale"
  type: "Scale"
  bottom: "batch_normalization_14"
  top: "batch_normalization_14"
  scale_param { bias_term: true }
}
layer {
  name: "max_pooling2d_4"
  type: "Pooling"
  bottom: "batch_normalization_14"
  top: "max_pooling2d_4"
  pooling_param { pool: MAX kernel_size: 3 stride: 2 pad_h: 0 pad_w: 0 }
}
layer {
  name: "add_4"
  type: "Eltwise"
  bottom: "batch_normalization_12"
  bottom: "max_pooling2d_4"
  top: "add_4"
  eltwise_param { operation: SUM }
}
layer {
  name: "conv2d_15"
  type: "Convolution"
  bottom: "add_4"
  top: "conv2d_15"
  convolution_param { num_output: 8 bias_term: true group: 1 stride: 1 pad_h: 1 pad_w: 1 kernel_h: 3 kernel_w: 3 }
}
layer {
  name: "global_average_pooling2d_1"
  type: "Pooling"
  bottom: "conv2d_15"
  top: "global_average_pooling2d_1"
  pooling_param { pool: AVE stride: 1 global_pooling: true }
}
layer {
  name: "dense_1"
  type: "InnerProduct"
  bottom: "global_average_pooling2d_1"
  top: "dense_1"
  inner_product_param { num_output: 2 bias_term: true }
}
layer {
  name: "predictions"
  type: "Softmax"
  bottom: "dense_1"
  top: "predictions"
}

shaomang commented 5 years ago

It seems that connecting a depthwise conv layer to a normal conv layer causes the problem. It might be better to restructure the prototxt first and check it with the tool, then train again.
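To find the spots to restructure, one could scan the prototxt for depthwise convolutions (group equal to num_output) whose output feeds a normal convolution, the pattern diagnosed above. A rough sketch follows; the file name model.prototxt is an assumption, and the regex-based field extraction is a heuristic for illustration, not a real prototxt parser:

import re

text = open("model.prototxt").read()  # hypothetical file name

# Split on 'layer {' at the start of a line; each chunk then describes one layer.
chunks = re.split(r"(?m)^layer\s*\{", text)[1:]

def field(chunk, key):
    m = re.search(rf'{key}:\s*"?([\w.]+)"?', chunk)
    return m.group(1) if m else None

depthwise_tops = set()
for c in chunks:
    name, top, bottom = field(c, "name"), field(c, "top"), field(c, "bottom")
    group, num_output = field(c, "group"), field(c, "num_output")
    # In Caffe, a depthwise conv is a Convolution with group == num_output > 1.
    if group and num_output and group == num_output and group != "1":
        depthwise_tops.add(top)
    elif group == "1" and bottom in depthwise_tops:
        # A normal conv fed directly by a depthwise conv: the pattern
        # that trips the depthconv/conv fusion in the converter.
        print(f"depthwise -> normal conv: {bottom} feeds {name}")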

*If you delete the group: 8 in the depthwise layer (making it a normal conv layer), that layer will pass the check.
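Note that deleting group: 8 alone changes the layer's weight shape from (8, 1, 3, 3) to (8, 8, 3, 3), so the existing caffemodel would no longer match unless the model is retrained. If retraining is undesirable, a depthwise conv can be rewritten as an exactly equivalent normal conv by zero-padding its weights. A minimal numpy sketch of that identity, using Caffe's (out_channels, in_channels, kh, kw) weight layout; the function name is hypothetical:

import numpy as np

def depthwise_to_dense(dw):
    # Expand depthwise conv weights (C, 1, kh, kw) into equivalent normal
    # conv weights (C, C, kh, kw): output channel i only sees input
    # channel i, all other input channels get zero weights.
    c, _, kh, kw = dw.shape
    dense = np.zeros((c, c, kh, kw), dtype=dw.dtype)
    for i in range(c):
        dense[i, i] = dw[i, 0]
    return dense

# For depthwise_conv2d_1 above: 8 channels, 3x3 kernel.
dw = np.random.randn(8, 1, 3, 3).astype(np.float32)
dense = depthwise_to_dense(dw)
assert dense.shape == (8, 8, 3, 3)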