xiangweizeng / darknet2ncnn

Darknet2ncnn converts the darknet model to the ncnn model
Do What The F*ck You Want To Public License

Using darknet2ncnn with AlexeyAB/darknet #39

Open BinhBa opened 4 years ago

BinhBa commented 4 years ago

Hi @xiangweizeng,

I would like to ask for your help with using darknet2ncnn on the darknet fork from AlexeyAB. I replaced the darknet/ folder with the AlexeyAB version and changed a few lines in the Makefile so the source compiles. My problem is that when I use darknet2ncnn to convert a typical network, yolov3-tiny-prn (cfg, weights), the conversion runs without any error. But when I use convert_verify to check the network, the result is as follows:

Start run all operation:
conv_0 : weights diff : 0.000000
conv_0_batch_norm : slope diff : 0.000000
conv_0_batch_norm : mean diff : 0.000000
conv_0_batch_norm : variance diff : 0.000000
conv_0_batch_norm : biases diff : 0.000000
Layer: 0, Blob : conv_0_activation, Total Diff 10.852427 Avg Diff: 0.000004
Layer: 1, Blob : maxpool_1, Total Diff 2.860126 Avg Diff: 0.000004
conv_2 : weights diff : 0.000000
conv_2_batch_norm : slope diff : 0.000000
conv_2_batch_norm : mean diff : 0.000000
conv_2_batch_norm : variance diff : 0.000000
conv_2_batch_norm : biases diff : 0.000000
Layer: 2, Blob : conv_2_activation, Total Diff 4.563586 Avg Diff: 0.000003
Layer: 3, Blob : maxpool_3, Total Diff 1.283248 Avg Diff: 0.000004
conv_4 : weights diff : 0.000000
conv_4_batch_norm : slope diff : 0.000000
conv_4_batch_norm : mean diff : 0.000000
conv_4_batch_norm : variance diff : 0.000000
conv_4_batch_norm : biases diff : 0.000000
Layer: 4, Blob : conv_4_activation, Total Diff 2.286713 Avg Diff: 0.000003
Layer: 5, Blob : maxpool_5, Total Diff 0.684587 Avg Diff: 0.000004
conv_6 : weights diff : 0.000000
conv_6_batch_norm : slope diff : 0.000000
conv_6_batch_norm : mean diff : 0.000000
conv_6_batch_norm : variance diff : 0.000000
conv_6_batch_norm : biases diff : 0.000000
Layer: 6, Blob : conv_6_activation, Total Diff 1.056885 Avg Diff: 0.000003
Layer: 7, Blob : maxpool_7, Total Diff 0.327580 Avg Diff: 0.000004
conv_8 : weights diff : 0.000000
conv_8_batch_norm : slope diff : 0.000000
conv_8_batch_norm : mean diff : 0.000000
conv_8_batch_norm : variance diff : 0.000000
conv_8_batch_norm : biases diff : 0.000000
Layer: 8, Blob : conv_8_activation, Total Diff 0.244931 Avg Diff: 0.000001
Layer: 9, Blob : maxpool_9, Total Diff 0.073587 Avg Diff: 0.000002
conv_10 : weights diff : 0.000000
conv_10_batch_norm : slope diff : 0.000000
conv_10_batch_norm : mean diff : 0.000000
conv_10_batch_norm : variance diff : 0.000000
conv_10_batch_norm : biases diff : 0.000000
Layer: 10, Blob : conv_10_activation, Total Diff 0.160212 Avg Diff: 0.000002
Layer: 11, Blob : maxpool_11, Total Diff 0.142812 Avg Diff: 0.000002
conv_12 : weights diff : 0.000000
conv_12_batch_norm : slope diff : 0.000000
conv_12_batch_norm : mean diff : 0.000000
conv_12_batch_norm : variance diff : 0.000000
conv_12_batch_norm : biases diff : 0.000000
Layer: 12, Blob : conv_12_activation, Total Diff 0.523872 Avg Diff: 0.000006
Layer: 13, Blob : shortcut_13_activation, Total Diff 17725.605469 Avg Diff: 0.204854
conv_14 : weights diff : 0.000000
conv_14_batch_norm : slope diff : 0.000000
conv_14_batch_norm : mean diff : 0.000000
conv_14_batch_norm : variance diff : 0.000000
conv_14_batch_norm : biases diff : 0.000000
Layer: 14, Blob : conv_14_activation, Total Diff 4164.033691 Avg Diff: 0.096247
conv_15 : weights diff : 0.000000
conv_15_batch_norm : slope diff : 0.000000
conv_15_batch_norm : mean diff : 0.000000
conv_15_batch_norm : variance diff : 0.000000
conv_15_batch_norm : biases diff : 0.000000
Layer: 15, Blob : conv_15_activation, Total Diff 24017.382812 Avg Diff: 0.555135
Layer: 16, Blob : shortcut_16_activation, Total Diff 28946.251953 Avg Diff: 0.669061
conv_17 : weights diff : 0.000000
conv_17 : biases diff : 0.000000
Layer: 17, Blob : conv_17_activation, Total Diff 296630.531250 Avg Diff: 6.883177
Layer: 19, Blob : route_19, Total Diff 24017.382812 Avg Diff: 0.555135
conv_20 : weights diff : 0.000000
conv_20_batch_norm : slope diff : 0.000000
conv_20_batch_norm : mean diff : 0.000000
conv_20_batch_norm : variance diff : 0.000000
conv_20_batch_norm : biases diff : 0.000000
Layer: 20, Blob : conv_20_activation, Total Diff 8507.630859 Avg Diff: 0.393289
Layer: 21, Blob : upsample_21, Total Diff 34030.519531 Avg Diff: 0.393289
Layer: 22, Blob : shortcut_22_activation, Total Diff 34487.667969 Avg Diff: 0.398572
conv_23 : weights diff : 0.000000
conv_23_batch_norm : slope diff : 0.000000
conv_23_batch_norm : mean diff : 0.000000
conv_23_batch_norm : variance diff : 0.000000
conv_23_batch_norm : biases diff : 0.000000
Layer: 23, Blob : conv_23_activation, Total Diff 84448.117188 Avg Diff: 0.975963
Layer: 24, Blob : shortcut_24_activation, Total Diff 92290.070312 Avg Diff: 1.066592
Layer: 25, Blob : shortcut_25_activation, Total Diff 93145.468750 Avg Diff: 1.076478
conv_26 : weights diff : 0.000000
conv_26 : biases diff : 0.000000
Layer: 26, Blob : conv_26_activation, Total Diff 1472609.375000 Avg Diff: 8.542809

As you can see, the Total Diff becomes very large from the shortcut layers onward, and the results when running inference on the converted model are very bad. I wonder what the problem is here and what I can do? A sketch of how I run inference on the converted model is below.
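For reference, this is roughly how the converted model gets loaded and run with the ncnn C++ API. The file names, the input blob name "data", the output blob name "output", the 416x416 input size, and the 1/255 normalization are assumptions for illustration, not confirmed details from my setup:

#include <cstdio>
#include <opencv2/opencv.hpp>
#include "net.h" // ncnn

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s [image]\n", argv[0]);
        return -1;
    }

    // Load the param/bin files produced by darknet2ncnn
    // (file names are assumptions; use whatever the converter wrote out).
    ncnn::Net net;
    net.load_param("yolov3-tiny-prn.param");
    net.load_model("yolov3-tiny-prn.bin");

    // Read the image and resize it to the network input size
    // (416x416 is assumed from the usual yolov3-tiny cfg).
    cv::Mat bgr = cv::imread(argv[1]);
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        bgr.data, ncnn::Mat::PIXEL_BGR2RGB, bgr.cols, bgr.rows, 416, 416);

    // Darknet feeds pixels scaled to [0,1] with zero mean.
    const float norm_vals[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
    in.substract_mean_normalize(0, norm_vals);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);      // input blob name is an assumption

    ncnn::Mat out;
    ex.extract("output", out); // output blob name is an assumption

    // Only the raw output shape is dumped here; how the detections are laid
    // out depends on how the converter maps the yolo layer.
    printf("output blob: w=%d h=%d c=%d\n", out.w, out.h, out.c);
    return 0;
}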

Thank you,