ceccocats / tkDNN

Deep neural network library and toolkit to do high performance inference on NVIDIA Jetson platforms
GNU General Public License v2.0

I want to convert a darknet model trained on 4 channels to an rt file #178

Closed Hiroaki-K4 closed 3 years ago

Hiroaki-K4 commented 3 years ago

Hi. I want to convert a darknet model trained on 4 channels to an rt file. I modified darknet internally to train the model on 4 channels (RGB plus depth). I then tried exporting the model with the linked fork (https://git.hipert.unimore.it/fgatti/darknet), but a segmentation fault occurred. Where should I change the code? The terminal output is below.

GPU isn't used 
 OpenCV isn't used - data augmentation will be slow 
mini_batch = 1, batch = 64, time_steps = 1, train = 1 
   layer   filters  size/strd(dil)      input                output
   0 conv     32       3 x 3/ 1    512 x 512 x   4 ->  512 x 512 x  32 0.604 BF
   1 conv     64       3 x 3/ 2    512 x 512 x  32 ->  256 x 256 x  64 2.416 BF
   2 conv     64       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  64 0.537 BF
   3 route  1                                  ->  256 x 256 x  64 
   4 conv     64       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  64 0.537 BF
   5 conv     32       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  32 0.268 BF
   6 conv     64       3 x 3/ 1    256 x 256 x  32 ->  256 x 256 x  64 2.416 BF
   7 Shortcut Layer: 4,  wt = 0, wn = 0, outputs: 256 x 256 x  64 0.004 BF
   8 conv     64       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  64 0.537 BF
   9 route  8 2                                ->  256 x 256 x 128 
  10 conv     64       1 x 1/ 1    256 x 256 x 128 ->  256 x 256 x  64 1.074 BF
  11 conv    128       3 x 3/ 2    256 x 256 x  64 ->  128 x 128 x 128 2.416 BF
  12 conv     64       1 x 1/ 1    128 x 128 x 128 ->  128 x 128 x  64 0.268 BF
  13 route  11                                 ->  128 x 128 x 128 
  14 conv     64       1 x 1/ 1    128 x 128 x 128 ->  128 x 128 x  64 0.268 BF
  15 conv     64       1 x 1/ 1    128 x 128 x  64 ->  128 x 128 x  64 0.134 BF
  16 conv     64       3 x 3/ 1    128 x 128 x  64 ->  128 x 128 x  64 1.208 BF
  17 Shortcut Layer: 14,  wt = 0, wn = 0, outputs: 128 x 128 x  64 0.001 BF
  18 conv     64       1 x 1/ 1    128 x 128 x  64 ->  128 x 128 x  64 0.134 BF
  19 conv     64       3 x 3/ 1    128 x 128 x  64 ->  128 x 128 x  64 1.208 BF
  20 Shortcut Layer: 17,  wt = 0, wn = 0, outputs: 128 x 128 x  64 0.001 BF
  21 conv     64       1 x 1/ 1    128 x 128 x  64 ->  128 x 128 x  64 0.134 BF
  22 route  21 12                              ->  128 x 128 x 128 
  23 conv    128       1 x 1/ 1    128 x 128 x 128 ->  128 x 128 x 128 0.537 BF
  24 conv    256       3 x 3/ 2    128 x 128 x 128 ->   64 x  64 x 256 2.416 BF
  25 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
  26 route  24                                 ->   64 x  64 x 256 
  27 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
  28 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  29 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  30 Shortcut Layer: 27,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  31 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  32 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  33 Shortcut Layer: 30,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  34 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  35 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  36 Shortcut Layer: 33,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  37 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  38 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  39 Shortcut Layer: 36,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  40 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  41 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  42 Shortcut Layer: 39,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  43 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  44 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  45 Shortcut Layer: 42,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  46 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  47 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  48 Shortcut Layer: 45,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  49 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  50 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  51 Shortcut Layer: 48,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  52 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  53 route  52 25                              ->   64 x  64 x 256 
  54 conv    256       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 256 0.537 BF
  55 conv    512       3 x 3/ 2     64 x  64 x 256 ->   32 x  32 x 512 2.416 BF
  56 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
  57 route  55                                 ->   32 x  32 x 512 
  58 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
  59 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  60 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  61 Shortcut Layer: 58,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  62 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  63 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  64 Shortcut Layer: 61,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  65 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  66 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  67 Shortcut Layer: 64,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  68 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  69 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  70 Shortcut Layer: 67,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  71 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  72 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  73 Shortcut Layer: 70,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  74 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  75 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  76 Shortcut Layer: 73,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  77 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  78 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  79 Shortcut Layer: 76,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  80 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  81 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  82 Shortcut Layer: 79,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  83 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  84 route  83 56                              ->   32 x  32 x 512 
  85 conv    512       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 512 0.537 BF
  86 conv   1024       3 x 3/ 2     32 x  32 x 512 ->   16 x  16 x1024 2.416 BF
  87 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
  88 route  86                                 ->   16 x  16 x1024 
  89 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
  90 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
  91 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
  92 Shortcut Layer: 89,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
  93 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
  94 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
  95 Shortcut Layer: 92,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
  96 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
  97 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
  98 Shortcut Layer: 95,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
  99 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
 100 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
 101 Shortcut Layer: 98,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
 102 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
 103 route  102 87                             ->   16 x  16 x1024 
 104 conv   1024       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x1024 0.537 BF
 105 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 106 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 107 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 108 max                5x 5/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.003 BF
 109 route  107                                    ->   16 x  16 x 512 
 110 max                9x 9/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.011 BF
 111 route  107                                    ->   16 x  16 x 512 
 112 max               13x13/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.022 BF
 113 route  112 110 108 107                        ->   16 x  16 x2048 
 114 conv    512       1 x 1/ 1     16 x  16 x2048 ->   16 x  16 x 512 0.537 BF
 115 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 116 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 117 conv    256       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 256 0.067 BF
 118 upsample                 2x    16 x  16 x 256 ->   32 x  32 x 256
 119 route  85                                 ->   32 x  32 x 512 
 120 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 121 route  120 118                                ->   32 x  32 x 512 
 122 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 123 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 124 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 125 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 126 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 127 conv    128       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 128 0.067 BF
 128 upsample                 2x    32 x  32 x 128 ->   64 x  64 x 128
 129 route  54                                 ->   64 x  64 x 256 
 130 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 131 route  130 128                                ->   64 x  64 x 256 
 132 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 133 conv    256       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 256 2.416 BF
 134 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 135 conv    256       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 256 2.416 BF
 136 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 137 conv    256       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 256 2.416 BF
 138 conv     54       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x  54 0.113 BF
 139 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.20
nms_kind: greedynms (1), beta = 0.600000 
 140 route  136                                    ->   64 x  64 x 128 
 141 conv    256       3 x 3/ 2     64 x  64 x 128 ->   32 x  32 x 256 0.604 BF
 142 route  141 126                                ->   32 x  32 x 512 
 143 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 144 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 145 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 146 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 147 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 148 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 149 conv     54       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x  54 0.057 BF
 150 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.10
nms_kind: greedynms (1), beta = 0.600000 
 151 route  147                                    ->   32 x  32 x 256 
 152 conv    512       3 x 3/ 2     32 x  32 x 256 ->   16 x  16 x 512 0.604 BF
 153 route  152 116                                ->   16 x  16 x1024 
 154 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 155 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 156 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 157 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 158 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 159 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 160 conv     54       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x  54 0.028 BF
 161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000 
Total BFLOPS 90.509 
avg_outputs = 744302 
Loading weights from /home/nvidia/yolov4-custom_last.weights...
 seen 64, trained: 51 K-images (0 Kilo-batches_64) 
Done! Loaded 162 layers from weights-file 
n: 0, type 0
Convolutional
weights: 1152, biases: 32, batch_normalize: 1, groups: 1
write binary layers/c0.bin

n: 1, type 0
Convolutional
weights: 18432, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c1.bin

n: 2, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c2.bin

n: 3, type 9
export ROUTE

n: 4, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c4.bin

n: 5, type 0
Convolutional
weights: 2048, biases: 32, batch_normalize: 1, groups: 1
write binary layers/c5.bin

n: 6, type 0
Convolutional
weights: 18432, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c6.bin

n: 7, type 14
export SHORTCUT

n: 8, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c8.bin

n: 9, type 9
export ROUTE

n: 10, type 0
Convolutional
weights: 8192, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c10.bin

n: 11, type 0
Convolutional
weights: 73728, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c11.bin

n: 12, type 0
Convolutional
weights: 8192, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c12.bin

n: 13, type 9
export ROUTE

n: 14, type 0
Convolutional
weights: 8192, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c14.bin

n: 15, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c15.bin

n: 16, type 0
Convolutional
weights: 36864, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c16.bin

n: 17, type 14
export SHORTCUT

n: 18, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c18.bin

n: 19, type 0
Convolutional
weights: 36864, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c19.bin

n: 20, type 14
export SHORTCUT

n: 21, type 0
Convolutional
weights: 4096, biases: 64, batch_normalize: 1, groups: 1
write binary layers/c21.bin

n: 22, type 9
export ROUTE

n: 23, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c23.bin

n: 24, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c24.bin

n: 25, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c25.bin

n: 26, type 9
export ROUTE

n: 27, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c27.bin

n: 28, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c28.bin

n: 29, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c29.bin

n: 30, type 14
export SHORTCUT

n: 31, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c31.bin

n: 32, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c32.bin

n: 33, type 14
export SHORTCUT

n: 34, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c34.bin

n: 35, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c35.bin

n: 36, type 14
export SHORTCUT

n: 37, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c37.bin

n: 38, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c38.bin

n: 39, type 14
export SHORTCUT

n: 40, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c40.bin

n: 41, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c41.bin

n: 42, type 14
export SHORTCUT

n: 43, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c43.bin

n: 44, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c44.bin

n: 45, type 14
export SHORTCUT

n: 46, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c46.bin

n: 47, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c47.bin

n: 48, type 14
export SHORTCUT

n: 49, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c49.bin

n: 50, type 0
Convolutional
weights: 147456, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c50.bin

n: 51, type 14
export SHORTCUT

n: 52, type 0
Convolutional
weights: 16384, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c52.bin

n: 53, type 9
export ROUTE

n: 54, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c54.bin

n: 55, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c55.bin

n: 56, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c56.bin

n: 57, type 9
export ROUTE

n: 58, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c58.bin

n: 59, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c59.bin

n: 60, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c60.bin

n: 61, type 14
export SHORTCUT

n: 62, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c62.bin

n: 63, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c63.bin

n: 64, type 14
export SHORTCUT

n: 65, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c65.bin

n: 66, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c66.bin

n: 67, type 14
export SHORTCUT

n: 68, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c68.bin

n: 69, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c69.bin

n: 70, type 14
export SHORTCUT

n: 71, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c71.bin

n: 72, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c72.bin

n: 73, type 14
export SHORTCUT

n: 74, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c74.bin

n: 75, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c75.bin

n: 76, type 14
export SHORTCUT

n: 77, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c77.bin

n: 78, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c78.bin

n: 79, type 14
export SHORTCUT

n: 80, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c80.bin

n: 81, type 0
Convolutional
weights: 589824, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c81.bin

n: 82, type 14
export SHORTCUT

n: 83, type 0
Convolutional
weights: 65536, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c83.bin

n: 84, type 9
export ROUTE

n: 85, type 0
Convolutional
weights: 262144, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c85.bin

n: 86, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c86.bin

n: 87, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c87.bin

n: 88, type 9
export ROUTE

n: 89, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c89.bin

n: 90, type 0
Convolutional
weights: 262144, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c90.bin

n: 91, type 0
Convolutional
weights: 2359296, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c91.bin

n: 92, type 14
export SHORTCUT

n: 93, type 0
Convolutional
weights: 262144, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c93.bin

n: 94, type 0
Convolutional
weights: 2359296, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c94.bin

n: 95, type 14
export SHORTCUT

n: 96, type 0
Convolutional
weights: 262144, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c96.bin

n: 97, type 0
Convolutional
weights: 2359296, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c97.bin

n: 98, type 14
export SHORTCUT

n: 99, type 0
Convolutional
weights: 262144, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c99.bin

n: 100, type 0
Convolutional
weights: 2359296, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c100.bin

n: 101, type 14
export SHORTCUT

n: 102, type 0
Convolutional
weights: 262144, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c102.bin

n: 103, type 9
export ROUTE

n: 104, type 0
Convolutional
weights: 1048576, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c104.bin

n: 105, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c105.bin

n: 106, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c106.bin

n: 107, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c107.bin

n: 108, type 3
export MAXPOOL

n: 109, type 9
export ROUTE

n: 110, type 3
export MAXPOOL

n: 111, type 9
export ROUTE

n: 112, type 3
export MAXPOOL

n: 113, type 9
export ROUTE

n: 114, type 0
Convolutional
weights: 1048576, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c114.bin

n: 115, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c115.bin

n: 116, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c116.bin

n: 117, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c117.bin

n: 118, type 32
export UPSAMPLE

n: 119, type 9
export ROUTE

n: 120, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c120.bin

n: 121, type 9
export ROUTE

n: 122, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c122.bin

n: 123, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c123.bin

n: 124, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c124.bin

n: 125, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c125.bin

n: 126, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c126.bin

n: 127, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c127.bin

n: 128, type 32
export UPSAMPLE

n: 129, type 9
export ROUTE

n: 130, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c130.bin

n: 131, type 9
export ROUTE

n: 132, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c132.bin

n: 133, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c133.bin

n: 134, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c134.bin

n: 135, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c135.bin

n: 136, type 0
Convolutional
weights: 32768, biases: 128, batch_normalize: 1, groups: 1
write binary layers/c136.bin

n: 137, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c137.bin

n: 138, type 0
Convolutional
weights: 13824, biases: 54, batch_normalize: 0, groups: 1
write binary layers/c138.bin

n: 139, type 27
export YOLO
mask: 3
biases: 18
mask 0.000000
mask 1.000000
mask 2.000000
anchor 12.000000
anchor 16.000000
anchor 19.000000
anchor 36.000000
anchor 40.000000
anchor 28.000000
anchor 36.000000
anchor 75.000000
anchor 76.000000
anchor 55.000000
anchor 72.000000
anchor 146.000000
anchor 142.000000
anchor 110.000000
anchor 192.000000
anchor 243.000000
anchor 459.000000
anchor 401.000000
write binary layers/g139.bin

n: 140, type 9
export ROUTE

n: 141, type 0
Convolutional
weights: 294912, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c141.bin

n: 142, type 9
export ROUTE

n: 143, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c143.bin

n: 144, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c144.bin

n: 145, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c145.bin

n: 146, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c146.bin

n: 147, type 0
Convolutional
weights: 131072, biases: 256, batch_normalize: 1, groups: 1
write binary layers/c147.bin

n: 148, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c148.bin

n: 149, type 0
Convolutional
weights: 27648, biases: 54, batch_normalize: 0, groups: 1
write binary layers/c149.bin

n: 150, type 27
export YOLO
mask: 3
biases: 18
mask 3.000000
mask 4.000000
mask 5.000000
anchor 12.000000
anchor 16.000000
anchor 19.000000
anchor 36.000000
anchor 40.000000
anchor 28.000000
anchor 36.000000
anchor 75.000000
anchor 76.000000
anchor 55.000000
anchor 72.000000
anchor 146.000000
anchor 142.000000
anchor 110.000000
anchor 192.000000
anchor 243.000000
anchor 459.000000
anchor 401.000000
write binary layers/g150.bin

n: 151, type 9
export ROUTE

n: 152, type 0
Convolutional
weights: 1179648, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c152.bin

n: 153, type 9
export ROUTE

n: 154, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c154.bin

n: 155, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c155.bin

n: 156, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c156.bin

n: 157, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c157.bin

n: 158, type 0
Convolutional
weights: 524288, biases: 512, batch_normalize: 1, groups: 1
write binary layers/c158.bin

n: 159, type 0
Convolutional
weights: 4718592, biases: 1024, batch_normalize: 1, groups: 1
write binary layers/c159.bin

n: 160, type 0
Convolutional
weights: 55296, biases: 54, batch_normalize: 0, groups: 1
write binary layers/c160.bin

n: 161, type 27
export YOLO
mask: 3
biases: 18
mask 6.000000
mask 7.000000
mask 8.000000
anchor 12.000000
anchor 16.000000
anchor 19.000000
anchor 36.000000
anchor 40.000000
anchor 28.000000
anchor 36.000000
anchor 75.000000
anchor 76.000000
anchor 55.000000
anchor 72.000000
anchor 146.000000
anchor 142.000000
anchor 110.000000
anchor 192.000000
anchor 243.000000
anchor 459.000000
anchor 401.000000
write binary layers/g161.bin

network input size: 1048576
Segmentation fault (core dumped)
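
For reference, the 4-channel input shown above is configured in the [net] section of the darknet .cfg file. A minimal sketch, assuming an otherwise stock yolov4 config with the values from this log:

[net]
batch=64
subdivisions=64
width=512
height=512
channels=4   # RGB + depth; the stock yolov4 config uses channels=3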

Thanks!

Hiroaki-K4 commented 3 years ago

I solved this problem by commenting out network_predict(net, X) in darknet.c. However, an error occurred while running tkDNN's ./test_yolo4. The error message is as follows.

Not supported field: batch=64
Not supported field: subdivisions=64
Not supported field: momentum=0.949
Not supported field: decay=0.0005
Not supported field: angle=0
Not supported field: saturation = 1.5
Not supported field: exposure = 1.5
Not supported field: hue=.1
Not supported field: learning_rate=0.001
Not supported field: burn_in=1000
Not supported field: max_batches = 500500
Not supported field: policy=steps
Not supported field: steps=400000,450000
Not supported field: scales=.1,.1
Not supported field: mosaic=1
New NETWORK (tkDNN v0.5, CUDNN v7.603)
Reading weights: I=4 O=32 KERNEL=3x3x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=32 KERNEL=1x1x1
Reading weights: I=32 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=64 O=128 KERNEL=3x3x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=128 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=64 O=64 KERNEL=3x3x1
Reading weights: I=64 O=64 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=128 O=128 KERNEL=3x3x1
Reading weights: I=128 O=128 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=256 O=256 KERNEL=3x3x1
Reading weights: I=256 O=256 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Reading weights: I=512 O=512 KERNEL=3x3x1
Reading weights: I=512 O=512 KERNEL=1x1x1
Not supported field: stopbackward=800
Reading weights: I=1024 O=1024 KERNEL=1x1x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=2048 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=128 KERNEL=1x1x1
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=256 O=54 KERNEL=1x1x1
Not supported field: anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
Not supported field: jitter=.3
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: iou_thresh=0.213
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: max_delta=5
Reading weights: I=128 O=256 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=256 KERNEL=1x1x1
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=512 O=54 KERNEL=1x1x1
Not supported field: anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
Not supported field: jitter=.3
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: iou_thresh=0.213
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: max_delta=5
Reading weights: I=256 O=512 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=512 KERNEL=1x1x1
Reading weights: I=512 O=1024 KERNEL=3x3x1
Reading weights: I=1024 O=54 KERNEL=1x1x1
Not supported field: anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
Not supported field: jitter=.3
Not supported field: ignore_thresh = .7
Not supported field: truth_thresh = 1
Not supported field: random=1
Not supported field: iou_thresh=0.213
Not supported field: cls_normalizer=1.0
Not supported field: iou_normalizer=0.07
Not supported field: iou_loss=ciou
Not supported field: max_delta=5

====================== NETWORK MODEL ======================
N.  Layer type       input (H*W,CH)        output (H*W,CH) 
  0 Conv2d           512 x  512,    4  ->  512 x  512,   32
  1 ActivationMish   512 x  512,   32  ->  512 x  512,   32
  2 Conv2d           512 x  512,   32  ->  256 x  256,   64
  3 ActivationMish   256 x  256,   64  ->  256 x  256,   64
  4 Conv2d           256 x  256,   64  ->  256 x  256,   64
  5 ActivationMish   256 x  256,   64  ->  256 x  256,   64
  6 Route            256 x  256,   64  ->  256 x  256,   64
  7 Conv2d           256 x  256,   64  ->  256 x  256,   64
  8 ActivationMish   256 x  256,   64  ->  256 x  256,   64
  9 Conv2d           256 x  256,   64  ->  256 x  256,   32
 10 ActivationMish   256 x  256,   32  ->  256 x  256,   32
 11 Conv2d           256 x  256,   32  ->  256 x  256,   64
 12 ActivationMish   256 x  256,   64  ->  256 x  256,   64
 13 Shortcut         256 x  256,   64  ->  256 x  256,   64
 14 Conv2d           256 x  256,   64  ->  256 x  256,   64
 15 ActivationMish   256 x  256,   64  ->  256 x  256,   64
 16 Route            256 x  256,  128  ->  256 x  256,  128
 17 Conv2d           256 x  256,  128  ->  256 x  256,   64
 18 ActivationMish   256 x  256,   64  ->  256 x  256,   64
 19 Conv2d           256 x  256,   64  ->  128 x  128,  128
 20 ActivationMish   128 x  128,  128  ->  128 x  128,  128
 21 Conv2d           128 x  128,  128  ->  128 x  128,   64
 22 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 23 Route            128 x  128,  128  ->  128 x  128,  128
 24 Conv2d           128 x  128,  128  ->  128 x  128,   64
 25 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 26 Conv2d           128 x  128,   64  ->  128 x  128,   64
 27 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 28 Conv2d           128 x  128,   64  ->  128 x  128,   64
 29 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 30 Shortcut         128 x  128,   64  ->  128 x  128,   64
 31 Conv2d           128 x  128,   64  ->  128 x  128,   64
 32 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 33 Conv2d           128 x  128,   64  ->  128 x  128,   64
 34 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 35 Shortcut         128 x  128,   64  ->  128 x  128,   64
 36 Conv2d           128 x  128,   64  ->  128 x  128,   64
 37 ActivationMish   128 x  128,   64  ->  128 x  128,   64
 38 Route            128 x  128,  128  ->  128 x  128,  128
 39 Conv2d           128 x  128,  128  ->  128 x  128,  128
 40 ActivationMish   128 x  128,  128  ->  128 x  128,  128
 41 Conv2d           128 x  128,  128  ->   64 x   64,  256
 42 ActivationMish    64 x   64,  256  ->   64 x   64,  256
 43 Conv2d            64 x   64,  256  ->   64 x   64,  128
 44 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 45 Route             64 x   64,  256  ->   64 x   64,  256
 46 Conv2d            64 x   64,  256  ->   64 x   64,  128
 47 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 48 Conv2d            64 x   64,  128  ->   64 x   64,  128
 49 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 50 Conv2d            64 x   64,  128  ->   64 x   64,  128
 51 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 52 Shortcut          64 x   64,  128  ->   64 x   64,  128
 53 Conv2d            64 x   64,  128  ->   64 x   64,  128
 54 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 55 Conv2d            64 x   64,  128  ->   64 x   64,  128
 56 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 57 Shortcut          64 x   64,  128  ->   64 x   64,  128
 58 Conv2d            64 x   64,  128  ->   64 x   64,  128
 59 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 60 Conv2d            64 x   64,  128  ->   64 x   64,  128
 61 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 62 Shortcut          64 x   64,  128  ->   64 x   64,  128
 63 Conv2d            64 x   64,  128  ->   64 x   64,  128
 64 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 65 Conv2d            64 x   64,  128  ->   64 x   64,  128
 66 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 67 Shortcut          64 x   64,  128  ->   64 x   64,  128
 68 Conv2d            64 x   64,  128  ->   64 x   64,  128
 69 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 70 Conv2d            64 x   64,  128  ->   64 x   64,  128
 71 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 72 Shortcut          64 x   64,  128  ->   64 x   64,  128
 73 Conv2d            64 x   64,  128  ->   64 x   64,  128
 74 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 75 Conv2d            64 x   64,  128  ->   64 x   64,  128
 76 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 77 Shortcut          64 x   64,  128  ->   64 x   64,  128
 78 Conv2d            64 x   64,  128  ->   64 x   64,  128
 79 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 80 Conv2d            64 x   64,  128  ->   64 x   64,  128
 81 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 82 Shortcut          64 x   64,  128  ->   64 x   64,  128
 83 Conv2d            64 x   64,  128  ->   64 x   64,  128
 84 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 85 Conv2d            64 x   64,  128  ->   64 x   64,  128
 86 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 87 Shortcut          64 x   64,  128  ->   64 x   64,  128
 88 Conv2d            64 x   64,  128  ->   64 x   64,  128
 89 ActivationMish    64 x   64,  128  ->   64 x   64,  128
 90 Route             64 x   64,  256  ->   64 x   64,  256
 91 Conv2d            64 x   64,  256  ->   64 x   64,  256
 92 ActivationMish    64 x   64,  256  ->   64 x   64,  256
 93 Conv2d            64 x   64,  256  ->   32 x   32,  512
 94 ActivationMish    32 x   32,  512  ->   32 x   32,  512
 95 Conv2d            32 x   32,  512  ->   32 x   32,  256
 96 ActivationMish    32 x   32,  256  ->   32 x   32,  256
 97 Route             32 x   32,  512  ->   32 x   32,  512
 98 Conv2d            32 x   32,  512  ->   32 x   32,  256
 99 ActivationMish    32 x   32,  256  ->   32 x   32,  256
100 Conv2d            32 x   32,  256  ->   32 x   32,  256
101 ActivationMish    32 x   32,  256  ->   32 x   32,  256
102 Conv2d            32 x   32,  256  ->   32 x   32,  256
103 ActivationMish    32 x   32,  256  ->   32 x   32,  256
104 Shortcut          32 x   32,  256  ->   32 x   32,  256
105 Conv2d            32 x   32,  256  ->   32 x   32,  256
106 ActivationMish    32 x   32,  256  ->   32 x   32,  256
107 Conv2d            32 x   32,  256  ->   32 x   32,  256
108 ActivationMish    32 x   32,  256  ->   32 x   32,  256
109 Shortcut          32 x   32,  256  ->   32 x   32,  256
110 Conv2d            32 x   32,  256  ->   32 x   32,  256
111 ActivationMish    32 x   32,  256  ->   32 x   32,  256
112 Conv2d            32 x   32,  256  ->   32 x   32,  256
113 ActivationMish    32 x   32,  256  ->   32 x   32,  256
114 Shortcut          32 x   32,  256  ->   32 x   32,  256
115 Conv2d            32 x   32,  256  ->   32 x   32,  256
116 ActivationMish    32 x   32,  256  ->   32 x   32,  256
117 Conv2d            32 x   32,  256  ->   32 x   32,  256
118 ActivationMish    32 x   32,  256  ->   32 x   32,  256
119 Shortcut          32 x   32,  256  ->   32 x   32,  256
120 Conv2d            32 x   32,  256  ->   32 x   32,  256
121 ActivationMish    32 x   32,  256  ->   32 x   32,  256
122 Conv2d            32 x   32,  256  ->   32 x   32,  256
123 ActivationMish    32 x   32,  256  ->   32 x   32,  256
124 Shortcut          32 x   32,  256  ->   32 x   32,  256
125 Conv2d            32 x   32,  256  ->   32 x   32,  256
126 ActivationMish    32 x   32,  256  ->   32 x   32,  256
127 Conv2d            32 x   32,  256  ->   32 x   32,  256
128 ActivationMish    32 x   32,  256  ->   32 x   32,  256
129 Shortcut          32 x   32,  256  ->   32 x   32,  256
130 Conv2d            32 x   32,  256  ->   32 x   32,  256
131 ActivationMish    32 x   32,  256  ->   32 x   32,  256
132 Conv2d            32 x   32,  256  ->   32 x   32,  256
133 ActivationMish    32 x   32,  256  ->   32 x   32,  256
134 Shortcut          32 x   32,  256  ->   32 x   32,  256
135 Conv2d            32 x   32,  256  ->   32 x   32,  256
136 ActivationMish    32 x   32,  256  ->   32 x   32,  256
137 Conv2d            32 x   32,  256  ->   32 x   32,  256
138 ActivationMish    32 x   32,  256  ->   32 x   32,  256
139 Shortcut          32 x   32,  256  ->   32 x   32,  256
140 Conv2d            32 x   32,  256  ->   32 x   32,  256
141 ActivationMish    32 x   32,  256  ->   32 x   32,  256
142 Route             32 x   32,  512  ->   32 x   32,  512
143 Conv2d            32 x   32,  512  ->   32 x   32,  512
144 ActivationMish    32 x   32,  512  ->   32 x   32,  512
145 Conv2d            32 x   32,  512  ->   16 x   16, 1024
146 ActivationMish    16 x   16, 1024  ->   16 x   16, 1024
147 Conv2d            16 x   16, 1024  ->   16 x   16,  512
148 ActivationMish    16 x   16,  512  ->   16 x   16,  512
149 Route             16 x   16, 1024  ->   16 x   16, 1024
150 Conv2d            16 x   16, 1024  ->   16 x   16,  512
151 ActivationMish    16 x   16,  512  ->   16 x   16,  512
152 Conv2d            16 x   16,  512  ->   16 x   16,  512
153 ActivationMish    16 x   16,  512  ->   16 x   16,  512
154 Conv2d            16 x   16,  512  ->   16 x   16,  512
155 ActivationMish    16 x   16,  512  ->   16 x   16,  512
156 Shortcut          16 x   16,  512  ->   16 x   16,  512
157 Conv2d            16 x   16,  512  ->   16 x   16,  512
158 ActivationMish    16 x   16,  512  ->   16 x   16,  512
159 Conv2d            16 x   16,  512  ->   16 x   16,  512
160 ActivationMish    16 x   16,  512  ->   16 x   16,  512
161 Shortcut          16 x   16,  512  ->   16 x   16,  512
162 Conv2d            16 x   16,  512  ->   16 x   16,  512
163 ActivationMish    16 x   16,  512  ->   16 x   16,  512
164 Conv2d            16 x   16,  512  ->   16 x   16,  512
165 ActivationMish    16 x   16,  512  ->   16 x   16,  512
166 Shortcut          16 x   16,  512  ->   16 x   16,  512
167 Conv2d            16 x   16,  512  ->   16 x   16,  512
168 ActivationMish    16 x   16,  512  ->   16 x   16,  512
169 Conv2d            16 x   16,  512  ->   16 x   16,  512
170 ActivationMish    16 x   16,  512  ->   16 x   16,  512
171 Shortcut          16 x   16,  512  ->   16 x   16,  512
172 Conv2d            16 x   16,  512  ->   16 x   16,  512
173 ActivationMish    16 x   16,  512  ->   16 x   16,  512
174 Route             16 x   16, 1024  ->   16 x   16, 1024
175 Conv2d            16 x   16, 1024  ->   16 x   16, 1024
176 ActivationMish    16 x   16, 1024  ->   16 x   16, 1024
177 Conv2d            16 x   16, 1024  ->   16 x   16,  512
178 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
179 Conv2d            16 x   16,  512  ->   16 x   16, 1024
180 ActivationLeaky   16 x   16, 1024  ->   16 x   16, 1024
181 Conv2d            16 x   16, 1024  ->   16 x   16,  512
182 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
183 Pooling           16 x   16,  512  ->   16 x   16,  512
184 Route             16 x   16,  512  ->   16 x   16,  512
185 Pooling           16 x   16,  512  ->   16 x   16,  512
186 Route             16 x   16,  512  ->   16 x   16,  512
187 Pooling           16 x   16,  512  ->   16 x   16,  512
188 Route             16 x   16, 2048  ->   16 x   16, 2048
189 Conv2d            16 x   16, 2048  ->   16 x   16,  512
190 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
191 Conv2d            16 x   16,  512  ->   16 x   16, 1024
192 ActivationLeaky   16 x   16, 1024  ->   16 x   16, 1024
193 Conv2d            16 x   16, 1024  ->   16 x   16,  512
194 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
195 Conv2d            16 x   16,  512  ->   16 x   16,  256
196 ActivationLeaky   16 x   16,  256  ->   16 x   16,  256
197 Upsample          16 x   16,  256  ->   32 x   32,  256
198 Route             32 x   32,  512  ->   32 x   32,  512
199 Conv2d            32 x   32,  512  ->   32 x   32,  256
200 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
201 Route             32 x   32,  512  ->   32 x   32,  512
202 Conv2d            32 x   32,  512  ->   32 x   32,  256
203 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
204 Conv2d            32 x   32,  256  ->   32 x   32,  512
205 ActivationLeaky   32 x   32,  512  ->   32 x   32,  512
206 Conv2d            32 x   32,  512  ->   32 x   32,  256
207 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
208 Conv2d            32 x   32,  256  ->   32 x   32,  512
209 ActivationLeaky   32 x   32,  512  ->   32 x   32,  512
210 Conv2d            32 x   32,  512  ->   32 x   32,  256
211 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
212 Conv2d            32 x   32,  256  ->   32 x   32,  128
213 ActivationLeaky   32 x   32,  128  ->   32 x   32,  128
214 Upsample          32 x   32,  128  ->   64 x   64,  128
215 Route             64 x   64,  256  ->   64 x   64,  256
216 Conv2d            64 x   64,  256  ->   64 x   64,  128
217 ActivationLeaky   64 x   64,  128  ->   64 x   64,  128
218 Route             64 x   64,  256  ->   64 x   64,  256
219 Conv2d            64 x   64,  256  ->   64 x   64,  128
220 ActivationLeaky   64 x   64,  128  ->   64 x   64,  128
221 Conv2d            64 x   64,  128  ->   64 x   64,  256
222 ActivationLeaky   64 x   64,  256  ->   64 x   64,  256
223 Conv2d            64 x   64,  256  ->   64 x   64,  128
224 ActivationLeaky   64 x   64,  128  ->   64 x   64,  128
225 Conv2d            64 x   64,  128  ->   64 x   64,  256
226 ActivationLeaky   64 x   64,  256  ->   64 x   64,  256
227 Conv2d            64 x   64,  256  ->   64 x   64,  128
228 ActivationLeaky   64 x   64,  128  ->   64 x   64,  128
229 Conv2d            64 x   64,  128  ->   64 x   64,  256
230 ActivationLeaky   64 x   64,  256  ->   64 x   64,  256
231 Conv2d            64 x   64,  256  ->   64 x   64,   54
232 Yolo              64 x   64,   54  ->   64 x   64,   54
233 Route             64 x   64,  128  ->   64 x   64,  128
234 Conv2d            64 x   64,  128  ->   32 x   32,  256
235 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
236 Route             32 x   32,  512  ->   32 x   32,  512
237 Conv2d            32 x   32,  512  ->   32 x   32,  256
238 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
239 Conv2d            32 x   32,  256  ->   32 x   32,  512
240 ActivationLeaky   32 x   32,  512  ->   32 x   32,  512
241 Conv2d            32 x   32,  512  ->   32 x   32,  256
242 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
243 Conv2d            32 x   32,  256  ->   32 x   32,  512
244 ActivationLeaky   32 x   32,  512  ->   32 x   32,  512
245 Conv2d            32 x   32,  512  ->   32 x   32,  256
246 ActivationLeaky   32 x   32,  256  ->   32 x   32,  256
247 Conv2d            32 x   32,  256  ->   32 x   32,  512
248 ActivationLeaky   32 x   32,  512  ->   32 x   32,  512
249 Conv2d            32 x   32,  512  ->   32 x   32,   54
250 Yolo              32 x   32,   54  ->   32 x   32,   54
251 Route             32 x   32,  256  ->   32 x   32,  256
252 Conv2d            32 x   32,  256  ->   16 x   16,  512
253 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
254 Route             16 x   16, 1024  ->   16 x   16, 1024
255 Conv2d            16 x   16, 1024  ->   16 x   16,  512
256 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
257 Conv2d            16 x   16,  512  ->   16 x   16, 1024
258 ActivationLeaky   16 x   16, 1024  ->   16 x   16, 1024
259 Conv2d            16 x   16, 1024  ->   16 x   16,  512
260 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
261 Conv2d            16 x   16,  512  ->   16 x   16, 1024
262 ActivationLeaky   16 x   16, 1024  ->   16 x   16, 1024
263 Conv2d            16 x   16, 1024  ->   16 x   16,  512
264 ActivationLeaky   16 x   16,  512  ->   16 x   16,  512
265 Conv2d            16 x   16,  512  ->   16 x   16, 1024
266 ActivationLeaky   16 x   16, 1024  ->   16 x   16, 1024
267 Conv2d            16 x   16, 1024  ->   16 x   16,   54
268 Yolo              16 x   16,   54  ->   16 x   16,   54
===========================================================

GPU free memory: 4444.99 mb.
New NetworkRT (TensorRT v6.01)
Float16 support: 1
Int8 support: 1
DLAs: 2
Selected maxBatchSize: 1
GPU free memory: 4136.89 mb.
Building tensorRT cuda engine...
serialize net
create execution context
Input/outputs numbers: 4
input index = 0 -> output index = 3
Data dim: 1 4 512 512 1
Data dim: 1 54 16 16 1
RtBuffer 0   dim: Data dim: 1 4 512 512 1
RtBuffer 1   dim: Data dim: 1 54 64 64 1
RtBuffer 2   dim: Data dim: 1 54 32 32 1
RtBuffer 3   dim: Data dim: 1 54 16 16 1
Error reading file yolo4/layers/input.bin with n of float: 1048576 seek: 0 size: 4194304

/home/nvidia/tkDNN/src/utils.cpp:58
Aborting...
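
As a sanity check on the numbers: the requested 1048576 floats is 4 x 512 x 512, the full 4-channel input, and 4194304 bytes is 1048576 x 4, its size in bytes. The check at src/utils.cpp:58 verifies that yolo4/layers/input.bin holds exactly that much data, so an input.bin dumped from a 3-channel run (3 x 512 x 512 = 786432 floats) would presumably fail it.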

Thanks!

Hiroaki-K4 commented 3 years ago

I solved it by commenting out test_inference.

Hiroaki-K4 commented 3 years ago

Hi. I was able to convert the weights file to an rt file, but my 4-channel model can't detect anything. Does the conversion method support 4 channels? Thanks!
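
One thing worth checking (an assumption on my part, not something confirmed in this thread): tkDNN's demo and test code fills the input tensor from a 3-channel BGR frame, so a 4-channel network also needs the fourth (depth) channel packed into the CHW input buffer at inference time. A sketch of such preprocessing with OpenCV; the helper name, sizes, and normalization are hypothetical:

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper (not tkDNN API): pack an 8-bit RGB frame and a float
// depth map into the 4 x 512 x 512 CHW float buffer the engine expects.
std::vector<float> packInput(const cv::Mat& rgb, const cv::Mat& depth) {
    const int H = 512, W = 512;
    std::vector<float> input(4 * H * W);
    cv::Mat chans[3];
    cv::split(rgb, chans);                      // rgb: CV_8UC3, 512x512
    for (int c = 0; c < 3; ++c)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                input[(c * H + y) * W + x] = chans[c].at<uchar>(y, x) / 255.0f;
    for (int y = 0; y < H; ++y)                 // depth: CV_32FC1, 512x512, in [0,1]
        for (int x = 0; x < W; ++x)
            input[(3 * H + y) * W + x] = depth.at<float>(y, x);
    return input;
}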

mive93 commented 3 years ago

Hi @Hiroaki-K4 Do you still have the problem? tkDNN should support 4 channels.

Hiroaki-K4 commented 3 years ago

Hi @mive93 Yes, I still have the problem. Do I need to change the source code? Please help me!

poornimajd commented 3 years ago

Hi @Hiroaki-K4 How did you solve the issue?

Hiroaki-K4 commented 3 years ago

@poornimajd No, it hasn't been resolved yet.

poornimajd commented 3 years ago

@Hiroaki-K4 I think it is a dependency issue. I was able to solve it when I tried it on an RTX 2080 Ti with CUDA 10.

Hiroaki-K4 commented 3 years ago

@poornimajd Oh, really? Did you change tkDNN, or something in the darknet fork that exports the weights?

poornimajd commented 3 years ago

The problem was with the creation of the layers and debug folders. When I created these folders on the RTX 2080 and then used them on the GTX 1060 (where the error occurred earlier) with tkDNN, I was able to get the TensorRT model for FP32, FP16, and INT8 in both repos.
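
For anyone following along, the flow being described is roughly this (per the tkDNN README; file names are illustrative): export the layers and debug folders with the darknet fork on one machine, then build the engine on the target machine:

# on the machine where the export works (here the RTX 2080):
./darknet export yolov4-custom.cfg yolov4-custom_last.weights layers
# then, from tkDNN's build directory on the target machine:
export TKDNN_MODE=FP16   # or FP32 / INT8
./test_yolo4             # builds and serializes the .rt engine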

Hiroaki-K4 commented 3 years ago

@poornimajd OK. Is it possible to detect the expected objects with the created rt file?

poornimajd commented 3 years ago

Yes it is possible.

Hiroaki-K4 commented 3 years ago

@poornimajd Oh, great! Did you comment out what I mentioned earlier in this issue?

poornimajd commented 3 years ago

No.

Hiroaki-K4 commented 3 years ago

@poornimajd ok, thank you. I will try it again.

najingligong1111 commented 3 years ago

Have you solved the problem with a 4-channel input?

Hiroaki-K4 commented 3 years ago

@najingligong1111 Yes, I have solved it. You need to comment out network_predict(net, X) in darknet.c when you export the darknet weights file. You also have to comment out the line below in yolo4.cpp when you run ./test_yolo4.

int ret = testInference(input_bins, output_bins, net, netRT);
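
Put together, the two edits look roughly like this (a sketch; the exact locations in the fgatti/darknet fork and in tkDNN's yolo4.cpp may differ):

// in darknet.c (fgatti/darknet fork), weight-export path:
// network_predict(net, X);   // disabled: the dummy forward pass crashes
                              // with a 4-channel input

// in yolo4.cpp (tkDNN):
// int ret = testInference(input_bins, output_bins, net, netRT);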