AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

I really have no idea what's wrong when I try to use 'batch_detection'. Has anyone come across this? #7815

Open qwert1887 opened 3 years ago

qwert1887 commented 3 years ago

Hi, when I try to detect images in a batch using the function 'batch_detection' (batch_size=8) with the default yolov4.cfg (512x512) and yolov4.weights, I can detect 8 images at the same time. BUT when I trained my own network with yolov4-custom.cfg (my classes are 6), I got the error 'Segmentation fault (core dumped)' (it seems some unknown error occurs inside "darknet.network_predict_batch"). I compiled libdarknet.so in debug mode, but I didn't get anything useful even in gdb. My cfg file is below; I wonder why it is so weird. Does anyone know how to solve it? Thanks!

[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=8
subdivisions=8
width=512
height=512
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1

#cutmix=1
mosaic=1

# 23:104x104 54:52x52 85:26x26 104:13x13 for 416

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-7

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-10

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-28

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-28

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

# Downsample

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[route]
layers = -2

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=mish

[shortcut]
from=-3
activation=linear

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=mish

[route]
layers = -1,-16

[convolutional]
batch_normalize=1
filters=1024
size=1
stride=1
pad=1
activation=mish
stopbackward=800

##########################

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

### SPP ###
[maxpool]
stride=1
size=5

[route]
layers=-2

[maxpool]
stride=1
size=9

[route]
layers=-4

[maxpool]
stride=1
size=13

[route]
layers=-1,-3,-5,-6
### End SPP ###

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = 85

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -1, -3

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = 54

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[route]
layers = -1, -3

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

##########################

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=33
activation=linear

[yolo]
mask = 0,1,2
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=6
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.2
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=256
activation=leaky

[route]
layers = -1, -16

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=33
activation=linear

[yolo]
mask = 3,4,5
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=6
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.1
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

[route]
layers = -4

[convolutional]
batch_normalize=1
size=3
stride=2
pad=1
filters=512
activation=leaky

[route]
layers = -1, -37

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=33
activation=linear

[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=6
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=0
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5

The log file is below:

 Try to load cfg: ./cfg/myown.cfg, weights: ./cfg/myown.weights, clear = 0 
 0 : compute_capability = 610, cudnn_half = 0, GPU: GeForce GTX 1070 
net.optimized_memory = 0 
mini_batch = 8, batch = 8, time_steps = 1, train = 0 
   layer   filters  size/strd(dil)      input                output
   0 conv     32       3 x 3/ 1    512 x 512 x   3 ->  512 x 512 x  32 0.453 BF
   1 conv     64       3 x 3/ 2    512 x 512 x  32 ->  256 x 256 x  64 2.416 BF
   2 conv     64       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  64 0.537 BF
   3 route  1                                  ->  256 x 256 x  64 
   4 conv     64       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  64 0.537 BF
   5 conv     32       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  32 0.268 BF
   6 conv     64       3 x 3/ 1    256 x 256 x  32 ->  256 x 256 x  64 2.416 BF
   7 Shortcut Layer: 4,  wt = 0, wn = 0, outputs: 256 x 256 x  64 0.004 BF
   8 conv     64       1 x 1/ 1    256 x 256 x  64 ->  256 x 256 x  64 0.537 BF
   9 route  8 2                                ->  256 x 256 x 128 
  10 conv     64       1 x 1/ 1    256 x 256 x 128 ->  256 x 256 x  64 1.074 BF
  11 conv    128       3 x 3/ 2    256 x 256 x  64 ->  128 x 128 x 128 2.416 BF
  12 conv     64       1 x 1/ 1    128 x 128 x 128 ->  128 x 128 x  64 0.268 BF
  13 route  11                                 ->  128 x 128 x 128 
  14 conv     64       1 x 1/ 1    128 x 128 x 128 ->  128 x 128 x  64 0.268 BF
  15 conv     64       1 x 1/ 1    128 x 128 x  64 ->  128 x 128 x  64 0.134 BF
  16 conv     64       3 x 3/ 1    128 x 128 x  64 ->  128 x 128 x  64 1.208 BF
  17 Shortcut Layer: 14,  wt = 0, wn = 0, outputs: 128 x 128 x  64 0.001 BF
  18 conv     64       1 x 1/ 1    128 x 128 x  64 ->  128 x 128 x  64 0.134 BF
  19 conv     64       3 x 3/ 1    128 x 128 x  64 ->  128 x 128 x  64 1.208 BF
  20 Shortcut Layer: 17,  wt = 0, wn = 0, outputs: 128 x 128 x  64 0.001 BF
  21 conv     64       1 x 1/ 1    128 x 128 x  64 ->  128 x 128 x  64 0.134 BF
  22 route  21 12                              ->  128 x 128 x 128 
  23 conv    128       1 x 1/ 1    128 x 128 x 128 ->  128 x 128 x 128 0.537 BF
  24 conv    256       3 x 3/ 2    128 x 128 x 128 ->   64 x  64 x 256 2.416 BF
  25 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
  26 route  24                                 ->   64 x  64 x 256 
  27 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
  28 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  29 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  30 Shortcut Layer: 27,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  31 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  32 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  33 Shortcut Layer: 30,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  34 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  35 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  36 Shortcut Layer: 33,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  37 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  38 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  39 Shortcut Layer: 36,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  40 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  41 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  42 Shortcut Layer: 39,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  43 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  44 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  45 Shortcut Layer: 42,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  46 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  47 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  48 Shortcut Layer: 45,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  49 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  50 conv    128       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 128 1.208 BF
  51 Shortcut Layer: 48,  wt = 0, wn = 0, outputs:  64 x  64 x 128 0.001 BF
  52 conv    128       1 x 1/ 1     64 x  64 x 128 ->   64 x  64 x 128 0.134 BF
  53 route  52 25                              ->   64 x  64 x 256 
  54 conv    256       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 256 0.537 BF
  55 conv    512       3 x 3/ 2     64 x  64 x 256 ->   32 x  32 x 512 2.416 BF
  56 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
  57 route  55                                 ->   32 x  32 x 512 
  58 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
  59 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  60 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  61 Shortcut Layer: 58,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  62 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  63 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  64 Shortcut Layer: 61,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  65 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  66 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  67 Shortcut Layer: 64,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  68 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  69 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  70 Shortcut Layer: 67,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  71 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  72 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  73 Shortcut Layer: 70,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  74 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  75 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  76 Shortcut Layer: 73,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  77 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  78 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  79 Shortcut Layer: 76,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  80 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  81 conv    256       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 256 1.208 BF
  82 Shortcut Layer: 79,  wt = 0, wn = 0, outputs:  32 x  32 x 256 0.000 BF
  83 conv    256       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 256 0.134 BF
  84 route  83 56                              ->   32 x  32 x 512 
  85 conv    512       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 512 0.537 BF
  86 conv   1024       3 x 3/ 2     32 x  32 x 512 ->   16 x  16 x1024 2.416 BF
  87 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
  88 route  86                                 ->   16 x  16 x1024 
  89 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
  90 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
  91 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
  92 Shortcut Layer: 89,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
  93 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
  94 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
  95 Shortcut Layer: 92,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
  96 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
  97 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
  98 Shortcut Layer: 95,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
  99 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
 100 conv    512       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x 512 1.208 BF
 101 Shortcut Layer: 98,  wt = 0, wn = 0, outputs:  16 x  16 x 512 0.000 BF
 102 conv    512       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.134 BF
 103 route  102 87                             ->   16 x  16 x1024 
 104 conv   1024       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x1024 0.537 BF
 105 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 106 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 107 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 108 max                5x 5/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.003 BF
 109 route  107                                    ->   16 x  16 x 512 
 110 max                9x 9/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.011 BF
 111 route  107                                    ->   16 x  16 x 512 
 112 max               13x13/ 1     16 x  16 x 512 ->   16 x  16 x 512 0.022 BF
 113 route  112 110 108 107                        ->   16 x  16 x2048 
 114 conv    512       1 x 1/ 1     16 x  16 x2048 ->   16 x  16 x 512 0.537 BF
 115 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 116 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 117 conv    256       1 x 1/ 1     16 x  16 x 512 ->   16 x  16 x 256 0.067 BF
 118 upsample                 2x    16 x  16 x 256 ->   32 x  32 x 256
 119 route  85                                 ->   32 x  32 x 512 
 120 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 121 route  120 118                                ->   32 x  32 x 512 
 122 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 123 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 124 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 125 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 126 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 127 conv    128       1 x 1/ 1     32 x  32 x 256 ->   32 x  32 x 128 0.067 BF
 128 upsample                 2x    32 x  32 x 128 ->   64 x  64 x 128
 129 route  54                                 ->   64 x  64 x 256 
 130 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 131 route  130 128                                ->   64 x  64 x 256 
 132 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 133 conv    256       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 256 2.416 BF
 134 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 135 conv    256       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 256 2.416 BF
 136 conv    128       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x 128 0.268 BF
 137 conv    256       3 x 3/ 1     64 x  64 x 128 ->   64 x  64 x 256 2.416 BF
 138 conv     33       1 x 1/ 1     64 x  64 x 256 ->   64 x  64 x  33 0.069 BF
 139 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.20
nms_kind: greedynms (1), beta = 0.600000 
 140 route  136                                    ->   64 x  64 x 128 
 141 conv    256       3 x 3/ 2     64 x  64 x 128 ->   32 x  32 x 256 0.604 BF
 142 route  141 126                                ->   32 x  32 x 512 
 143 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 144 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 145 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 146 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 147 conv    256       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x 256 0.268 BF
 148 conv    512       3 x 3/ 1     32 x  32 x 256 ->   32 x  32 x 512 2.416 BF
 149 conv     33       1 x 1/ 1     32 x  32 x 512 ->   32 x  32 x  33 0.035 BF
 150 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.10
nms_kind: greedynms (1), beta = 0.600000 
 151 route  147                                    ->   32 x  32 x 256 
 152 conv    512       3 x 3/ 2     32 x  32 x 256 ->   16 x  16 x 512 0.604 BF
 153 route  152 116                                ->   16 x  16 x1024 
 154 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 155 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 156 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 157 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 158 conv    512       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x 512 0.268 BF
 159 conv   1024       3 x 3/ 1     16 x  16 x 512 ->   16 x  16 x1024 2.416 BF
 160 conv     33       1 x 1/ 1     16 x  16 x1024 ->   16 x  16 x  33 0.017 BF
 161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000 
Total BFLOPS 90.281 
avg_outputs = 742908 
 Allocate additional workspace_size = 52.43 MB 
 Try to load weights: ./cfg/myown.weights 
Loading weights from ./cfg/myown.weights...
 seen 64, trained: 574 K-images (8 Kilo-batches_64) 
Done! Loaded 162 layers from weights-file 
Loaded - names_list: data/myown.names, classes = 6 
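
For reference, here is a minimal sketch of roughly how the batch path is invoked through the Python bindings, assuming the load_network / network_predict_batch / free_batch_detections helpers from darknet.py in this repo (exact signatures may vary across versions); the file paths and image list are placeholders:

```python
from ctypes import POINTER, c_float

import cv2
import numpy as np

import darknet  # darknet.py from this repo; needs libdarknet.so built with LIBSO=1

# Load the network with the batch size that will be used for prediction.
network, class_names, _ = darknet.load_network(
    "./cfg/myown.cfg",        # placeholder paths, matching the log above
    "./data/myown.data",
    "./cfg/myown.weights",
    batch_size=8,
)
width = darknet.network_width(network)
height = darknet.network_height(network)

# Resize the 8 input images to the network size and pack them into one IMAGE.
paths = ["img%d.jpg" % i for i in range(8)]  # placeholder inputs
resized = [cv2.cvtColor(cv2.resize(cv2.imread(p), (width, height)),
                        cv2.COLOR_BGR2RGB).transpose(2, 0, 1) for p in paths]
flat = np.ascontiguousarray(np.concatenate(resized, axis=0).flat,
                            dtype=np.float32) / 255.0
batch_image = darknet.IMAGE(width, height, 3,
                            flat.ctypes.data_as(POINTER(c_float)))

# This is the call the segfault points at.
detections = darknet.network_predict_batch(
    network, batch_image, 8, width, height, 0.25, 0.5, None, 0, 0)
for i in range(8):
    print("image", i, "num detections:", detections[i].num)
darknet.free_batch_detections(detections, 8)
```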
lsd1994 commented 3 years ago

batch_size=8 is just for training; you can't run inference on multiple images at the same time.

qwert1887 commented 3 years ago

@lsd1994 Thanks for your reply, but then what is the function batch_detection for?

matt-sharp commented 3 years ago

You can perform batch inference with OpenCV DNN module.
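
A minimal sketch of that approach, assuming OpenCV 4.x with the DNN module (the cfg/weights paths and input images are placeholders): cv2.dnn.blobFromImages packs several images into a single NCHW blob, so one forward() call runs the whole batch.

```python
import cv2

# Load the darknet cfg/weights into OpenCV's DNN module (placeholder paths).
net = cv2.dnn.readNetFromDarknet("myown.cfg", "myown.weights")
# Optional, only if OpenCV was built with CUDA support:
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

images = [cv2.imread("img%d.jpg" % i) for i in range(8)]  # placeholder inputs

# One 4-D blob of shape (8, 3, 512, 512) holds the whole batch.
blob = cv2.dnn.blobFromImages(images, scalefactor=1 / 255.0, size=(512, 512),
                              swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
# For batched input each YOLO output is 3-D: (batch, boxes, 5 + num_classes).
```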

qwert1887 commented 3 years ago

@matt-sharp Thanks a lot. I tried to use multi-processing rather than batch inference, but I don't think it's the best idea, because it costs more hardware resources.
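
For what it's worth, the multi-process workaround looks roughly like the sketch below (again assuming the darknet.py helpers; paths, inputs, and the pool size are placeholders). Each worker has to load its own copy of the network, which is exactly why it uses more GPU memory than true batch inference:

```python
import multiprocessing as mp

def worker(image_paths):
    # Each process loads its own network: the ctypes handle and the CUDA
    # context cannot be shared across processes, so the weights are duplicated.
    import darknet  # darknet.py from this repo, as above
    network, class_names, _ = darknet.load_network(
        "./cfg/myown.cfg", "./data/myown.data",
        "./cfg/myown.weights", batch_size=1)
    results = []
    for path in image_paths:
        image = darknet.load_image(path.encode("ascii"), 0, 0)
        detections = darknet.detect_image(network, class_names, image)
        darknet.free_image(image)
        results.append((path, detections))
    return results

if __name__ == "__main__":
    paths = ["img%d.jpg" % i for i in range(8)]   # placeholder inputs
    chunks = [paths[i::2] for i in range(2)]      # 2 workers -> 2 network copies
    with mp.Pool(2) as pool:
        results = pool.map(worker, chunks)
```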

qwert1887 commented 3 years ago

@AlexeyAB Any suggestions?