AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

How to change mask? How to change the default save iterations of weights? #5371

Open · beizhengren opened this issue 4 years ago

beizhengren commented 4 years ago

Hi @AlexeyAB

1. How to change mask?

I calculated anchors for my own dataset with the same 9 clusters and got new anchors: `./darknet detector calc_anchors data/obj.data -num_of_clusters 9 -width 608 -height 608`

anchors = 2,  4,   5,  6,   5, 13,  11, 12,  15, 25,  36, 32,  22, 62,  62, 88, 165, 230
# original anchors in yolov4-custom.cfg :
# anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401

I think the mask can stay the same since the number of clusters is unchanged. Is that wrong? And how should I change the mask?

2. Why are there lots of 0s in training after about 1000 iterations? #5338

Could a wrong mask be what produces all the 0s? I also changed:

burn_in=2000 # doubled because I have 2 GPUs
subdivisions=32
max_batches = 200500
steps=160000,180000
classes=7
filters=36 # in the convolutional layer just before each yolo layer
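
A quick sanity check on that last value, as hedged C arithmetic (assuming the usual darknet rule that each [yolo] layer uses the 3 anchors named by its mask):

```c
#include <stdio.h>

/* filters before each [yolo] layer =
   (classes + 4 box coords + 1 objectness) * anchors used by that layer */
int main(void) {
    int classes = 7, coords = 4, objectness = 1, masks_per_layer = 3;
    int filters = (classes + coords + objectness) * masks_per_layer;
    printf("filters = (%d + 5) * %d = %d\n", classes, masks_per_layer, filters);
    return 0;  /* prints: filters = (7 + 5) * 3 = 36 */
}
```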

My training log and cfg file are as follows: [screenshot of training log]

My cfg file(click to expand) ``` sh [net] # Testing #batch=1 #subdivisions=1 # Training batch=64 subdivisions=32 width=608 height=608 channels=3 momentum=0.949 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue=.1 learning_rate=0.001 burn_in=2000 max_batches = 200500 policy=steps steps=160000,180000 scales=.1,.1 #cutmix=1 mosaic=1 #:104x104 54:52x52 85:26x26 104:13x13 for 416 [convolutional] batch_normalize=1 filters=32 size=3 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=64 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=32 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-7 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=128 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-10 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=256 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] 
batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-28 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=512 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-28 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=1024 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 
activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-16 [convolutional] batch_normalize=1 filters=1024 size=1 stride=1 pad=1 activation=mish ########################## [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky ### SPP ### [maxpool] stride=1 size=5 [route] layers=-2 [maxpool] stride=1 size=9 [route] layers=-4 [maxpool] stride=1 size=13 [route] layers=-1,-3,-5,-6 ### End SPP ### [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [upsample] stride=2 [route] layers = 85 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [route] layers = -1, -3 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [upsample] stride=2 [route] layers = 54 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [route] layers = -1, -3 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky ########################## [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 0,1,2 # anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165,230 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.2 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 [route] layers = -4 [convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=256 activation=leaky [route] layers = -1, -16 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 
stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 3,4,5 # anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165,230 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.1 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 [route] layers = -4 [convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=512 activation=leaky [route] layers = -1, -37 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 6,7,8 # anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165,230 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 random=1 scale_x_y = 1.05 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 ```

3. How to change the default save iterations of weights?

At the beginning of training, I want to save weights every 100 iterations instead of every 1000. Thanks!
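
The interval is not a cfg option; it is hard-coded in the training loop (train_detector() in src/detector.c calls save_weights() on a fixed schedule). A hedged, illustrative sketch of that schedule; the exact condition is an assumption and varies between darknet versions:

```c
#include <stdio.h>

/* Hypothetical illustration only: changing the hard-coded modulus in
   train_detector() from 1000 to 100 saves numbered weights ten times
   as often. The condition below is assumed, not quoted from the source. */
int main(void) {
    int save_every = 100;  /* assumed edit; default interval is 1000 */
    for (int iteration = 1; iteration <= 1000; iteration++) {
        if (iteration % save_every == 0)
            printf("would save backup/yolov4-custom_%d.weights\n", iteration);
    }
    return 0;
}
```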

AlexeyAB commented 4 years ago

> And how should I change the mask?

https://github.com/AlexeyAB/darknet#how-to-improve-object-detection

> Only if you are an expert in neural detection networks - recalculate anchors for your dataset for width and height from cfg-file: `darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416` then set the same 9 anchors in each of 3 [yolo]-layers in your cfg-file. But you should change indexes of anchors masks= for each [yolo]-layer, so that 1st-[yolo]-layer has anchors larger than 60x60, 2nd larger than 30x30, 3rd remaining. Also you should change the `filters=(classes + 5)*<number of mask>` before each [yolo]-layer. If many of the calculated anchors do not fit under the appropriate layers - then just try using all the default anchors.
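
As an illustration (not darknet code), a hedged C sketch that applies those size thresholds to the anchors calculated above; note that in yolov4-custom.cfg the first [yolo] layer is the 52x52 small-object head, so mask = 6,7,8 on the 13x13 head takes the largest anchors:

```c
#include <stdio.h>

/* Hedged sketch: group the 9 calculated anchors into mask ranges using
   the quoted 60x60 / 30x30 thresholds. Illustrative only. */
int main(void) {
    int a[9][2] = {{2,4},{5,6},{5,13},{11,12},{15,25},
                   {36,32},{22,62},{62,88},{165,230}};
    for (int i = 0; i < 9; i++) {
        const char *head = "52x52 head, e.g. mask 0,1,2 (small)";
        if (a[i][0] > 60 && a[i][1] > 60)
            head = "13x13 head, e.g. mask 6,7,8 (large)";
        else if (a[i][0] > 30 && a[i][1] > 30)
            head = "26x26 head, e.g. mask 3,4,5 (medium)";
        printf("anchor %d = %3dx%3d -> %s\n", i, a[i][0], a[i][1], head);
    }
    return 0;
}
```

With these anchors most pairs fall under 30x30, and 22x62 clears neither threshold, which is exactly the "anchors do not fit under the appropriate layers" case where the quote suggests falling back to the default anchors.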


> burn_in=2000

Since you use warm-up for the first 2000 iterations, it doesn't make sense to save weights before 2000 iterations.
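
Concretely, darknet ramps the learning rate during burn_in roughly as learning_rate * (iteration / burn_in)^power, with power defaulting to 4. A hedged sketch, assuming this matches get_current_rate() in the source (which may vary by version):

```c
#include <math.h>
#include <stdio.h>

/* Sketch of the warm-up part of darknet's learning-rate schedule
   (assumed from get_current_rate(); power defaults to 4). */
float warmup_rate(float learning_rate, int iter, int burn_in, float power) {
    if (iter < burn_in)
        return learning_rate * powf((float)iter / burn_in, power);
    return learning_rate;  /* the steps policy takes over after burn_in */
}

int main(void) {
    for (int it = 0; it <= 2000; it += 500)
        printf("iter %4d: lr = %g\n", it, warmup_rate(0.001f, it, 2000, 4.f));
    return 0;
}
```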

beizhengren commented 4 years ago

@AlexeyAB Thanks, Alexey. I trained with the anchors unchanged in yolov4-custom.cfg, but there are still a lot of 0s. What's wrong? [screenshots of training log]

beizhengren commented 4 years ago

@AlexeyAB I found that the zeros appear after 5000 iterations.

beizhengren commented 4 years ago

@AlexeyAB After training for about 7500 iterations, mAP goes down to zero. It confuses me a lot. Please help, thanks! Here is part of the training log: [screenshot]

Here is the loss and mAP chart: [screenshot]. And here is the cfg (classes = 7):

click to expand for cfg ```bash [net] # Testing #batch=1 #subdivisions=1 # Training batch=64 subdivisions=32 width=416 height=416 channels=3 momentum=0.949 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue=.1 learning_rate=0.001 burn_in=1000 max_batches = 20000 policy=steps steps=8000,16000 scales=.1,.1 #cutmix=1 mosaic=1 #:104x104 54:52x52 85:26x26 104:13x13 for 416 [convolutional] batch_normalize=1 filters=32 size=3 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=64 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=32 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-7 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=128 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-10 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=256 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] 
batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-28 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=512 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-28 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=1024 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 
activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-16 [convolutional] batch_normalize=1 filters=1024 size=1 stride=1 pad=1 activation=mish stopbackward=800 ########################## [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky ### SPP ### [maxpool] stride=1 size=5 [route] layers=-2 [maxpool] stride=1 size=9 [route] layers=-4 [maxpool] stride=1 size=13 [route] layers=-1,-3,-5,-6 ### End SPP ### [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [upsample] stride=2 [route] layers = 85 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [route] layers = -1, -3 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [upsample] stride=2 [route] layers = 54 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [route] layers = -1, -3 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky ########################## [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 0,1,2 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 # anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165,230 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.2 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 [route] layers = -4 [convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=256 activation=leaky [route] layers = -1, -16 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 
filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 3,4,5 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 # anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165,230 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.1 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 [route] layers = -4 [convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=512 activation=leaky [route] layers = -1, -37 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 6,7,8 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 # anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165,230 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 random=1 scale_x_y = 1.05 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 ```
AlexeyAB commented 4 years ago

Something is wrong with your cfg or dataset.

> [net] # Testing #batch=1 #subdivisions=1 # Training batch=64 subdivisions=32 width=416 height=416 channels=3 momentum=0.949 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue=.1

Set each parameter on a separate line instead of one line.

beizhengren commented 4 years ago

Sorry @AlexeyAB, the format got broken when I pasted it into markdown. I've rectified it, please review again. Thanks!

AlexeyAB commented 4 years ago

The same issue: [screenshot]

beizhengren commented 4 years ago

@AlexeyAB Sorry again, please review. I'm posting part of the cfg as an image below, in case the format breaks again.

beizhengren commented 4 years ago

[screenshot of part of the cfg]

beizhengren commented 4 years ago

@AlexeyAB

> Run training with flag -show_imgs, do you see correct bounded boxes? Show examples with bboxes.

I got some jpg files, but darknet aborted soon after, as shown at the end of the training log below. [screenshot] And the bounding boxes are correct; here is one of them: [screenshot]

> What dataset do you use? Check training and validation dataset by using Yolo_mark.

I use my own dataset, with all the images labeled by labelme. Does "check training and validation dataset by using Yolo_mark" mean checking whether the bounding box locations are right or not? I trained the same dataset with the tiny-yolov3 cfg and the loss converged, so I feel the training and validation sets should be OK.

> Show such screenshot

darknet aborted after adding the -show_imgs flag; please see the bottom of the training log.

click to expand for training log ```sh CUDA-version: 10000 (10000), cuDNN: 7.4.1, GPU count: 2 OpenCV version: 3.3.1 Prepare additional network for mAP calculation... compute_capability = 610, cudnn_half = 0 net.optimized_memory = 0 mini_batch = 1, batch = 32, time_steps = 1, train = 0 layer filters size/strd(dil) input output 0 conv 32 3 x 3/ 1 416 x 416 x 3 -> 416 x 416 x 32 0.299 BF 1 conv 64 3 x 3/ 2 416 x 416 x 32 -> 208 x 208 x 64 1.595 BF 2 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF 3 route 1 -> 208 x 208 x 64 4 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF 5 conv 32 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 32 0.177 BF 6 conv 64 3 x 3/ 1 208 x 208 x 32 -> 208 x 208 x 64 1.595 BF 7 Shortcut Layer: 4, wt = 0, wn = 0, outputs: 208 x 208 x 64 0.003 BF 8 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF 9 route 8 2 -> 208 x 208 x 128 10 conv 64 1 x 1/ 1 208 x 208 x 128 -> 208 x 208 x 64 0.709 BF 11 conv 128 3 x 3/ 2 208 x 208 x 64 -> 104 x 104 x 128 1.595 BF 12 conv 64 1 x 1/ 1 104 x 104 x 128 -> 104 x 104 x 64 0.177 BF 13 route 11 -> 104 x 104 x 128 14 conv 64 1 x 1/ 1 104 x 104 x 128 -> 104 x 104 x 64 0.177 BF 15 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF 16 conv 64 3 x 3/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.797 BF 17 Shortcut Layer: 14, wt = 0, wn = 0, outputs: 104 x 104 x 64 0.001 BF 18 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF 19 conv 64 3 x 3/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.797 BF 20 Shortcut Layer: 17, wt = 0, wn = 0, outputs: 104 x 104 x 64 0.001 BF 21 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF 22 route 21 12 -> 104 x 104 x 128 23 conv 128 1 x 1/ 1 104 x 104 x 128 -> 104 x 104 x 128 0.354 BF 24 conv 256 3 x 3/ 2 104 x 104 x 128 -> 52 x 52 x 256 1.595 BF 25 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 26 route 24 -> 52 x 52 x 256 27 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 28 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 29 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 30 Shortcut Layer: 27, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 31 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 32 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 33 Shortcut Layer: 30, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 34 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 35 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 36 Shortcut Layer: 33, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 37 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 38 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 39 Shortcut Layer: 36, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 40 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 41 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 42 Shortcut Layer: 39, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 43 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 44 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 45 Shortcut Layer: 42, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 46 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 47 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 48 Shortcut Layer: 45, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 49 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 50 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 51 Shortcut Layer: 48, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 52 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 53 route 52 25 
-> 52 x 52 x 256 54 conv 256 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 256 0.354 BF 55 conv 512 3 x 3/ 2 52 x 52 x 256 -> 26 x 26 x 512 1.595 BF 56 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 57 route 55 -> 26 x 26 x 512 58 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 59 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 60 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 61 Shortcut Layer: 58, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 62 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 63 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 64 Shortcut Layer: 61, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 65 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 66 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 67 Shortcut Layer: 64, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 68 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 69 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 70 Shortcut Layer: 67, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 71 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 72 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 73 Shortcut Layer: 70, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 74 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 75 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 76 Shortcut Layer: 73, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 77 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 78 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 79 Shortcut Layer: 76, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 80 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 81 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 82 Shortcut Layer: 79, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 83 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 84 route 83 56 -> 26 x 26 x 512 85 conv 512 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 512 0.354 BF 86 conv 1024 3 x 3/ 2 26 x 26 x 512 -> 13 x 13 x1024 1.595 BF 87 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 88 route 86 -> 13 x 13 x1024 89 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 90 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 91 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 92 Shortcut Layer: 89, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 93 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 94 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 95 Shortcut Layer: 92, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 96 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 97 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 98 Shortcut Layer: 95, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 99 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 100 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 101 Shortcut Layer: 98, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 102 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 103 route 102 87 -> 13 x 13 x1024 104 conv 1024 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x1024 0.354 BF 105 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 106 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 107 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 108 max 5x 5/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.002 BF 109 route 107 -> 13 x 13 x 512 110 max 9x 9/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.007 BF 111 route 107 -> 13 x 13 x 512 112 max 13x13/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.015 BF 
113 route 112 110 108 107 -> 13 x 13 x2048 114 conv 512 1 x 1/ 1 13 x 13 x2048 -> 13 x 13 x 512 0.354 BF 115 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 116 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 117 conv 256 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 256 0.044 BF 118 upsample 2x 13 x 13 x 256 -> 26 x 26 x 256 119 route 85 -> 26 x 26 x 512 120 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 121 route 120 118 -> 26 x 26 x 512 122 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 123 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 124 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 125 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 126 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 127 conv 128 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 128 0.044 BF 128 upsample 2x 26 x 26 x 128 -> 52 x 52 x 128 129 route 54 -> 52 x 52 x 256 130 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 131 route 130 128 -> 52 x 52 x 256 132 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 133 conv 256 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 256 1.595 BF 134 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 135 conv 256 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 256 1.595 BF 136 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 137 conv 256 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 256 1.595 BF 138 conv 36 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 36 0.050 BF 139 yolo [yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.20 nms_kind: greedynms (1), beta = 0.600000 140 route 136 -> 52 x 52 x 128 141 conv 256 3 x 3/ 2 52 x 52 x 128 -> 26 x 26 x 256 0.399 BF 142 route 141 126 -> 26 x 26 x 512 143 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 144 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 145 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 146 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 147 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 148 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 149 conv 36 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 36 0.025 BF 150 yolo [yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.10 nms_kind: greedynms (1), beta = 0.600000 151 route 147 -> 26 x 26 x 256 152 conv 512 3 x 3/ 2 26 x 26 x 256 -> 13 x 13 x 512 0.399 BF 153 route 152 116 -> 13 x 13 x1024 154 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 155 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 156 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 157 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 158 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 159 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 160 conv 36 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 36 0.012 BF 161 yolo [yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05 nms_kind: greedynms (1), beta = 0.600000 Total BFLOPS 59.607 avg_outputs = 490567 Allocate additional workspace_size = 52.43 MB yolov4-zerotech compute_capability = 610, cudnn_half = 0 net.optimized_memory = 0 mini_batch = 2, batch = 64, time_steps = 1, train = 1 layer filters size/strd(dil) input output 0 conv 32 3 x 3/ 1 416 x 416 x 3 -> 416 x 416 x 32 0.299 BF 1 conv 64 3 x 3/ 2 416 x 416 x 32 -> 208 x 208 x 64 1.595 BF 2 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF 3 route 1 -> 208 x 208 x 64 4 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF 5 conv 32 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 32 
0.177 BF 6 conv 64 3 x 3/ 1 208 x 208 x 32 -> 208 x 208 x 64 1.595 BF 7 Shortcut Layer: 4, wt = 0, wn = 0, outputs: 208 x 208 x 64 0.003 BF 8 conv 64 1 x 1/ 1 208 x 208 x 64 -> 208 x 208 x 64 0.354 BF 9 route 8 2 -> 208 x 208 x 128 10 conv 64 1 x 1/ 1 208 x 208 x 128 -> 208 x 208 x 64 0.709 BF 11 conv 128 3 x 3/ 2 208 x 208 x 64 -> 104 x 104 x 128 1.595 BF 12 conv 64 1 x 1/ 1 104 x 104 x 128 -> 104 x 104 x 64 0.177 BF 13 route 11 -> 104 x 104 x 128 14 conv 64 1 x 1/ 1 104 x 104 x 128 -> 104 x 104 x 64 0.177 BF 15 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF 16 conv 64 3 x 3/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.797 BF 17 Shortcut Layer: 14, wt = 0, wn = 0, outputs: 104 x 104 x 64 0.001 BF 18 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF 19 conv 64 3 x 3/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.797 BF 20 Shortcut Layer: 17, wt = 0, wn = 0, outputs: 104 x 104 x 64 0.001 BF 21 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF 22 route 21 12 -> 104 x 104 x 128 23 conv 128 1 x 1/ 1 104 x 104 x 128 -> 104 x 104 x 128 0.354 BF 24 conv 256 3 x 3/ 2 104 x 104 x 128 -> 52 x 52 x 256 1.595 BF 25 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 26 route 24 -> 52 x 52 x 256 27 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 28 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 29 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 30 Shortcut Layer: 27, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 31 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 32 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 33 Shortcut Layer: 30, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 34 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 35 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 36 Shortcut Layer: 33, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 37 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 38 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 39 Shortcut Layer: 36, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 40 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 41 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 42 Shortcut Layer: 39, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 43 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 44 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 45 Shortcut Layer: 42, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 46 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 47 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 48 Shortcut Layer: 45, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 49 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 50 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF 51 Shortcut Layer: 48, wt = 0, wn = 0, outputs: 52 x 52 x 128 0.000 BF 52 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF 53 route 52 25 -> 52 x 52 x 256 54 conv 256 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 256 0.354 BF 55 conv 512 3 x 3/ 2 52 x 52 x 256 -> 26 x 26 x 512 1.595 BF 56 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 57 route 55 -> 26 x 26 x 512 58 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 59 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 60 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 61 Shortcut Layer: 58, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 62 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 63 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 64 Shortcut Layer: 61, wt = 0, wn = 0, 
outputs: 26 x 26 x 256 0.000 BF 65 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 66 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 67 Shortcut Layer: 64, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 68 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 69 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 70 Shortcut Layer: 67, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 71 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 72 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 73 Shortcut Layer: 70, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 74 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 75 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 76 Shortcut Layer: 73, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 77 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 78 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 79 Shortcut Layer: 76, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 80 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 81 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF 82 Shortcut Layer: 79, wt = 0, wn = 0, outputs: 26 x 26 x 256 0.000 BF 83 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF 84 route 83 56 -> 26 x 26 x 512 85 conv 512 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 512 0.354 BF 86 conv 1024 3 x 3/ 2 26 x 26 x 512 -> 13 x 13 x1024 1.595 BF 87 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 88 route 86 -> 13 x 13 x1024 89 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 90 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 91 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 92 Shortcut Layer: 89, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 93 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 94 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 95 Shortcut Layer: 92, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 96 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 97 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 98 Shortcut Layer: 95, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 99 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 100 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF 101 Shortcut Layer: 98, wt = 0, wn = 0, outputs: 13 x 13 x 512 0.000 BF 102 conv 512 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.089 BF 103 route 102 87 -> 13 x 13 x1024 104 conv 1024 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x1024 0.354 BF 105 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 106 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 107 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 108 max 5x 5/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.002 BF 109 route 107 -> 13 x 13 x 512 110 max 9x 9/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.007 BF 111 route 107 -> 13 x 13 x 512 112 max 13x13/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.015 BF 113 route 112 110 108 107 -> 13 x 13 x2048 114 conv 512 1 x 1/ 1 13 x 13 x2048 -> 13 x 13 x 512 0.354 BF 115 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 116 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 117 conv 256 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 256 0.044 BF 118 upsample 2x 13 x 13 x 256 -> 26 x 26 x 256 119 route 85 -> 26 x 26 x 512 120 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 121 route 120 118 -> 26 x 26 x 512 122 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 123 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 124 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 
0.177 BF 125 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 126 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 127 conv 128 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 128 0.044 BF 128 upsample 2x 26 x 26 x 128 -> 52 x 52 x 128 129 route 54 -> 52 x 52 x 256 130 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 131 route 130 128 -> 52 x 52 x 256 132 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 133 conv 256 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 256 1.595 BF 134 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 135 conv 256 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 256 1.595 BF 136 conv 128 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 128 0.177 BF 137 conv 256 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 256 1.595 BF 138 conv 36 1 x 1/ 1 52 x 52 x 256 -> 52 x 52 x 36 0.050 BF 139 yolo [yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.20 nms_kind: greedynms (1), beta = 0.600000 140 route 136 -> 52 x 52 x 128 141 conv 256 3 x 3/ 2 52 x 52 x 128 -> 26 x 26 x 256 0.399 BF 142 route 141 126 -> 26 x 26 x 512 143 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 144 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 145 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 146 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 147 conv 256 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 256 0.177 BF 148 conv 512 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 512 1.595 BF 149 conv 36 1 x 1/ 1 26 x 26 x 512 -> 26 x 26 x 36 0.025 BF 150 yolo [yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.10 nms_kind: greedynms (1), beta = 0.600000 151 route 147 -> 26 x 26 x 256 152 conv 512 3 x 3/ 2 26 x 26 x 256 -> 13 x 13 x 512 0.399 BF 153 route 152 116 -> 13 x 13 x1024 154 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 155 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 156 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 157 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 158 conv 512 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 512 0.177 BF 159 conv 1024 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x1024 1.595 BF 160 conv 36 1 x 1/ 1 13 x 13 x1024 -> 13 x 13 x 36 0.012 BF 161 yolo [yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05 nms_kind: greedynms (1), beta = 0.600000 Total BFLOPS 59.607 avg_outputs = 490567 Allocate additional workspace_size = 52.43 MB Loading weights from my_train/backup_2020/yolov4-zerotech_last.weights... seen 64, trained: 454 K-images (7 Kilo-batches_64) Done! Loaded 162 layers from weights-file Learning Rate: 0.001, Momentum: 0.949, Decay: 0.0005 If error occurs - run training with flag: -dont_show Resizing, random_coef = 1.40 608 x 608 Create 6 permanent cpu-threads [xcb] Unknown request in queue while dequeuing [xcb] Most likely this is a multi-threaded client and XInitThreads has not been called [xcb] Aborting, sorry about that. darknet: ../../src/xcb_io.c:179: dequeue_pending_request: Assertion `!xcb_xlib_unknown_req_in_deq' failed. [xcb] Unknown request in queue while dequeuing Aborted (core dumped) ```
AlexeyAB commented 4 years ago

> means to see whether the bounding boxes location is right or not?

Yes.

Show several files aug_...jpg

It seems that your objects are small; try to train with width=608 height=608 or width=832 height=832.

Also add max=200 in the [yolo] layers.

beizhengren commented 4 years ago

@AlexeyAB Thanks Alexey. I'm trying it!

beizhengren commented 4 years ago

@AlexeyAB I have been training for about 50k iterations with no loss divergence. It seems all right. Thanks! [screenshot of chart]

beizhengren commented 4 years ago

Hi @AlexeyAB, after about 130,000 iterations I hit the same problem: mAP suddenly goes down to 0. How can I check what is wrong? Here is my cfg, thanks!

My cfg file (click to expand):

```bash
[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=64
subdivisions=32
width=608
height=608
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.0001
burn_in=1000
max_batches = 200000
policy=steps
steps=160000,180000
scales=.1,.1

#cutmix=1
mosaic=0

# ... CSPDarknet53 backbone, SPP block, and PANet neck omitted here for brevity;
# they are identical to the default yolov4-custom.cfg, except that
# stopbackward=800 is set on the last backbone [convolutional] layer ...

[convolutional]
size=1
stride=1
pad=1
filters=36
activation=linear

[yolo]
mask = 0,1,2
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
# anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165, 230
classes=7
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.2
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max=200

# ... downsample and [route] layers omitted ...

[convolutional]
size=1
stride=1
pad=1
filters=36
activation=linear

[yolo]
mask = 3,4,5
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
# anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165, 230
classes=7
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
scale_x_y = 1.1
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max=200

# ... downsample and [route] layers omitted ...

[convolutional]
size=1
stride=1
pad=1
filters=36
activation=linear

[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
# anchors = 2, 4, 5, 6, 5, 13, 11, 12, 15, 25, 36, 32, 22, 62, 62, 88, 165, 230
classes=7
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
scale_x_y = 1.05
iou_thresh=0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max=200
```
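As a quick sanity check, here is a minimal Python sketch (my illustration, not darknet code) that verifies the detection-head parameters in this cfg against the usual rules of thumb from the darknet README:

```python
# Minimal sketch (not darknet code): check the head parameters used in the
# cfg above against the darknet README rules of thumb.
classes = 7
num_masks = 3                       # each [yolo] layer uses 3 of the 9 anchors

# filters in the [convolutional] layer directly before each [yolo] layer
filters = (classes + 5) * num_masks
print(filters)                      # 36, matching filters=36 in the cfg

# steps are conventionally 80% and 90% of max_batches
max_batches = 200000
print(int(0.8 * max_batches), int(0.9 * max_batches))
# 160000 180000, matching steps=160000,180000 in the cfg
```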


AlexeyAB commented 4 years ago
beizhengren commented 4 years ago

@AlexeyAB Yes, the smallest objects are only about 10 x 10 pixels. I will resume training from the 120 000-iteration weights and use the latest darknet. Thanks!

beizhengren commented 4 years ago

Hi @AlexeyAB, I have finished training for 200 000 iterations; here are my mAP results and the corresponding chart. The mAP is only about 43%. How should I improve it? Thank you!

```
calculation mAP (mean average precision)...
3612
 detections_count = 780834, unique_truth_count = 228593
class_id = 0, name = person,          ap = 16.54%   (TP = 3527,  FP = 5339)
class_id = 1, name = boat,            ap = 59.11%   (TP = 1586,  FP = 563)
class_id = 2, name = car,             ap = 47.13%   (TP = 95162, FP = 58862)
class_id = 3, name = bus,             ap = 53.09%   (TP = 2798,  FP = 1253)
class_id = 4, name = truck,           ap = 48.67%   (TP = 6561,  FP = 3559)
class_id = 5, name = cyclist,         ap = 24.74%   (TP = 704,   FP = 770)
class_id = 6, name = heavy equipment, ap = 50.46%   (TP = 1062,  FP = 567)

 for conf_thresh = 0.25, precision = 0.61, recall = 0.49, F1-score = 0.54
 for conf_thresh = 0.25, TP = 111400, FP = 70913, FN = 117193, average IoU = 45.72 %

 IoU threshold = 50 %, used Area-Under-Curve for each unique Recall
 mean average precision (mAP@0.50) = 0.428187, or 42.82 %
Total Detection Time: 177 Seconds

Set -points flag:
 -points 101 for MS COCO
 -points 11 for PascalVOC 2007 (uncomment difficult in voc.data)
 -points 0 (AUC) for ImageNet, PascalVOC 2010-2012, your custom dataset

 mean_average_precision (mAP@0.5) = 0.428187
```
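As a cross-check, the precision/recall/F1 line above follows directly from the reported TP/FP/FN counts. A minimal Python sketch (not darknet code) reproducing it:

```python
# Minimal sketch (not darknet code): reproduce the precision, recall, and
# F1-score printed above from the reported counts at conf_thresh = 0.25.
TP, FP, FN = 111400, 70913, 117193

precision = TP / (TP + FP)   # 111400 / 182313 ≈ 0.611
recall    = TP / (TP + FN)   # 111400 / 228593 ≈ 0.487
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.542

print(f"precision = {precision:.2f}, recall = {recall:.2f}, F1-score = {f1:.2f}")
# -> precision = 0.61, recall = 0.49, F1-score = 0.54
```

Note that TP + FN = 228593, which equals unique_truth_count, so recall here is measured against every ground-truth box in the validation set.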

(training chart image referenced above)

AlexeyAB commented 4 years ago

Train with a higher network resolution in the cfg-file: width=832 height=832 (width and height must be multiples of 32).

Also

Only if you are an expert in neural detection networks: recalculate the anchors for your dataset using the width and height from your cfg-file, e.g. `darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416`, then set the same 9 anchors in each of the 3 [yolo]-layers in your cfg-file. You must also change the anchor indexes in `mask=` for each [yolo]-layer: for YOLOv4 the 1st [yolo]-layer should get the anchors smaller than 30x30, the 2nd those smaller than 60x60, and the 3rd the remaining ones (and vice versa for YOLOv3). Also change `filters=(classes + 5)*3` in the [convolutional] layer before each [yolo]-layer. If many of the calculated anchors do not fit under the appropriate layers, just use all the default anchors instead (see the sketch below for how this bucketing works out here).
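To make the mask rule concrete, here is a minimal Python sketch (not darknet code) that buckets the anchors calculated earlier in this thread for 608x608 by the YOLOv4 size thresholds. They split 5/1/3 rather than 3/3/3, which is exactly the "anchors do not fit" case where keeping the default anchors is the safer choice:

```python
# Minimal sketch (not darknet code): apply the YOLOv4 mask rule to the
# anchors calculated earlier in this thread:
#   1st [yolo] layer (mask = 0,1,2): anchors smaller than 30x30
#   2nd [yolo] layer (mask = 3,4,5): anchors smaller than 60x60
#   3rd [yolo] layer (mask = 6,7,8): the remaining anchors
anchors = [(2, 4), (5, 6), (5, 13), (11, 12), (15, 25),
           (36, 32), (22, 62), (62, 88), (165, 230)]

small, medium, large = [], [], []
for w, h in anchors:
    if w < 30 and h < 30:
        small.append((w, h))
    elif w < 60 and h < 60:
        medium.append((w, h))
    else:
        large.append((w, h))

print(len(small), len(medium), len(large))   # -> 5 1 3, not a 3/3/3 split
```

The calculated anchors are heavily skewed toward small boxes, which is consistent with the very small (about 10 x 10 px) objects reported earlier in the thread.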

beizhengren commented 4 years ago

@AlexeyAB Thanks! I'll try it.