yrc08 opened 3 years ago
[net]
batch=64
subdivisions=64
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 2000
policy=steps
steps=1600,1800
scales=.1,.1

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=1

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

###########

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[maxpool]
stride=1
size=5

[route]
layers=-2

[maxpool]
stride=1
size=9

[route]
layers=-4

[maxpool]
stride=1
size=13

[route]
layers=-1,-3,-5,-6

[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear

[yolo]
mask = 3,4,5
anchors = 17, 19, 48, 23, 34, 67, 99, 48, 112,119, 252,215
classes=1
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 8
[yolo]
mask = 0,1,2
anchors = 17, 19, 48, 23, 34, 67, 99, 48, 112,119, 252,215
classes=1
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

The former does not add the SPP module; the latter adds the SPP module. @AlexeyAB Looking forward to your reply.
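For reference, the SPP block in the cfg above is three stride-1 max-pools (sizes 5, 9, 13) over the same 256-channel map, which the final [route] concatenates back together with that map; with stride 1 the spatial size is unchanged, so only the channel count grows (256 -> 1024). A minimal PyTorch sketch of that structure, as an illustration rather than darknet itself:

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        # stride=1 with padding k//2 keeps the spatial size unchanged,
        # mirroring darknet's stride=1 [maxpool] layers
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        feats = [p(x) for p in self.pools]           # pool5, pool9, pool13
        # [route] layers=-1,-3,-5,-6 -> pool13, pool9, pool5, original input
        return torch.cat(feats[::-1] + [x], dim=1)

x = torch.randn(1, 256, 13, 13)   # 256 channels from the preceding 1x1 conv
print(SPP()(x).shape)             # torch.Size([1, 1024, 13, 13])
```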
@AlexeyAB The former adds the SPP module; the latter does not. I want to know whether the SPP module is added in the wrong position.
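Whether the SPP block sits in the right place comes down to what the negative [route] offsets resolve to: darknet interprets a negative offset n at layer i as the absolute layer i + n. A small sketch that resolves the offsets, assuming (for illustration only) a layer list covering just the SPP region of the cfgs:

```python
# Hedged sketch: resolve darknet's relative [route] offsets to absolute
# indices, to check which layers the SPP routes actually pull from.
layers = [
    "conv 1x1 filters=256",   # 0 - layer just before the SPP block
    "maxpool size=5",         # 1
    "route -2",               # 2 -> 0 (back to the 1x1 conv)
    "maxpool size=9",         # 3
    "route -4",               # 4 -> 0 (back to the 1x1 conv)
    "maxpool size=13",        # 5
    "route -1,-3,-5,-6",      # 6
]

def resolve(route_index, offsets):
    # darknet turns a negative offset n at layer i into absolute index i + n
    return [route_index + n for n in offsets]

targets = resolve(6, [-1, -3, -5, -6])
print([layers[t] for t in targets])
# ['maxpool size=13', 'maxpool size=9', 'maxpool size=5', 'conv 1x1 filters=256']
```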
[net]
# Testing
# batch=64
# subdivisions=64
# Training
batch=64
subdivisions=64
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 2000
policy=steps
steps=1600,1800
scales=.1,.1
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=1

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

###########

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
### SPP ###
[maxpool]
stride=1
size=5

[route]
layers=-2

[maxpool]
stride=1
size=9

[route]
layers=-4

[maxpool]
stride=1
size=13

[route]
layers=-1,-3,-5,-6
### End SPP ###
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear

[yolo]
mask = 3,4,5
anchors = 17, 19, 48, 23, 34, 67, 99, 48, 112,119, 252,215
classes=1
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 8

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear
[yolo]
mask = 0,1,2
anchors = 17, 19, 48, 23, 34, 67, 99, 48, 112,119, 252,215
classes=1
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
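One quick check on both cfgs: in darknet, the [convolutional] layer immediately before each [yolo] layer must have filters = (classes + 5) * masks_per_scale (4 box coordinates + 1 objectness + the class scores, per anchor in the mask). With classes=1 and 3 masks per scale, that is (1 + 5) * 3 = 18, which matches filters=18 here:

```python
# Sanity check on the conv filters before each [yolo] layer:
# filters = (classes + 5) * masks_per_scale
classes = 1
masks_per_scale = 3          # mask = 3,4,5 and mask = 0,1,2
filters = (classes + 5) * masks_per_scale
print(filters)               # 18 -- matches filters=18 in both cfgs
assert filters == 18
```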