AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

How can I reduce Height and Width in the cfg config file? #388

Open Rutulpatel7077 opened 6 years ago

Rutulpatel7077 commented 6 years ago

```
[net]
# Testing
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=8
width=608
height=608
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
layers=-9

[convolutional]
batch_normalize=1
size=1
stride=1
pad=1
filters=64
activation=leaky

[reorg]
stride=2

[route]
layers=-1,-4

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=425
activation=linear

[region]
anchors = 0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828
bias_match=1
classes=80
coords=4
num=5
softmax=1
jitter=.3
rescore=1
object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1
absolute=1
thresh = .1
random=1
```
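As a side note on this cfg: the last convolution's `filters=425` is not arbitrary. Darknet's `[region]` head requires the preceding layer to output `num * (classes + coords + 1)` filters (one box confidence plus `coords` box values plus `classes` scores, per anchor). A minimal Python check (not darknet code) using the values from the cfg above:

```python
# [region] parameters from the cfg above
num = 5        # anchors per cell
classes = 80   # COCO classes
coords = 4     # x, y, w, h

# filters required on the final 1x1 convolution before [region]
filters = num * (classes + coords + 1)
print(filters)  # -> 425, matching filters=425 in the cfg
```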

Rutulpatel7077 commented 6 years ago

```
ValueError: Dimension 1 in both shapes must be equal, but are 12 and 13. Shapes are [?,12,12] and [?,13,13]. for 'concat_1' (op: 'ConcatV2') with input shapes: [?,12,12,256], [?,13,13,1024], [] and with computed input tensors: input[2] = <3>.
```

I got this error when changing the width/height directly in yolo.cfg.
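One plausible reading of the 12-vs-13 mismatch (a sketch, not a confirmed diagnosis; it assumes the Keras/TensorFlow converter uses 'same' pooling, which rounds sizes up, while the stride-2 `[reorg]` space-to-depth rounds down): if width/height is not a multiple of 32, the two inputs of the second `[route]` end up with different spatial sizes. The helper below is hypothetical, written only to illustrate the arithmetic:

```python
import math

def branch_shapes(size):
    """Spatial size of the two tensors concatenated by the second [route],
    modelling 'same' pooling (ceil) and a stride-2 space-to-depth (floor)."""
    s16 = size
    for _ in range(4):               # first 4 maxpools -> stride-16 feature map
        s16 = math.ceil(s16 / 2)
    s32 = math.ceil(s16 / 2)         # 5th maxpool -> stride-32 feature map
    reorg = s16 // 2                 # [reorg] stride=2 on the stride-16 map
    return s32, reorg

print(branch_shapes(608))   # multiple of 32: both branches agree (19, 19)
print(branch_shapes(416))   # multiple of 32: both branches agree (13, 13)
print(branch_shapes(400))   # not a multiple of 32: (13, 12) -> concat fails
```

Under this model, any width/height that is a multiple of 32 keeps the two branches the same size, which is why 608 and 416 work while in-between values can blow up in the converted graph.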

AlexeyAB commented 6 years ago
RogersNtr commented 5 years ago

Hi @AlexeyAB, a question on the .cfg file. I'm using the Keras framework, so I want to convert the .cfg and .weights files into the .h5 format. However, I'm a bit confused about how subdivisions works. I want to use a height and width of 608 and a batch size of 32; in this case, what subdivisions value should I use? Thanks in advance for your answer!

AlexeyAB commented 5 years ago

@RogersNtr

```
batch = batch_from_cfg
mini_batch = batch_from_cfg / subdivisions_from_cfg
```

mini_batch - the number of images that are uploaded to the GPU at a time. batch - the number of images for which the weight updates are accumulated before the weights are updated.
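To make the formula concrete for the batch=32 case asked about (the subdivisions values below are illustrative options, not a recommendation; subdivisions should divide batch evenly, and larger values give smaller mini-batches and hence lower GPU memory use):

```python
# batch/subdivisions arithmetic for batch=32 in the cfg
batch = 32
for subdivisions in (1, 2, 4, 8, 16, 32):
    mini_batch = batch // subdivisions   # images sent to the GPU at once
    print(f"subdivisions={subdivisions:2d} -> mini_batch={mini_batch}")
```

With width=height=608 the mini-batch is what must fit in GPU memory, so in practice you start with a small subdivisions value and increase it until training no longer runs out of memory.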

RogersNtr commented 5 years ago

@AlexeyAB , thanks for the answer!!