AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

yolov4-tiny depthwise convolution issue #6308

Open kadirbeytorun opened 4 years ago

kadirbeytorun commented 4 years ago

Hello,

I observed that the yolov4-tiny model uses groups and group_id parameters in its route layers in three places. Apparently the purpose of these layers is to split the tensor along the channel dimension and keep only the second half, e.g. going from 128 channels to 64.
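
If I understand the mechanism correctly, the grouped route simply slices the channel dimension. A minimal PyTorch sketch (not darknet code; shapes are only illustrative) of what groups=2, group_id=1 does:

import torch

# Feature map from the previous layer: batch=1, 128 channels (spatial size is illustrative).
x = torch.randn(1, 128, 26, 26)

# [route] layers=-1 groups=2 group_id=1:
# split the channels into 2 groups and keep the second one (channels 64..127).
second_half = torch.chunk(x, chunks=2, dim=1)[1]
print(second_half.shape)  # torch.Size([1, 64, 26, 26])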

Due to my project's limitations, we cannot handle depthwise-convolution-like layers, and I was wondering whether we could use 1x1 filters to reshape our tensors instead of splitting them in half.

If that is possible, how should I edit my cfg file? How can I replace the route layer's splitting behaviour with 1x1 convolution layers?
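
What I have in mind is roughly the following sketch (again PyTorch, only to illustrate the idea): replace the fixed channel split with a learned 1x1 convolution that projects 128 channels down to 64. I understand this is not numerically equivalent to taking the second half, since the 1x1 convolution has its own weights.

import torch
import torch.nn as nn

x = torch.randn(1, 128, 26, 26)

# Proposed replacement: a 1x1 convolution mapping 128 -> 64 channels,
# which gives the same output shape as the grouped route but with learned weights.
proj = nn.Conv2d(in_channels=128, out_channels=64, kernel_size=1)
y = proj(x)
print(y.shape)  # torch.Size([1, 64, 26, 26])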

Thanks in advance

WongKinYiu commented 4 years ago

original:

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

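# split the 64-channel output of the previous convolution into 2 groups and keep the second one (32 channels)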
[route]
layers=-1
groups=2
group_id=1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

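# concatenate the two 32-filter convolutions above (-1, -2): 32 + 32 = 64 channels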
[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

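# concatenate the 64-filter 3x3 convolution (-6) with the 1x1 convolution output (-1): 64 + 64 = 128 channels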
[route]
layers = -6,-1

equivalence:

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

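# route back to the layer before the 32-filter convolution above, i.e. to that convolution's input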
[route]
layers=-2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

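# concatenate the two 32-filter convolutions above (-1, -2): 32 + 32 = 64 channels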
[route]
layers = -1,-2

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

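# concatenate the 32-filter convolutions at -7 and -5 with the 1x1 convolution output (-1): 32 + 32 + 64 = 128 channels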
[route]
layers = -7,-5,-1

kadirbeytorun commented 4 years ago

Hey @WongKinYiu, I see you halved the filter count in the convolution layer before the route layer. However, that's not what I asked. My goal is to replace only the route layer's splitting job with 1x1 convolutions, not to change the filter counts of the existing convolution layers.

Besides, changing the filter count from 64 to 32 in the layer before the route layer breaks the other route layer, since that route layer expects 64 channels.

kadirbeytorun commented 4 years ago

I just realized that you actually left out the 64-filter convolution layer in the second version, and I was mistaken in thinking you had changed 64 to 32.

I tried replacing the route layer with a 1x1, 32-filter convolution instead, and the network looks alright in the Netron visualizer.

Is there a specific reason you didn't remove the route layer? With the way you did it, the next convolution takes a 64-channel tensor as input, while it was supposed to take a 32-channel tensor.
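
To make that channel-count point concrete, a small PyTorch sketch (shapes are only illustrative): with the [route] layers=-2 kept, the following 3x3 convolution sees the full 64-channel input, whereas the original grouped route hands it only 32 channels.

import torch
import torch.nn as nn

block_input = torch.randn(1, 64, 26, 26)  # output of the layer before this block (illustrative)

# Original cfg: the grouped route passes only the second 32 channels on.
half = torch.chunk(block_input, chunks=2, dim=1)[1]
conv_from_half = nn.Conv2d(32, 32, kernel_size=3, padding=1)
print(conv_from_half(half).shape)         # torch.Size([1, 32, 26, 26])

# Equivalence cfg: [route] layers=-2 passes the full 64-channel input on,
# so the next 3x3 convolution has to accept 64 input channels.
conv_from_full = nn.Conv2d(64, 32, kernel_size=3, padding=1)
print(conv_from_full(block_input).shape)  # torch.Size([1, 32, 26, 26])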