WongKinYiu / PartialResidualNetworks

partial residual networks

Pelee-YOLOv3 #6

Open · MichaelCong opened 5 years ago

MichaelCong commented 5 years ago

Thank you very much for sharing your project. Could you provide a cfg file that combines Pelee and YOLO, or tell me how to combine the two? For reference, here are the results you provide ("Here we provide some experimental results on COCO test-dev set which are not listed in the paper"):

| Model | Size | mAP@0.5 | BFLOPs | # Parameters |
|---|---|---|---|---|
| Pelee [3] | 304x304 | 38.3 | 2.58 | 5.98M |
| Pelee-PRN | 320x320 | 40.9 | 2.39 | 3.16M |
| Pelee-YOLOv3 [1] | 320x320 | 41.4 | 2.99 | 3.91M |
| Pelee-FPN [4] | 320x320 | 41.4 | 2.86 | 3.75M |
| Pelee-PRN-3l | 320x320 | 42.5 | 3.98 | 3.36M |
| mPelee-PRN | 320x320 | 42.7 | 2.82 | 3.81M |

| Model | Size | mAP@0.5 | BFLOPs | # Parameters |
|---|---|---|---|---|
| Pelee-PRN | 416x416 | 45.0 | 4.04 | 3.16M |
| Pelee-YOLOv3 [1] | 416x416 | 45.3 | 5.06 | 3.91M |
| Pelee-FPN [4] | 416x416 | 45.7 | 4.84 | 3.75M |
| Pelee-PRN-3l | 416x416 | 46.3 | 5.03 | 3.36M |
| mPelee-PRN | 416x416 | 46.8 | 4.76 | 3.81M |

WongKinYiu commented 5 years ago

@MichaelCong Hello,

Currently, we have no plan to release Pelee-based cfg files. For Pelee-YOLOv3, you can try our partner's https://github.com/eric612/Yolo-Model-Zoo, which is trained with the Caffe framework https://github.com/eric612/MobileNet-YOLO.

MichaelCong commented 5 years ago

route  145 141 139
147 conv   1024       1 x 1/ 1     13 x  13 x2560 ->   13 x  13 x1024 0.886 BF
148 conv    256       3 x 3/ 1     13 x  13 x1024 ->   13 x  13 x 256 0.797 BF

Hello, I have implemented the backbone part of the Pelee model in darknet. How did you handle the [route] in the dense layer part? If you need the cfg file, I can send it to your email. Thank you for your help.

WongKinYiu commented 5 years ago

@MichaelCong Hello,

You can follow the official cfg for DenseNet:

# transition layer
[convolutional]
...

# base layer, assume this is i^th layer # 
[maxpool]
...

# denselayer 1
...

# feature layer 1, assume this is j^th layer #
[convolutional]
...
filters= # growth_rate

# merge layer, concatenate base layer with feature layers # 
[route]
layers=i,j

# denselayer 2
...

...
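For example, with relative indices one dense layer could look like the sketch below; the filter values and offsets are only placeholders for illustration (growth_rate is assumed to be 32 here), not values from an official cfg:

# base layer
[maxpool]
size=2
stride=2

# dense layer 1: 1x1 bottleneck followed by a 3x3 feature layer
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=leaky

[convolutional]
batch_normalize=1
filters=32          # growth_rate
size=3
stride=1
pad=1
activation=leaky

# merge: concatenate the base layer (-3) with the feature layer (-1)
[route]
layers=-3,-1

# dense layer 2 then takes the concatenation above as its input
...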

MichaelCong commented 5 years ago

OK, thank you very much! I will study it.

WongKinYiu commented 5 years ago

@MichaelCong

If you want to merge multiple branches, you can follow the following cfg:

# base layer, i^th layer
[convolutional]

# branch 1
[route]
layers=i

...

# feature layer 1, j^th layer
[convolutional]
...

# branch 2
[route]
layers=i

...

# feature layer 2, k^th layer
[convolutional]
...

# branch 3
[route]
layers=i

...

# feature layer 3, l^th layer
[convolutional]
...

# branch 4
[route]
layers=i

...

# feature layer 4, m^th layer
[convolutional]
...

# merge base layer and feature layers
[route]
layers=i,j,k,l,m
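
As a concrete sketch (simplified to two branches, with hypothetical filter counts that you would replace with your own), the same pattern written with relative indices could look like:

# base layer
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=leaky

# branch 1
[route]
layers=-1

# feature layer 1
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

# branch 2
[route]
layers=-3

# feature layer 2
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

# merge the base layer (-5) with the two feature layers (-3, -1)
[route]
layers=-5,-3,-1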

MichaelCong commented 5 years ago

Here is my code for the Stem Block and Stage 1. Could you help me check whether there are any problems?

[net]
# Training
batch=64
subdivisions=16
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 200000
policy=steps
steps=100000,150000
scales=.1,.1
#########################
#backbone Pelee #
#########################
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=relu
#416*416*32

##########Stem Block###############
[convolutional]
batch_normalize=1
filters=32
size=3
stride=2
pad=1
activation=relu
#208*208*32

[convolutional]
batch_normalize=1
filters=16
size=1
stride=1
pad=0
activation=relu
#208*208*16

[convolutional]
batch_normalize=1
filters=32
size=3
stride=2
pad=1
activation=relu
#104*104*32

[route]
layers=-3

[maxpool]
stride=2
size=2
#104*104*32

[route]
layers=-1,-3
#104*104*64

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=0
activation=relu
#104*104*32
########Stem Block End###########
#############Stage 1############
#DenseLayer 1
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=relu
#104*104*32

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[route]
layers=-3

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=leaky
#104*104*64

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[route]
layers=-1,-5,-7

#DenseLayer 2
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=relu
#104*104*32

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[route]
layers=-3

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=leaky
#104*104*64

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[route]
layers=-1,-5,-7

#DenseLayer 3
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=relu
#104*104*32

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[route]
layers=-3

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=0
activation=leaky
#104*104*64

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
#104*104*16

[route]
layers=-1,-5,-7

#Transition Layer
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=0
activation=leaky

[maxpool]
stride=2
size=2

WongKinYiu commented 5 years ago

@MichaelCong Hello,

I think you put the wrong parameters starting from the first layer... There is no additional layer before the stem block; please check it carefully.

And the parameters for the first dense layer are also incorrect. Maybe you can just follow the original Caffe prototxt: https://github.com/Robert-JunWang/Pelee/blob/master/model/voc/deploy_merged.prototxt

WongKinYiu commented 5 years ago

The author of Pelee says: "PeleeNet uses a different bottleneck width in each stage; the figure is drawn based on a 4x growth rate to make it easier to compare to the original DenseNet. ... For each branch, it should be {k/2, k/2} in stage 1, {k, k/2} in stage 2, and {2k, k/2} in stage 3 and stage 4."
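
So for stage 1, assuming the standard PeleeNet growth rate k = 32, each branch would use a k/2 = 16 channel bottleneck and output k/2 = 16 channels (rather than the 64-filter bottlenecks in the cfg above). A rough sketch of one stage-1 two-way dense layer under that assumption (only an illustration, not an official cfg):

# two-way dense layer, stage 1 (k = 32, bottleneck width k/2 = 16 per branch)
# branch 1: 1x1 bottleneck -> 3x3
[convolutional]
batch_normalize=1
filters=16
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=relu

# branch 2: 1x1 bottleneck -> 3x3 -> 3x3, taken from the same dense layer input
[route]
layers=-3

[convolutional]
batch_normalize=1
filters=16
size=1
stride=1
pad=0
activation=relu

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=relu

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=relu

# concatenate the dense layer input (-7), branch 1 output (-5), and branch 2 output (-1)
[route]
layers=-7,-5,-1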