liuzhuang13 / DenseNet

Densely Connected Convolutional Networks, In CVPR 2017 (Best Paper Award).

Convolution after ReLU in Dense Layer Question #17

Closed cgarciae closed 7 years ago

cgarciae commented 7 years ago

I've seen that you use:

BN -> ReLU -> Conv3x3 -> Dropout

in the normal case, or

BN -> ReLU -> Conv1x1 -> Dropout -> BN -> ReLU -> Conv3x3 -> Dropout

when using the bottleneck version. The question is: why? Most other networks use, e.g.,

Conv3x3 -> BN -> ReLU -> Dropout

Why did you invert the order? Did you get better results this way?
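In code terms, the two orderings I mean look roughly like this (a PyTorch-style sketch just to illustrate the question, not your Torch implementation; the class names are made up):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """Pre-activation ordering: BN -> ReLU -> Conv3x3 -> Dropout."""
    def __init__(self, in_channels, growth_rate, drop_rate=0.0):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)
        self.drop = nn.Dropout(drop_rate)

    def forward(self, x):
        out = self.drop(self.conv(torch.relu(self.bn(x))))
        # Dense connectivity: concatenate the new features onto everything seen so far.
        return torch.cat([x, out], dim=1)

class BottleneckLayer(nn.Module):
    """Bottleneck ordering: BN -> ReLU -> Conv1x1 -> Dropout -> BN -> ReLU -> Conv3x3 -> Dropout."""
    def __init__(self, in_channels, growth_rate, drop_rate=0.0):
        super().__init__()
        inter_channels = 4 * growth_rate  # DenseNet-B uses a 4k-wide 1x1 bottleneck
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter_channels)
        self.conv2 = nn.Conv2d(inter_channels, growth_rate,
                               kernel_size=3, padding=1, bias=False)
        self.drop = nn.Dropout(drop_rate)

    def forward(self, x):
        out = self.drop(self.conv1(torch.relu(self.bn1(x))))
        out = self.drop(self.conv2(torch.relu(self.bn2(out))))
        return torch.cat([x, out], dim=1)
```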

Thanks in advance!

liuzhuang13 commented 7 years ago

Yes, we found that the current order typically gives higher accuracy. The only difference between the two orders in DenseNet is that the first BN layer has scaling and shifting parameters, which give the later layers differently scaled views of the same activations. If we used Conv first, the convolutions in all subsequent layers would be forced to receive identical activations, which may not be a good thing for training.
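To make the point concrete, here is a rough PyTorch illustration (not the repo's actual Torch code): with dense connectivity, two later layers both read the same concatenated feature map, but in the BN-first order each one applies its own BN, with its own learned scale and shift, before its convolution.

```python
import torch
import torch.nn as nn

# The same concatenated feature map is passed to every subsequent dense layer.
shared_features = torch.randn(8, 64, 32, 32)

# BN-first order: each consumer has its own BN (its own learned gamma/beta),
# so the shared features are re-scaled differently before each convolution.
layer_a = nn.Sequential(nn.BatchNorm2d(64), nn.ReLU(),
                        nn.Conv2d(64, 12, kernel_size=3, padding=1, bias=False))
layer_b = nn.Sequential(nn.BatchNorm2d(64), nn.ReLU(),
                        nn.Conv2d(64, 12, kernel_size=3, padding=1, bias=False))

out_a = layer_a(shared_features)  # conv sees features scaled by layer_a's BN
out_b = layer_b(shared_features)  # conv sees features scaled by layer_b's BN

# A Conv-first order (Conv -> BN -> ReLU) would instead make both convolutions
# operate directly on the identical shared_features tensor.
```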

cgarciae commented 7 years ago

@liuzhuang13 Thanks for the response! Excellent insight, so if I understand correctly: because each layer's output is fed to many upper layers, putting BN first lets each of those upper layers learn its own scale and shift for the shared features before convolving, whereas a Conv-first order would force every upper layer to convolve exactly the same activations.

I think you could mention this more in the paper. Reading it more closely, you do reference the Microsoft paper but don't comment on this point.

Thanks again!

liuzhuang13 commented 7 years ago

If by "upper layers" you mean "deeper layers" (layers farther from the input), then I think we understand it in the same way. Thanks for the suggestion! If there is a newer version of the paper, we'll consider discussing this point more.