Yes, we found that the current order typically gives higher accuracy. The only difference between the two orders in DenseNet is that, with BN first, the first BN layer has scaling and shifting parameters that provide later layers with activations at different scales. If we use Conv first, the convolutions in different subsequent layers are forced to receive the same activations, which may not be a good thing for training.
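To make the point above concrete, here is a minimal NumPy sketch (not DenseNet itself; the simplified `bn` and the specific `gamma`/`beta` values are illustrative assumptions). With the BN-ReLU-Conv order, each later layer applies its own BN, so the same shared feature map reaches each convolution rescaled differently; with Conv first, every subsequent convolution would receive the identical raw activations.

```python
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 8))  # one feature map, reused by several later layers

def bn(x, gamma, beta):
    # Simplified batch norm: normalize per feature, then apply the
    # learnable scale (gamma) and shift (beta).
    x_hat = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-5)
    return gamma * x_hat + beta

def relu(x):
    return np.maximum(x, 0.0)

# BN-ReLU-Conv (DenseNet / pre-activation order): each later layer owns its
# own gamma/beta, so the same `shared` feature arrives at each conv
# differently scaled and shifted.
to_conv_a = relu(bn(shared, gamma=2.0, beta=0.0))
to_conv_b = relu(bn(shared, gamma=0.5, beta=1.0))
print(np.allclose(to_conv_a, to_conv_b))  # -> False

# Conv-BN-ReLU order: the convolution would be applied directly to `shared`,
# so every subsequent conv receives exactly the same activations.
```

The learnable per-layer `gamma`/`beta` are what give each subsequent layer its own "view" of the shared features.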
@liuzhuang13 Thanks for the response! Excellent insight, so if I understand correctly: because each of the upper layers applies its own BN (with learnable scale and shift) before its convolution, the same shared features reach each upper layer at a different scale, which wouldn't happen if the convolution came first.
I think you could mention this more in the paper. Reading it more closely, I see you reference the Microsoft paper but don't comment on it.
Thanks again!
If by "upper layers" you mean "deeper layers" (layers farther from the input), then we understand it the same way. Thanks for the suggestion! If there's a newer version of the paper, we'll consider discussing this more.
I've seen that you use:

BN -> ReLU -> Conv(3x3)

in the normal case, or

BN -> ReLU -> Conv(1x1) -> BN -> ReLU -> Conv(3x3)

when using bottleneck. The question is: why? Most networks use e.g.

Conv -> BN -> ReLU

Why did you invert the order? Did you get better results this way?
Thanks in advance!