Closed: gaopeng-eugene closed this issue 7 years ago
Both questions are more relevant to the caffe-tensorflow repository: https://github.com/ethereon/caffe-tensorflow. In short, the two Caffe layers (BatchNorm and Scale) are merged into a single TensorFlow layer. The padding option used throughout the code is 'SAME'; since TensorFlow does not allow arbitrary padding, this may also explain the differences in performance.
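For illustration, here is a minimal TensorFlow sketch of both points. It is not the repository's actual code: the function name `fused_batch_norm` and the variable names (`mean`, `variance`, `gamma`, `beta`) are hypothetical stand-ins for the converted Caffe parameters.

```python
import tensorflow as tf

# In Caffe, the BatchNorm layer supplies the running mean/variance and the
# separate Scale layer supplies the learned gamma (scale) and beta (offset).
# TensorFlow folds all four into a single op, so the two Caffe layers map
# onto one call (a sketch; names are illustrative, not from the repo):
def fused_batch_norm(x, mean, variance, gamma, beta, epsilon=1e-5):
    # offset/scale correspond to Caffe's Scale layer parameters.
    return tf.nn.batch_normalization(x, mean, variance,
                                     offset=beta, scale=gamma,
                                     variance_epsilon=epsilon)

# 'SAME' padding, as used throughout the code. TensorFlow does not accept
# Caffe's arbitrary per-layer pad values, so the spatial alignment can
# differ slightly from the original model.
x = tf.random.normal([1, 32, 32, 64])
filters = tf.random.normal([3, 3, 64, 64])
y = tf.nn.conv2d(x, filters, strides=[1, 1, 1, 1], padding='SAME')
```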
Thank you.
In the original Caffe code, there are a batch normalization layer and a scale layer. Which is the corresponding scale layer in your TensorFlow implementation? Another question: how do you deal with the padding problem in your code?