MarvinTeichmann / tensorflow-fcn

An Implementation of Fully Convolutional Networks in Tensorflow.
MIT License

Possible bug in the upsampling layer? #14

Closed patrickrbrandao closed 7 years ago

patrickrbrandao commented 7 years ago

Hello there,

I'm not sure if what I'm saying is correct because I can't figure out how conv2d_transpose works. However, in your get_deconv_filter function you are not filling the bilinear filter into all channel combinations — is that intentional?

More specifically, when you do this:

```python
for i in range(f_shape[2]):
    weights[:, :, i, i] = bilinear
```

you are only filling "a diagonal" of filters — is that how it is supposed to be? Or should it be something like this:

```python
for i in range(f_shape[2]):
    for j in range(f_shape[3]):
        weights[:, :, i, j] = bilinear
```

Again, I'm not sure whether this was intentional because I do not know how conv2d_transpose works.

Patrick

MarvinTeichmann commented 7 years ago

The weight is a 4-D tensor mapping k input channels to k output channels. The layer is supposed to upsample each channel independently, thus the i-th output channel should become an upsampled version of the i-th input channel. The value of the i-th output channel should not be influenced by the values of any other input channel.

This is where the diagonal comes from. A bug is always possible though ;).
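To illustrate the point, here is a minimal NumPy sketch of a bilinear deconvolution filter built the way the repo does it (the kernel formula mirrors get_deconv_filter; the shape layout `[height, width, out_channels, in_channels]` matches what tf.nn.conv2d_transpose expects). Because only the diagonal entries `weights[:, :, i, i]` are filled, every off-diagonal filter is zero, so no channel mixing can occur:

```python
import numpy as np

def bilinear_upsample_weights(f_shape):
    """Build a 4-D transposed-conv filter of shape
    [height, width, out_channels, in_channels] that upsamples
    each channel independently with a bilinear kernel."""
    width, height = f_shape[0], f_shape[1]
    f = np.ceil(width / 2.0)
    c = (2 * f - 1 - f % 2) / (2.0 * f)
    bilinear = np.zeros([width, height])
    for x in range(width):
        for y in range(height):
            # Classic bilinear interpolation kernel, peaked at the center.
            bilinear[x, y] = (1 - abs(x / f - c)) * (1 - abs(y / f - c))
    weights = np.zeros(f_shape)
    # Fill only the diagonal: output channel i reads only input channel i.
    for i in range(f_shape[2]):
        weights[:, :, i, i] = bilinear
    return weights

w = bilinear_upsample_weights((4, 4, 3, 3))
# Cross-channel filters are all zero, so channels stay independent:
print(np.count_nonzero(w[:, :, 0, 1]))  # → 0
```

Filling every `(i, j)` pair instead, as suggested above, would sum all k input channels into each output channel, which is not what upsampling should do.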

patrickrbrandao commented 7 years ago

Makes sense, thanks for explaining it to me.